How to talk about the future of AI

In our class discussions over the past two weeks, we’ve touched on the pros and cons of using algorithms and artificial intelligence for a variety of purposes. As the module progressed, our conversations grew darker and more focused on the negative aspects of such technology. Dr. Sample even asked us to come up with the fastest way to destabilize the world using algorithms! While this sort of critical pessimism is necessary for thinking freely, I believe many of the scenarios we discussed are unrealistic.

It seems unlikely to me that, in the near future, algorithms and AI will cause the downfall of humanity. A simple communication error triggering a nuclear war (see the September 1983 incident) strikes me as far more likely than an AI-enabled computer causing an economic collapse. So far, AI seems well-behaved and easy to control. Like most things, it could be manipulated and turned into a weapon, but I do not see this as a cause for alarm. Skepticism is always necessary when evaluating new technology, but we should not rush into doomsday-scenario talk.

Suppose I’m wrong and AI does have the potential to cause global catastrophe. What would that look like? We’ve talked about this a few times in class, and I’ve noticed that we are often tempted to use cinematic interpretations as a foundation for imagining such a scenario. But the creators of Black Mirror and the directors of Resident Evil have no better idea of what it would look like than we do. When we discuss ethical dilemmas and the pros and cons of a technology, our discussions should stay down-to-earth. I’m certain, for example, that the State Department and the Department of Defense gave no credence to the scenarios presented in Dr. Strangelove or Fail Safe when writing the US’s official policy on nuclear weapons.


Above: A screenshot of Dr. Strangelove (Peter Sellers).

Overall, I feel that AI has more potential for good than for evil. When we discuss both, we should avoid taking fictionalized best- and worst-case scenarios at face value, and instead use our own intuition and experience to fill in the gaps.
