Artificial Intelligence (AI) — a fascinating topic that I keep having trouble wrapping my head around, perhaps because I simply don’t know enough about it.
I just watched Her by Spike Jonze, a beautiful and fascinating film that I would wholeheartedly recommend. It made me think again about the insane speed of technological development, and about the question of whether we will ever create something that is smarter than us and will eventually overpower us. And in turn: would that be a bad thing, or should we simply regard it as the next step in Darwinian evolution?
The biggest issue in this whole debate is the idea of machines or operating systems developing consciousness and a capacity for independent reasoning. Combined with their ability to learn from experience, could this become a problem for the world as we know it now — a world in which we control nearly everything that’s artificial?
People have been exploring this potential path of technological development for years, especially in science fiction. Think, for example, of the supercomputer HAL from Kubrick’s 2001: A Space Odyssey. In his article “Is Google Making Us Stupid?” (2008), Nicholas Carr draws a parallel between what the internet is doing to our minds and the fate of HAL. Carr describes his “uncomfortable sense that someone, or something, has been tinkering with my brain.” He compares his feeling to the famous scene in which astronaut Dave Bowman disconnects the memory circuits that control HAL’s artificial brain. Carr states, “My mind isn’t going – so far as I can tell – but it’s changing. I’m not thinking the way I used to think.”
I’m sorry — I’m wandering off into an entirely different trend, one that is also very interesting, but we’ll save it for another time (I can suggest some reading on it if you’re interested).
Let’s get back to the relevance of 2001: A Space Odyssey for the future of AI. I guess the biggest question right now is this: once operating systems can develop independently, will we still be able to pull the plug, like Bowman does with HAL, if needed or desired?
Please note that I’m not arguing against AI, or for halting technological development altogether — it allows us to do amazing things in many different fields (more about AI in general in the video below). As I mentioned before, I’m just not quite sure yet where I stand on this issue. What I am sure of is that I find it a fascinating topic: the dazzling speed of technological advancement is starting to blur the line between science fiction and science prediction.
Curious to hear what you think about (the future of) AI. Sound off in the comments!