I Don’t Think, Therefore I Am Not

As I’m sure is the case for many people recently, more and more of my conversations end up being about AI. That is, if they’re not about the war in Iran and the price of petrol.

Most of the time the AI conversation is about how people are using it in their workplaces and how they’re seeing their workplaces change because of it. Sometimes it’s about a news story detailing the capabilities of a new model that’s been released, or the hundreds of billions of dollars being invested into it, or the ethical boundaries (or lack thereof) being drawn by the major companies, or senior personnel resigning due to this lack of boundaries, or some tone-deaf thing Sam Altman said.

Sometimes, though, it gets a bit more interesting and delves into where we see this all heading in 5 years, 10 years, 100 years, and what form of AI usage we think actually enriches the human experience.

Because what is good for the companies developing this technology is not necessarily good for the people using it.

There are obvious incentives for companies to encourage, or demand in many cases, that their employees use AI to ‘help’ them do their jobs, and there are obvious incentives for OpenAI, Google, Anthropic and others to encourage the general public to use their products as frequently and as deeply as possible. And like any new product that makes our lives easier, we generally adopt it with open arms.

But AI, as we know, isn’t like any other new product. It has the ability to replace that most human of qualities, the thing that has been our key evolutionary weapon and our way of deriving joy and meaning from the world around us. And that is our ability to think.

René Descartes noted in Discourse on the Method in 1637 that:

…while I was trying in this way to think everything to be false, it had to be the case that I, who was thinking this, was something. And observing that this truth – I am thinking, therefore I exist – was so firm and sure that not even the most extravagant suppositions of the sceptics could shake it.

…whereas if I had merely stopped thinking altogether, even if everything else I had ever imagined had been true, I would have had no reason to believe that I existed. This taught me that I was a substance whose whole essence or nature is simply to think.

To take the inverse of Descartes’ observation, my contention is that – if we don’t think, we are not. Or in other words, if we allow AI to do all of our thinking for us, we cease to exist as humans.

I should note at this point that I’m not ‘anti-AI’. I am ‘pro-human’. And I think this distinction significantly changes how we should view the issue.

The end goal, I believe, of any society should be to improve the quality of the lives of its members. And when new technologies are developed, this is the first question we need to ask ourselves – “How does this improve our quality of life?” Because it is absolutely not a given that all new technologies improve the quality of our lives. The most contemporary example is the last 20 years of social media steadily consuming more and more of our attention, and fuelling a steep rise in mental health issues in young people.

There are undoubtedly many cases where AI can improve our quality of life. I would say these broadly fall into two categories: low-level thinking and high-level thinking.

Low-level thinking is barely thinking at all, at least not consciously. An example of this in a work context might be data entry or other repetitive tasks. These are things that don’t add to our capacity for complex thought and reasoning.

High-level thinking, as I’m defining it, covers those things that are simply beyond the realm of human intelligence. AI has the ability to find connections between billions of disparate data points and draw conclusions and solutions at a speed that no human brain can. Medical research is a key example of this, where humans stand to benefit greatly.

Where I think AI is particularly dangerous to our quality of life is in the middle ground. It’s the mass of mid-level thinking that we outsource to AI that gradually destroys our mental capacity.

Mid-level thinking requires some effort, and we now have a tool available to us that can remove the need to make this effort. But we must think about what not making the effort costs us. Our brain is just like the rest of our body, in that if we don’t use it, we lose it. Outsourcing our thinking to AI is the equivalent of sitting on the couch watching Netflix instead of exercising. It’s a tempting, comfortable thing to do in the short term, but if we compound this behaviour day after day, our bodies and our minds become weak.

If we were to take a maximalist view and follow this trajectory to its conclusion, I picture a society not dissimilar to that of the “fitless humans” in WALL-E. A society where we have given in so completely to the constant drive for comfort that our bodies have lost the ability to move and our minds have lost the ability to think.

It is a problem of short-term comfort vs. long-term quality of life. And when the incentives for the companies so clearly point in one direction, it becomes a personal decision whether we drift along with that current, or whether we value our own minds enough to swim against it.