Joseph Vukov: Worried about AI? Here’s what you need to know.

The first time I used generative artificial intelligence, I felt like a kid at an amateur magic show. Is the card really floating in midair? The parents at this kind of show, of course, are less dumbstruck than the kids: The card is not floating but instead swinging on some string. It’s not magic. Not even a particularly good illusion. You simply have to know where to look.  

The same goes for generative artificial intelligence. Once you know where to look, even the most powerful AI stops looking like magic. No string here — instead, look at the AI’s training data. 

Training data is the information used to build an AI. After programmers feed an AI a massive diet of training data, the model learns to identify patterns in that data and uses those patterns to generate new output. The more data you feed an AI, the subtler the patterns it can recognize and reproduce. That’s why ChatGPT can churn out travel itineraries, B-level college essays and social media marketing campaigns.
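To make that idea concrete, here is a deliberately tiny sketch of pattern-learning in Python. It counts which words tend to follow which in a scrap of invented training text, then generates new text by sampling from those counts. Real systems such as ChatGPT use neural networks trained on vastly more data, but the learn-the-patterns-then-generate loop is the same in spirit.

```python
# Toy illustration of generative AI: learn patterns from training
# data, then generate new output that follows those patterns.
# The training text is invented; real models operate at vastly
# larger scale with neural networks, not word-pair counts.
import random
from collections import defaultdict

training_data = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)

# Learn the pattern: record which words follow which.
follows = defaultdict(list)
words = training_data.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# Generate: start with a word, then repeatedly sample a plausible successor.
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the rug . the dog"
```

Feed this model more text and it picks up more patterns; starve it and the output stays narrow. That, in miniature, is why training data matters so much.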

In all the hubbub around AI, it can be tempting to think that AI will eclipse us. That it will expand infinitely, until it can do all that a college-educated human can do — and more. That it will take over not only the jobs of data crunchers and coders and copy editors, but also poets and artists and high-level managers.  

We are probably right to worry about some of our jobs. But many predictions about AI are overblown. The technology faces crucial limitations.  

First limitation: An AI knows only the data on which it is trained. Even if you were to train an AI on the entire internet, it would still miss out on a lot: thoughts jotted down on a napkin; late-night conversations with a college roommate; that week in 2018 you spent camping in the Rockies; and the feeling of seeing your grandma after a long time apart. None of that is part of the AI’s world.

Second limitation: AI lacks critical thinking. Can an image-generating AI churn out several versions of a cat in a fedora painted in the style of Rembrandt? Yup. But can an AI discern which of the paintings is better than the others? No.

Yes, AI can generate incredible content. But it cannot evaluate the content it creates. At least not in the way you and I can.  

There’s a mistake many new AI users make. They assume it can simply replace entire swaths of human expertise, such as creating art, writing code or penning essays. That assumption is misguided. Will AI streamline tasks and eliminate some jobs? Likely, yes. Yet the most effective users of AI are those who are already experts in the relevant task.

In other words, AI can write code, craft text and generate images, but it is most effective if you already know what you are doing. For example, I have friends who write code, and they tell me that the code AI writes is good but needs to be consolidated and cleaned up by a human.

Likewise, as a writer, I believe that AI can be a helpful tool. It can generate ideas, word choices and metaphors. But for an undergraduate churning out a last-minute essay, AI will be far less useful. The essay won’t come together without someone who knows how to shape it.

Since I started writing about AI, I’ve been asked a lot about the Terminator. Are cyborgs going to take over? No. Yet we should still worry about AI. It is poised to take over large swaths of human activity and, in doing so, erode our individual and shared humanity.

The truth is that generative AI is only the tip of the iceberg. The influence and potential dangers of the AI revolution go far beyond the flashy, generative versions.

For example, AI has been making a splash in health care. Applications can discern subtle differences in radiology scans, triage patients, complete physicians’ notes and craft care plans for patients upon discharge. Used correctly, AI could deliver more effective health care. But used improperly, AI-powered health care could exacerbate problems in delivery, rob medicine of the human element and reduce our view of a person to a collection of data.

AI is also in Big Retail. You’ve likely bought a book on the recommendation of Amazon’s algorithm, viewed videos based on YouTube’s suggestions and clicked on an ad for a product you never would have looked up on your own. In all these instances, AI predicts your preferences. Scarier still, the AI helps shape your preferences in the first place, creating a desire and then immediately offering the opportunity to satisfy it. In each of these interactions, we lose a sliver of our humanity. We cede our desires to the algorithms. We become more materialistic and less free.
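For a feel of what “predicting your preferences” means in practice, here is a minimal sketch of one classic approach: recommend whatever was bought by the shopper whose history most resembles yours. The shoppers and products below are invented, and the real systems at Amazon or YouTube are far more elaborate, but the underlying logic is recognizable.

```python
# Toy recommender: suggest items bought by the most similar other user.
# All names and purchases here are hypothetical, for illustration only.
purchases = {
    "alice": {"hiking boots", "trail map", "water bottle"},
    "bob": {"hiking boots", "trail map", "camp stove"},
    "carol": {"novel", "reading lamp"},
}

def recommend(user: str) -> set[str]:
    """Return items the most similar other user bought that this user hasn't."""
    mine = purchases[user]
    # Similarity = how many items two shoppers have in common.
    neighbor = max(
        (u for u in purchases if u != user),
        key=lambda u: len(purchases[u] & mine),
    )
    return purchases[neighbor] - mine

print(recommend("alice"))  # {'camp stove'}
```

Notice that nothing here asks what Alice wants. The system infers it from behavior, hers and everyone else’s, which is exactly what makes the nudging so quiet.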

We become, in a word, less human.

AI does, indeed, threaten our humanity. Not in the form of a cyborg but with the promise of a funny YouTube video or a new pair of jeans.  

In the early days of the internet, when it was slow-moving and quirky, we couldn’t have imagined smartphones, streaming platforms and online banking becoming part of our daily lives. 

Similarly, AI is still finding its legs. Like the internet, it is poised to infiltrate our lives in myriad, unexpected ways. We cannot predict precisely how or where AI will take up residence in 50 years.

How to prepare for this kind of infiltration? By reflecting carefully on AI now. By identifying those areas of our lives we want to retain as human spaces and those we are comfortable ceding to the AI algorithms. By reflecting on our values, and on what it means to be human in the first place.

AI is here to stay. We need to ensure that humanity as we know it is here to stay as well. 

Joseph Vukov is a philosophy professor and associate director of the Hank Center for the Catholic Intellectual Heritage at Loyola University Chicago. He is also author of the new book “Staying Human in an Era of Artificial Intelligence.”
