‘AI and the Human Future’ presented at Marshalltown Public Library

T-R PHOTO BY DORIE TAMMEN Scott Samuelson of the Philosophy and Religious Studies department at Iowa State University presented ‘AI and the Human Future’ at the Marshalltown Public Library on Saturday afternoon.
The Marshalltown Public Library, Humanities Iowa, and the Iowa State Historical Society co-sponsored a thought-provoking program entitled “AI and the Human Future” on Saturday at the library. Scott Samuelson, with Extension and Outreach in the Philosophy and Religious Studies department at Iowa State University, was the presenter.
He spoke about the ethical, moral, political, and even environmental concerns related to artificial intelligence.
Samuelson defined AI as “a machine-based system that can, for a given set of human defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments with sufficient reliability.”
In 1985, Garry Kasparov, the Russian chess grandmaster, played chess against the 32 best chess computers in the world. He won against all of them. In 1996, Kasparov defeated “Deep Blue,” an IBM chess computer. The following year, Deep Blue defeated Kasparov. Newsweek magazine’s headline on the story was “The Brain’s Last Stand.”
Move forward to 2005, when Playchess.com hosted a wide-open tournament. Some participants were Grand Masters, some were amateurs, and computers were allowed. The winners were two American amateur players who used three chess computers.
Kasparov concluded from this that a weak human plus a machine with a better process was superior to a strong computer alone, and, surprisingly, also superior to a strong human plus a machine with an inferior process.
Samuelson compared this to what happened after the invention of the camera. Suddenly, portrait painters were no longer in demand. As a result, painters had to re-envision art, and the art world was transformed. Furthermore, the camera opened up portraiture to everyone. Even amateurs could take photos. Mass media was transformed, as well. This can be democratizing, but also destabilizing, as seen by the Nazi party using mass media as propaganda.
Three AI roads are open to us: 1) Substitution: what things might AI do instead of us? 2) Collaboration: what things might be enhanced by human-AI collaboration? 3) Certified Organic: what things is it crucial for humans to do without AI? The big questions posed by AI are: who are we, what do we care about, and how can we reinvent ourselves and our institutions to focus on what matters? Do we want to rely on AI alone in medical diagnoses? How comfortable would we be with hospitals minus the human interaction we normally expect there?
“Good Old-Fashioned AI” (GOFAI) is an approach in which a specific task for a specific environment is programmed directly into a machine. An example is floor-cleaning robots. The problem is that it is not easy to program everything that might be encountered in most situations or tasks. How does one program the floor-cleaning machine to respond appropriately to every possible situation?
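The brittleness of GOFAI that Samuelson describes can be illustrated with a short sketch. The rules and situations below are invented for illustration; the point is only that a hand-written rule table has no answer for anything its programmers did not anticipate.

```python
# A minimal GOFAI sketch: every situation the robot can handle
# must be enumerated by hand in a fixed rule table.
RULES = {
    "open floor": "vacuum forward",
    "wall ahead": "turn right",
    "stairs ahead": "stop and back up",
}

def gofai_cleaner(situation):
    """Look the current situation up in the hand-written rules."""
    # Anything the programmers did not anticipate has no rule,
    # so the robot simply does not know what to do.
    return RULES.get(situation, "no rule: robot does not know what to do")

print(gofai_cleaner("wall ahead"))    # a programmed situation
print(gofai_cleaner("sleeping cat"))  # an unprogrammed one
```

The first call finds a rule; the second falls through, which is exactly the limitation of programming a specific task for a specific environment.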
Beyond that, there is also machine learning, whereby a goal is programmed into a machine and it figures out how to achieve it. In supervised learning, AI is given specific feedback to learn to do its task, something like learning from flashcards. In unsupervised learning, AI finds previously unspecified patterns and works from them, something like figuring out things on your own.
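The two learning styles in the flashcard analogy can be sketched in a few lines. The data and thresholds below are invented toy examples, not anything from the talk: the supervised learner is handed answers, while the unsupervised one must find structure in unlabeled numbers on its own.

```python
# Supervised learning: "flashcards" -- each example comes with its answer.
flashcards = [(1.0, "small"), (1.2, "small"), (8.9, "large"), (9.3, "large")]

def supervised_predict(x):
    """Answer with the label of the nearest labeled example (1-nearest-neighbor)."""
    return min(flashcards, key=lambda card: abs(card[0] - x))[1]

# Unsupervised learning: no answers given -- find a pattern on your own.
def unsupervised_groups(values, gap=2.0):
    """Split sorted values into clusters wherever a large gap appears."""
    values = sorted(values)
    groups = [[values[0]]]
    for v in values[1:]:
        if v - groups[-1][-1] > gap:
            groups.append([v])        # big jump: start a new, unnamed group
        else:
            groups[-1].append(v)
    return groups

print(supervised_predict(1.1))                     # uses the given answers
print(unsupervised_groups([1.0, 1.2, 8.9, 9.3]))   # discovers two groups itself
```

The supervised function can only repeat categories it was taught; the unsupervised one discovers that the numbers fall into two clusters without ever being told the clusters exist.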
Furthermore, there is Narrow AI, which is an AI system designed to do specific tasks, such as playing chess or cleaning a floor. Artificial General Intelligence (AGI) is akin to human-level AI or superintelligence, which can perform several different tasks, including ones for which it was not specifically designed, such as playing chess, answering questions, and driving a car.
We’ve all heard of the algorithms that are developed to generate AI’s intended behavior.
These algorithms are developed in academic, corporate, and government research environments by people with doctorates in mathematical fields. AI models are trained by data scientists or analysts, who may or may not have advanced mathematical expertise.
Training these models often requires a tremendous amount of energy and water, with environmental consequences.
Samuelson pointed out that AI products are usually sponsored by powerful organizations with a profit motive. Open-source AI, on the other hand, is freely available to all and offers transparency, community-driven ethics, and a counter to monopolistic control.
Next comes the “human versus machine” question. AI can already outperform humans on many tasks, but what are some things it can’t do, or doesn’t do well? We should not forget that AI lacks common sense, sometimes “hallucinates,” or makes things up, and its creativity isn’t truly exciting or revolutionary. It moves quickly but isn’t a quick study, and it lacks awareness of what it “doesn’t know.” AI can only fake EI, or emotional intelligence. (Do we really want AI to write a loving letter to our mom?) Finally, AI doesn’t suffer or love or eat, which Samuelson points out are pretty important to humanity. Human intelligence is biological, Samuelson says: we’re hungry, so we learn how to make food.
One challenge we face with the use of AI is over-trust in it. After all, it really isn’t especially intelligent! As we become more reliant on AI, we lose the skills needed to be good stewards of it, which Samuelson calls “deskilling.” There is also the fear of mass unemployment: will AI make more jobs irrelevant rather than creating new ones? AI might also empower evil if bad actors hack into it and use it for malicious purposes, and sometimes Big Tech itself is a bad actor.
A transportation case study Samuelson offered was the crashes of two Boeing 737 MAX jets in 2018 and 2019, caused by sensor malfunctions that misled an automated flight-control system. The first problem was that “AI makes mistakes.” The pilots were unable, or were not trained, to override the malfunctions, so the second problem was that “humans make mistakes.” The resulting risks of AI in this transportation case are that pilots become deskilled, pilots become less vigilant, fewer pilots are needed, and, finally, bad actors might hack the system and find ways to fool it.
Having said all of that and more, another major concern of many is resource depletion. AI consumes a tremendous amount of water and electricity. Do the benefits of AI outweigh the amount of resources it consumes? “The Economist” magazine has declared, “The world’s most valuable resource is no longer oil, but data.”
Next, there is the question of who should control our data. Should government power control it, or corporate power? In the U.S., corporate power holds the reins; in China, government power does. One could make the case that neither is a good option.
Samuelson ended the presentation with an endorsement of the humanities, remarking that we still need arts and crafts, and we still need deep connection to the world and to each other. As helpful as AI can be in some situations and environments, let’s not allow it to take our humanity away.