Artificial. Intelligence?

In the spring of 1973, when I was twelve years old, I entered my school’s Science Fair, as I did each year. That particular year, I evaluated the effects of diet on mice. A month before the Fair, I bought two white mice and got them settled in separate cages. I fed Mouse A fresh water and Purina Mouse Chow, formulated with all the protein, fats, vitamins, and fiber required for a healthy mouse. I fed Mouse B Coca-Cola and Life Savers hard candies. Daily, I weighed each of them on a postal scale. I recorded their weights, as well as my observations about their fur, their activity level, and other criteria now lost to the mists of time. When the day of the Science Fair arrived, I had one frumpy, lethargic mouse and one bright and lively mouse.

I think of those mice when I consider the decline of Americans’ health in the fifty-plus years since that Science Fair. Type 2 diabetes has more than doubled. The incidence of certain cancers, such as colon, breast, kidney, liver, and pancreatic cancer, is rising dramatically in people under the age of 50.1 Three-quarters of us are overweight, and more than half of those overweight people are obese; that is more than double the share of Americans who were overweight or obese when Mouse B was eating Life Savers. In fact, childhood obesity has almost quadrupled in those fifty years. Even though the mechanisms are not completely understood, we know there is a strong connection between excess body fat and high blood pressure, heart disease, stroke, geriatric dementia, and all of the illnesses I mentioned above. We are getting better and better at treating some of these illnesses, but the fact remains that—like Mouse B—we are not thriving. Most of us are not our best possible biological selves.

What disrupted our biology, beginning fifty years ago? Among other things, our diet began to change—dramatically. Innovation and new technology made possible the production of more food at lower costs with less effort than at any time in human history. Pesticides increased yields in fruits and vegetables; antibiotics reduced losses in livestock herds. Preservatives kept food from spoiling due to mold, fungi, and bacteria. Synthetic antioxidants kept fats from becoming rancid. Plastic packaging, which weighs less than glass, kept transportation costs down. Society cheered these innovations, as American food production and transportation became extremely efficient. More food was affordable for more people. The percentage of household income spent on groceries dropped by half.

Not only did food become cheaper, it also became more convenient. Food scientists figured out how to mass-produce whole meals, ship them to market, and get them into household freezers, refrigerators, and pantries. The food was shelf-stable, and American consumers pronounced it good. This prepared, or processed, food liberated women. (Let’s be honest; fifty years ago family meals were prepared primarily by women.) Moms everywhere cheered canned soup and boxed mac-and-cheese and TV dinners. There was no need to grow a garden, let alone spend hot days at the end of August canning produce. Mom didn’t spend the first hour of the morning scrambling eggs and frying bacon, because the kids could pour themselves a bowl of Froot Loops and drop a Pop-Tart in the toaster. In the evening, dinner could be pulled out of the freezer and microwaved—within minutes! It seemed like a miracle.

But fifty years later, some doubts have crept in. Maybe some of those innovations carry trade-offs at best and are downright deadly at worst. But understand that we wanted these things. We still want them. It’s fashionable to point the finger at Big Ag or Big Food and cry, “You’re making us sick!” But Big Ag and Big Food only sell what Big Public will buy. We want cheap food. We want convenience. We want strawberries in January; we want eggs at a dollar per dozen; we want Oreos—even if eating those inexpensive, convenient things makes us less than our optimal biological selves.

Which brings me to artificial intelligence.

The term is so broad that I’m not sure we all agree on what artificial intelligence means. For the purpose of exploring the concept in this essay, I’ll define artificial intelligence as a combination of algorithms, software, and, in some cases, hardware that mimics human intelligence by combing through vast amounts of data or content and then manipulating that information to arrive at an answer or an action (an output). In this way, AI mimics human thinking, but AI can comb through (or process) those inputs faster and more thoroughly, by orders of magnitude, than the human mind can. It can also adjust its manipulation of inputs to improve its outputs without needing to be reprogrammed. In this way, AI mimics human learning.

The people most enthusiastic about AI, no matter what application is being considered, talk about it in terms of efficiency, convenience, and revolutionary breakthroughs. I feel like I’ve heard all this before—during the food revolution that began in my childhood. What could possibly be wrong with being able to produce food so efficiently that we could feed the world? What could possibly be wrong with liberating us from the drudgery of harvesting, preserving, and preparing our own meals?

To untangle my uneasiness about AI, I took a step back and considered the ramifications of the internet and social media. Just as we now know more about how our bodies are responding to our new (in evolutionary terms) diet, we now have a couple of decades’ worth of understanding about how our brains are responding to our shift from the printed word to the electronic word.2

The brain is composed of two kinds of cells—neurons and glial cells. Glial cells can be thought of as tech support; they feed and support the neurons. But the neurons, or nerve cells, are where the “thinking” takes place. The neurons communicate with each other, just like neighbors talking over a fence. The more frequently a pair of neurons communicate, the stronger their bond.

Imagine your fenced backyard borders the yards of several neighbors. You gossip frequently with Neighbor #1, so a path to his fence gets worn in the grass. One day, Neighbor #1 says something mean about your dog, and you decide to shun him. You begin to gossip more frequently with Neighbor #2. Pretty soon the grass grows back on your path to Neighbor #1’s fence, but you’ve worn a new path to Neighbor #2. A third neighbor moves in. You like her, so you begin to wear a path to her section of the shared fence. Now you’ve got two strong pathways. Neighbor #1 finally realizes the error of his ways, gives your dog a biscuit, and you resume your communication. The creation and breakdown and re-creation of pathways is never settled.

This is how our brains work. The more we do a certain task, or think a certain thought, the stronger the neural connection. If we abandon an action or a thinking pattern, that neural connection eventually fades, and its resources are rerouted in another direction when called upon. It takes surprisingly few repetitions, or hours of focus, to establish new neural connections. The brain is incredibly good at constantly rewiring itself. We say it is very plastic.

The other important thing to understand about our brains is how Short Term Memory (STM), Working Memory (WM), and Long Term Memory (LTM) differ and how they work together. Information first arrives in STM—the name of your new co-worker, the ad for prostate therapy that flashes across your screen, the pain from stubbing your toe, the fury at being cut off in traffic, the score of the baseball game at the bottom of the seventh inning, the crying baby at the next table in the restaurant. We are constantly bombarded with information, and we have evolved to notice change and “distraction.” (It was important that our early ancestors, as they wandered through the jungle, be “distracted” by the rustling ferns in time to side-step the lion as he pounced.)

Information in STM is passed to WM. Think of WM as a work space, such as a kitchen counter. Imagine you’ve been to the grocery store, and your husband is hauling in all the groceries and dumping them on the kitchen counter. You are picking up a few items at a time and moving them to the refrigerator, the freezer, or the pantry. Just as your two hands can carry only a few things at once, your WM can process only one to maybe four inputs at a time. WM is the brain’s bottleneck. I’ll come back to this. But first, let’s understand LTM.

Your LTM is where everything is stored. It’s your pantry, refrigerator, and freezer, but with effectively unlimited capacity. You cannot fill up your LTM. When you retrieve something from LTM, you move it back into WM, where you manipulate it to create or achieve something—a sentence, a new idea for the garden, a brilliant chess move, an insight about politics. Just as on a Tuesday evening you might pull tomato sauce, cheese, and basil from the pantry to make a pizza, and on a Thursday you might pull tomato sauce, basil, and pasta to make spaghetti, you can pull items from your LTM and reconfigure them into different patterns, or “schemas.” The variety and quality of the inputs and experiences you have stored in your LTM, and the frequency with which you pull them back into WM to manipulate them in different configurations, determine your “intelligence.” That’s what makes you uniquely you. That’s what makes you human. Your unique humanity lies in the schemas you build and rebuild over the course of your life.

But let’s return to WM—to the brain’s bottleneck. The kitchen-counter analogy is not apt in one very important respect. In your kitchen, even if the lemons and onions and milk and cereal are coming in from the car faster than you can put them away, eventually you’ll get to them. But in your brain, if you don’t recognize or notice or refresh the bits of information coming across WM, they will disappear. You have only a few seconds before they’re gone. It’s as if—poof!—all those lemons and onions and milk and cereal vanish, never to be stored in the pantry or refrigerator. So when you want to create a meal, there is nothing to pull back out onto the counter.

To sum up:

  • our brains are very plastic;
  • our brains have evolved to notice distractions;
  • our brains can only process one to four items at a time;
  • our brains can’t store—for future use—anything that hasn’t been processed;
  • what we store and process is what makes us uniquely human.

We have arrived at the heart of the snarl. How are our brains responding to the internet and social media, and what does that tell us about our relationship to artificial intelligence?

The internet and social media are designed to distract. When we read something on a laptop or smartphone, we are bombarded with other imagery on the screen, visual notifications crossing the screen, a toolbar of icons, hyperlinks in contrasting colors, short videos, and sound alerts. All of it overtaxes our WM. We are not as focused as we think we are. “Try reading a book while doing a crossword puzzle; that’s the intellectual environment of the Internet.”3 Internet consumption has also been compared to an alcohol-induced blackout; you’re alive, but you don’t remember a thing.4

Our brains are changing as a result of our shift from printed reading to screen reading, and we are getting pretty good at locating, categorizing, and assessing discrete bits of information5 (all things, I might note, that computers do well, and that we will never do as quickly or at as great a scale as they can). But because our WM can’t keep up—because we are distracted and not focused—we are not storing information. Therefore, we have less with which to assemble schemas, gain intelligence, build knowledge, solve problems, and acquire wisdom.

We are building strong neural connections for the tasks that computers can do better than we can, and we are losing neural connections for the very thing that makes us human.

And so we turn to AI as a way to efficiently and conveniently “solve our problems,” perhaps, in part, because our brains are no longer up to the task.

Perhaps, from a macro perspective, it doesn’t matter to the universe whether humans or machines solve problems. But from my micro perspective—from the me who is only me and is no one else—do I want to reduce or give up the very thing that makes me human? Am I willing to dull my life instead of enrich it? Flatten it instead of sharpen it? Are convenience and efficiency worth diminishing my best possible neurological self?

In a piece promoting the use of AI in the creative process, the following incident was held up as an example of the goodness of AI:6

An 84-year-old woman in India was on life support. Her husband, who is 92, was distraught. For all the obvious reasons, but for another one, too. He wanted to tell her how much she had meant to him, how wonderful their 60-plus years of life together had been. But he didn’t know how to say that in words. As it happens, his granddaughter, my friend’s daughter, works in A.I. She guided her grandfather through some A.I. prompts. Asked her grandfather some questions and entered them into ChatGPT. It produced a poem. A long poem. He said it perfectly captured his feelings about his wife. And that, on his own, he never would have been able to come up with the right words. He sat next to her, reading the poem, line by line. She died soon after. And he said it allows him to know he told her everything.

Far from filling me with gratitude for AI, that story made me profoundly sad. Was ChatGPT’s poetry what the dying woman wanted to hear? She’d been married to her husband for more than 60 years. She had to know he didn’t have one poetic neuron in his brain. Would not his silent presence have meant more than his artificial poetry?

But that’s the beauty of it. The 92-year-old man can use AI, and I don’t have to. We get to choose, to an extent. Not completely…but more than we realize. One man’s Hostess Twinkie is another man’s French pastry.

Which brings me back to my two mice. The better Science Fair experiment would have been to offer both diets to both mice. If they had been free to choose, which would they have chosen—Purina Mouse Chow, with all the protein, fats, vitamins, and fiber necessary for a healthy mouse? Or Coca-Cola and Life Savers?

[Photo: With my mice at the Science Fair in 1973]

__________

1 https://news.harvard.edu/gazette/story/2022/09/researchers-report-dramatic-rise-in-early-onset-cancers/

2 Nicholas Carr, The Shallows (New York, NY: W.W. Norton & Company, Inc., 2020, 2011, 2010).

3 Ibid., 126.

4 https://read.lukeburgis.com/p/everything-is-fast

5 Carr, The Shallows, 141–142.

6 https://thedispatch.com/article/artificial-intelligence-chatgpt-literacy-creative-process/
