Frog in the Pot?

We know the story of the frog in the pot who doesn’t notice the water heating up until it’s boiling and he’s no more. Could that be happening with us and big data algorithms (AI)? Are we slowly relinquishing our own decision-making? 

First, let’s agree that few of us have managed to stay hidden from Amazon, Netflix, or Google. Every time we surf the web or type an address into Google Maps, algorithms covertly monitor us, analyzing our every decision. Then, in my case, they tell the Dansko and REI corporations that their advertisements to me should be about wellness, the outdoors, and comfortable clothes (with senior women as their models). We give little thought to this, except to comment that after we buy an umbrella, Google floods us with parasol and rain-gear ads. But to a corporation, this information is gold. 

Sometimes, we knowingly share our information for better recommendations. We want the algorithms to make decisions for us. Who hasn’t received emails suggesting you watch this or that movie “based on the movies you’ve watched”? But businesses have learned that self-reporting can be unreliable because people don’t always tell the truth. You may start watching Victoria because it seems that every friend has sung its praises. If your friend Justine, with a Ph.D. in British royal history, recommends it, it must be good. But to you, it’s boring. You give up and turn it off. Then you tell your friends that you thought it was incredible. 

In his book 21 Lessons for the 21st Century, Yuval Noah Harari writes that this can be solved if (or when) the tech giants start collecting “real-time data on us as we actually watch movies, instead of relying on our own dubious self-reports.” Algorithms would monitor which movies we watch through to the end and which ones we turn off halfway. So, “even if you tell the whole world that Gone with the Wind is the best movie ever made, the algorithm will know that you never made it past the first half-hour and never saw Atlanta burning.” That’s only the beginning; the data collection will continue and go deeper.

Engineers are working on software to detect human emotions. They won’t hook us up to monitors; all they need to do is add a camera to the television that tracks eye movements and facial muscles. Then, as cameras and software improve, algorithms will know how we react to different scenes: whether we laugh, cry, yawn, or take a bathroom break without hitting pause. “Next, connect the algorithm to biometric sensors, and the algorithm will know how each frame influences our heart rate, our blood pressure, and our brain activity.” As we watch the sexual tension between Offred and Nick in The Handmaid’s Tale, the algorithms detect a tinge of sexual arousal. A forced laugh while watching SNL (you didn’t hear or get the joke) lights up a different part of the brain. Biometric sensors can detect what we are barely aware of within ourselves.

Ok. But at least for now, one can say that on some or even many occasions, the Amazon and Netflix algorithms have made poor selections for us. Can’t argue with that. But the mistakes come from insufficient data and/or faulty programming, and those are fixable errors. The longer they monitor our behavior, the better the algorithms will perform. Again, Harari points out, “Amazon won’t have to be perfect. It will just need to be better on average than us humans. And that is not so difficult because most people don’t know themselves very well, and most people often make terrible mistakes in the most important decisions of their lives.” Ouch. That hurts. “Even more than algorithms, humans suffer from insufficient data, from faulty programming (genetic and cultural), from muddled definitions, and from the chaos of life.” 

Cathy O’Neil, in her book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, gives a human face to the casualties of faulty programming. She questions whether our blind faith in big data is well placed. Should mathematical models decide who is admitted to college, who gets insurance, or who is hired? All of this is taking place while the human resources department understands the process less and less. They cannot explain how the algorithms work, why you weren’t hired, or why Stephanie wasn’t accepted to the University of Michigan in spite of her excellent grades and perfect ACT scores. 

Our grandkids’ school districts start testing them in third grade, and the results are interpreted by algorithms. Meanwhile, many of the children of the tech giants (and the wealthy) leading the AI revolution go to Waldorf schools, a tech-free environment where students learn through music, writing, literature, legends, and myths. In other words, they are not subjected to stressful standardized tests from an early age.

From 21 Lessons: “In some countries and in some situations, people might not be given any choice, and they will be forced to obey the decisions of Big Data algorithms. Yet even in allegedly free societies, algorithms might gain authority because we will learn from experience to trust them on more and more issues, and we will gradually lose our ability to make decisions for ourselves.” GPS comes to mind.

What about the ethical decisions software engineers have to make, such as when programming cars? In one scenario, two kindergartners chase their ball right in front of a self-driving car. “Based on its lightning calculations, the algorithm driving the car concludes that the only way to avoid hitting the two kids is to swerve into the opposite lane and risk colliding with an oncoming truck. The algorithm calculates that there is a 70 percent chance that the owner of the car—fast asleep in the backseat—will be killed. What should the algorithm do?” It depends on what the data engineers write into the program. They could decide that two children’s lives outweigh one adult’s, and that’s that. Algorithms don’t have emotions; they follow the program one hundred percent. Would we buy a car programmed to save the owner, or one programmed to save the children? We may think our cars are already getting high-tech: lights and sounds warn us when other cars are too close, and a coffee cup flashes on the dashboard to indicate our need to pull over and rest. Understanding the thousands of decisions now made by algorithms is a high mountain to ascend. 
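To see how much hangs on the numbers the engineers choose, here is a toy sketch in Python (my own illustration, not anything from Harari’s book and nothing like real self-driving software). The probabilities and the “value weights” given to the owner and the children are invented for the example; the point is that changing a single weight flips the car’s decision.

    # Toy illustration only: a hard-coded weighting rule deciding "stay" vs. "swerve".
    # The probabilities and weights are invented; real autonomous-vehicle software
    # is vastly more complicated than this.

    def choose_maneuver(p_kids_killed_if_stay, p_owner_killed_if_swerve,
                        kids_at_risk, weight_owner=1.0, weight_pedestrian=1.0):
        """Return 'stay' or 'swerve', whichever minimizes expected weighted harm."""
        harm_if_stay = p_kids_killed_if_stay * kids_at_risk * weight_pedestrian
        harm_if_swerve = p_owner_killed_if_swerve * weight_owner
        return "swerve" if harm_if_swerve < harm_if_stay else "stay"

    # Harari's scenario: swerving gives the sleeping owner a 70 percent chance of dying;
    # assume staying on course almost certainly hits the two children.
    print(choose_maneuver(0.95, 0.70, kids_at_risk=2))                    # -> swerve

    # The same car, programmed to value its owner three times as much, stays in its lane.
    print(choose_maneuver(0.95, 0.70, kids_at_risk=2, weight_owner=3.0))  # -> stay

Whether the car protects the children or its owner comes down to a couple of numbers somebody typed into the program, which is exactly the ethical problem.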

Nobody seems to be suggesting we can make AI human-like. Our sons are not about to bring home a nice AI girl. Harari explains that although AI intelligence is likely to grow, it is unlikely ever to have consciousness. But just in case, he adds, complacency would be a mistake. So for every dollar we spend on warfare, we should spend another dollar on raising the consciousness of the human species. Perhaps those majoring in computer science should consider a minor in philosophy.

2 thoughts on “Frog in the Pot?”

  1. There are so many layers to the consequences of algorithms and AI. My biggest concern is how it will affect our kids and grandkids. Can you imagine living in a world without privacy or boundaries? How will they self-actualize when technology defines who they are for them? I think a career in psychiatry is the way to go!
    Thanks, Edith. Lots to think about.

  2. Love how you got me involved with the frog in the boiling water, and I stayed involved thinking about how I may be the frog! And I agree that every dollar spent on warfare should be matched with another dollar spent on raising the consciousness of the human species. You are a clever and thought-provoking writer, Edith!
