End of the World

Lessons from Icarus: A Sermon about AI (sermon)

First Unitarian Universalist Society of Burlington

March 22, 2026

Reverend Karen G. Johnston, Senior Minister

[The “Time for All Ages” that was part of this worship service shared the story of Icarus, a boy whose father gave him a new technology to free them both, but Icarus did not heed his father’s instructions and flew too close to the sun, ultimately dying. The reading for this service was an excerpt from the essay by Anthony Doerr called “Empty Pockets.”]

Since I am no expert on AI, in this sermon on artificial intelligence (or AI), I may get some things wrong. Not on purpose, but because I am human.

But AI gets things wrong all the time, so there’s that.

In my role as your called minister, I approach every Sunday service as a conversation between and among us – the Worship team made up of many people, including myself but not only myself, and you, be you here in the sanctuary or watching the livestream.

Not a performance or production. Not a monologue. A conversation.

One that is impacted by those in the room. The baby crying. The spontaneous laugh. Natural light touching us just so. Eye contact during the stone ritual. This is true in those congregations with two services: same sermon, same order of service, and yet, the experience is palpably different.

Worship services are highly relational.

Earlier in this congregational year, the Worship Associates team explored the use of AI in planning and implementing worship. Our take-away was a covenant with each other and with you, the Congregation: we would not knowingly use AI tools to create content for our Sunday morning services unless it was as a means of accessibility. Some of us use AI in our personal or professional lives, by choice or compelled by professional circumstance. Even so, we made this decision because what happens here on a Sunday morning is deeply human and covenantal. It is a form of a relational promise.

AI would change that. Imagine, for a moment, that I delivered a sermon, one that moved you deeply, that connected you with the very insight you had been longing for. Or that evoked a standing ovation. What if, in the receiving line, you shared how a certain part really resonated with you and then I said, “Oh, that part? I got that from AI.”

Sunday worship is conversation that allows us to be in accountable relationship with each other. So, no AI.

Before we go any further, I want to introduce some pop culture icons of artificial intelligence in our society. Or maybe this pantheon is only from my life as a GenXer.

  • We start with Frankenstein, originally published in 1818, the story of the hubris of creating artificial life – here depicted in films made in 1931 and just last year. THE iconic cautionary tale.
  • There is 1984’s ominous Terminator, threatening to be back.
  • In the first season of Star Trek: The Next Generation, in 1987 we get to know Data, the sentient android who serves in Star Fleet and is therefore, inherently good. Towards the end of that first season, we meet his evil twin, Lore, embodying the heart of the quandary of all technology: that tools can be used for good and for evil.
  • Here is 2002’s movie version of Minority Report, where sophisticated computers analyze patterns, in this case of human behavior, predicting criminal activity in order to arrest individuals BEFORE they commit crimes. What could go wrong there?
  • There is Sonny in 2004’s I, Robot – a humanoid robot, not likely to be confused with a human.
  • Here is WALL-E from Pixar’s 2008 film – WALL-E is a lonely robot, living on an earth which humans have trashed and abandoned – see all that e-waste and other trash that has smothered the earth?
  • In 2013 there is the film called “her” in which the protagonist falls in love with an AI who is a disembodied voice on a computer screen.
  • A year later, in 2014, we have Ex Machina – in which the AI robot is not just a voice on the computer, but presents as human in a rather convincing way.
  • A few years later, the multi-season show, Westworld, where sophisticated robots serve the purpose of entertainment and distraction, yet attain consciousness. Again, what could go wrong?
  • How could I not include Janet, from the four amazing seasons of the series The Good Place? Is she Good? Bad? In control? Out of control? Able to love? All fair questions.
  • And this final image, lesser known, yet loved loyally by those familiar with the two-book solarpunk series by Becky Chambers – A Psalm for the Wild-Built and A Prayer for the Crown-Shy. Mosscap is a robot who meets the nonbinary tea monk, Dex, both of whom live on a planet where machines had become sentient and humans refused to grant them their freedom and a great cataclysm ensued. These two members of opposing remnant communities explore what friendship and possibility look like after near total destruction.

AI is no longer sci-fi. It’s already in our lives. For some of us, this is welcome. For others, it’s opening the door to Hal from 2001: A Space Odyssey.

How is AI already in our lives? If you open your smart phone or any app with facial recognition, that’s AI. Grammar check, auto-complete, spam-filtering: all AI. GPS: yup. Recommendations for you on your favorite streaming app: again, it’s a form of AI.

Among my Unitarian Universalist colleagues, there is a variety of opinions about AI. On one end, it can be described as thoughtful curiosity paired with enthusiastic exploration. On the other end, there is full-throated condemnation. In-between is a middle ground we might call caution fueled by curiosity and fear.

Once AI started popping up in conversations and in use here at First UU, I realized that I needed more than my not-particularly-well-informed opinions. So, I started taking a newly offered class designed just for UU ministers – I’m halfway through.

I’ve learned some basic stuff. Like, that most of what we currently call generative AI are actually Large Language Models (LLMs), and that the term artificial intelligence is more misnomer than not, because “intelligence” isn’t really the right word.

I’ve learned that LLMs – generative AI – hallucinate, which is the word to describe when generative AI invents books or articles; when it confidently gives wrong answers to factual questions; when it misquotes sources; or out-and-out fabricates statistics. These so-called hallucinations remind us that generative AI is not a truth-telling machine or a thing that can reason, but a pattern-recognition machine that is biased by the content it has been trained on.

Generative AI is good at summarizing, pattern recognition, translating (not just language, but complex concepts into accessible expressions), analyzing large data sets, and speed. Generative AI is not good at reasoning or ethical discernment, embodied wisdom, creativity, or relational accountability.

In the short time since generative AI has been accessible to the public, our use of AI has shifted. In 2024, the top use of AI was to “generate ideas.” In 2025, it was…[can you guess?]…therapy and companionship.

This is what happens when we have a pandemic of loneliness paired with a broken mental health care system.

Many say that the AI horse is out of the barn. That our task, rather than to insist on its being corralled again, is to figure out how to ethically go about riding it into the sunset.

How to maximize the benefits while minimizing the harm.

How to fly neither too close to the sun nor too close to the ocean mist.

The presence of AI in medical settings can have hugely beneficial impact. A recent study found that AI, all by itself, outperformed doctors in diagnosing complex cases and detecting cancer.

Yet, there’s the ecological damage associated with AI: the mining of rare earth minerals, the use of copious amounts of water. Each time you do an AI-assisted Google search, it uses ten times the energy of a traditional Google search. And while some of us are noticing this, there are horrible ecological damages and human rights abuses associated with the use of all the electronic devices that have become ubiquitous in our lives – smartphones, laptops. Is AI all that much different?

And I haven’t even mentioned the job loss.

Or the negative consequences on our neural networks when we rely on AI to do things we could do.

Or stealing the work of artists.

Or assisting the voracious war machine.

The more we use, the more we become inured, chasing shiny objects or wanting for our children a future suffused with AI-generated medical advancements, but not the evil that seems to come with it.

Just two days ago, Vermont’s own Senator Bernie Sanders released a video of him conversing with Claude, the Large Language Model AI assistant that Mary Beth referenced in her reflection.

There’s Bernie, sitting at a conference table, talking to Claude through a smart phone. He asks Claude about the threats of AI on democracy and privacy. It’s so weird to me, hearing this human-SOUNDING voice, with familiar inflections, responding to his questions. He asks for Claude’s opinion on legislative strategies to protect human beings from AI. It’s so weird. It’s so compelling.

It’s a 9-minute video and at the end, Bernie asks the question: what else about the threat of AI to privacy should the American people know? Here’s Claude’s chilling response:

I think the key thing is that privacy isn’t just a personal issue, it’s a democracy issue. When companies and governments have detailed profiles of millions of people, they have power over those people in ways most Americans don’t fully grasp. They can manipulate your choices, predict your behavior, and influence your thinking. So, this isn’t abstract. It affects whether democracy actually works. That’s why I think your focus on this is so important, Senator.

I am no Luddite – the colloquial term for someone who eschews the use of technology. I have a smartphone and a laptop. I preach from this iPad. I am not asking us to go back to phone booths on every other corner.

I did, however, recently learn that the historical Luddites were actually focused on protecting the humanity of workers who were being oppressed by new technological innovations being pressed upon them by the owning class. And I can get behind that.

I can get behind the protections we need to put in place to protect our planet and our planet’s democracies. Because, friends, I fear, we think we are building the android Data, when, in fact, we are building Lore.

We must not fly too close to the sun.