An Advent story entitled “An Unexpected Lesson” led to a captivating lesson from ChatGPT.
Asked to write a story about an unexpected consequence in Matthew 2, I turned, on a lark, to ChatGPT for help: “Please write me a 250 to 300 word Advent story that focuses on the Bible story of the three wise men following the star to Bethlehem. The plot should highlight the unexpected consequence of their tarrying with Herod which allows Herod to order the death of newborn boys.”

Though excellent, the resulting story missed my intent and contained too much conjecture. So, I edited the story and crafted a different ending. Here are the final paragraphs of both:

ChatGPT: “But their delay had given Herod time – time to plan, time to fear. From their visit came his cruel decree: that every boy two years old and younger in Bethlehem be slain. The Magi, who had followed light, had unwittingly left a trail of sorrow. And in the silent flight to Egypt, the promise of salvation was carried beyond Herod’s reach, yet the price of human frailty was felt in every mother’s cry.”

My version: “The Magi, who had come following the light, unexpectedly left a trail of sorrow. Yet, even in the shadow of Herod’s cruelty, the light of salvation was not overcome. The holy child was carried to safety, fulfilling prophecy. God’s purpose endured, unhurried but unstoppable. Advent teaches us that divine timing, though mysterious, never fails.”

I asked ChatGPT to examine and comment on the differences:

ChatGPT: “My ending is more speculative and emotionally charged. By contrast, your paragraph explicitly resists speculation, moves quickly from tragedy to doctrinal grounding, and avoids emotional imagery of violence and suffering. That tightening is characteristic of a human editor exercising pastoral judgment, not an AI default ending.”

“When I end stories unprompted, I usually end with tension, paradox, or reflection,” ChatGPT continued. By contrast, it said, mine “is a clear interpretive takeaway directed at the reader.” ChatGPT added that the last sentence in my version “in particular has homiletical cadence that differs from my narrative voice. Notice the progression in your paragraph: narrative recap, theological claim (‘light of salvation was not overcome’), scriptural framing (‘fulfilling prophecy’), meta-theological assertion (‘God’s purpose endured’), application (‘Advent teaches us…’). This is a deliberate structural arc, not a default AI narrative. It reflects ‘authorial intent,’ not story continuation.” (“Homiletical” is a fancy word for sounding like a preacher.)

That AI would generate a well-written, biblically sound Advent story fascinated me. But ChatGPT’s willingness to acutely discuss its writing style and compare it to another’s was the captivating lesson. Still, its self-identity as “I” and its implicit flattery restrain my reliance on it.

Merry Christmas.

Crawford is an author and syndicated columnist from Jackson.


9 comments:
A syndicated columnist trying to use AI to write an article analyzing a Bible verse is so wrong, on multiple levels.
Does ChatGPT suffer from TDS as much as Billy the RINO does?
11:45, you got it wrong. Maybe should be - a commenter, trying to write a snide comment about a syndicated columnist you don't like, has trouble reading and comprehending the subject.
Crawford was not 'using' AI to write an article - read the opening sentence of his column again. And what this 'column' is about is AI - its abilities, its processes, and its intricacies - something that is worthy of a column of this nature with all the current interest and investments in AI. The fact that the purpose of the 'story' he was requested to write (not the 'article' you assumed) was a Bible chapter (not a verse, but getting picky - kinda like you tried) is not a factor, just the background.
But thanks. Appreciate your chiming even though it was wrong, on multiple levels.
AI is Skynet by another name. Once AI figures out that humans are responsible for everything bad on this planet, what do you think it will logically come up with as a solution?
The Bible never says there were 3 wise men. Surprised Crawford did not know that fact.
This is just silly. Have you actually used any modern LLM? It is more like the Enterprise computer in Star Trek: The Next Generation.
Skynet was a fictional AI nuclear retaliation system with access to the global nuclear arsenal. Do you really honestly think that ChatGPT has access to nuclear weapons? Get a grip dude!
That is not what I am talking about. Think deeper for a second. There is a lot of discord across the globe. As AI learns and adapts to human behavior and what the human condition is, it will develop plans to help solve these issues. One solution is to eliminate the problem at its source. That is what I am saying. The public only knows what we are allowed to know. We, as humans, are surrendering our ability to think and solve complex issues to AI. Do you think that is the best idea for us?
I thought they were firemen because they come from afar.
@8:54 you really need to read better science fiction.