What is Artificial Intelligence (AI)? I searched for AI and my screen quickly filled with definitions from companies that make AI products: DeepAI, Google AI, Coursera, IBM, TechTarget, Microsoft Azure, and OpenAI, the young granddaddy of modern artificial intelligence.
Britannica’s brief definition includes most of the elements of other definitions. “Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.”
Think about “2001: A Space Odyssey,” the 1968 science fiction thriller, or “The Terminator,” the 1984 science fiction action movie. Both movies depict some form of AI threatening to conquer the world.
Vice President JD Vance planted America’s AI flag in February at a global AI summit in Paris, saying, “AI will make people more productive, more prosperous, and more free. The United States of America is the leader in AI, and our administration plans to keep it that way.”
President Trump introduced an action plan early on to secure America’s AI future. We’re in a race with the rest of the world, led by China, to harness the technology for the good of humanity. But AI is all brain and no ethics, so moral judgments must remain in human hands. In other words, AI should not decide who lives and who dies.
A group of Christian leaders wrote an open letter to President Trump recently thanking him for his leadership regarding AI. Their opening statement: “We, Christian leaders of the USA, call upon the Honorable Donald J. Trump to preserve American AI leadership, while also ensuring safe and responsible development.”
You know AI is dangerously beneficial when Christian leaders in America appeal to President Trump to ensure “safe and responsible development” of AI. The leaders clarified, “As people of faith, we believe we should rapidly develop powerful AI tools that help cure diseases and solve practical problems, but not autonomous smarter-than-human machines that nobody knows how to control.”
In an interview with ABC News in 2023, Sam Altman, CEO of OpenAI, said, “It is going to eliminate a lot of current jobs, that’s true. We can make much better ones.” Nevertheless, Altman has more recently written, “we are at the doorstep of the next leap in prosperity, the Intelligence Age. But we must ensure that people have freedom of intelligence, by which we mean the freedom to access and benefit from AI as it advances, protected from both autocratic powers that would take people’s freedoms away, and layers of laws and bureaucracy that would prevent our realizing them.”
Make no mistake, AI will grow more powerful over the coming years. “With great power comes great responsibility.” Many have attributed the phrase to Voltaire, an 18th century French writer and philosopher. But most Americans today will recognize Spider-Man, one of Stan Lee’s comic book characters, as the originator of that phrase.
The Christian leaders’ caution echoed Scriptural admonitions. “The spiritual implications of creating intelligence that may one day surpass human capabilities raises profound theological and ethical questions that must be thoughtfully considered with wisdom.” To be beneficial, knowledge and intelligence must always be managed with wisdom.
This is not the first time humankind has faced moral questions raised by our own drive for power. If it were not for God’s grace we would have all perished by now.
Daniel L. Gardner is a columnist who lives in Starkville, MS. You may contact him at PJandMe2@gmail.com.
5 comments:
AI is scary as hell. Be careful of what you wish for.
Do you actually read what you've written? And do you actually count on humans known for being unethical and lying to use AI ethically? Hint: if you lie to your wives and those who invest in your business, you might not be ethical.
DL needs to talk with Bill Crawford above. 2 AI articles in one day? AI taking over
AI is scary as hell when u realize the buffet of types and risks. But for me, it has filled in the missing holes of intelligence that we lack
In rural settings: medical, legal, professional advice. U just got to know how to ask the questions and re-ask so u can find what u want. U can’t think like u do in a normal conversation - u have to ask based on the answer given. What u label things may be different than the way the internet classifies things.
Here are ChatGPT’s thoughts on Daniel’s article. See who you trust more between the two:
Critique & Counterpoints
1. Over-reliance on Fear Tropes
Gardner resorts to Hollywood dystopias to stir anxiety (e.g., The Terminator). While evocative, such comparisons are simplistic. Real AI issues—bias, misinformation, surveillance—are subtler but far more immediate. The slasher-movie framing might grab attention, but it risks obscuring real-world harms like algorithmic bias in healthcare or finance.
⸻
2. Nationalistic “Race” Rhetoric
Quotes from Vance and Trump frame AI as a global arms race. But this binary approach—USA vs. China—ignores vital collaboration on ethics and safety. AI fallout doesn’t respect borders. Prioritizing national supremacy may actually weaken the shared governance needed to handle global risks.
⸻
3. Ethics ≠ Religiosity
Gardner leans heavily on Christian doctrine, which may resonate with some readers—but ethical frameworks surrounding AI are rich, secular, and multidimensional. Philosophers, ethicists, technologists, and diverse cultural traditions contribute valuable perspectives. Framing wisdom primarily in religious terms may inadvertently narrow the conversation and exclude non-Christian viewpoints.
⸻
4. Simplifies AI’s Dual Nature
Gardner mentions Altman’s forecast of job loss and prosperity, but doesn’t grapple with the complexity beneath it. Transitioning sectors, reskilling populations, social safety nets—these are critical policy issues. A more nuanced discussion would acknowledge that tech-driven prosperity isn’t automatic and requires active planning.
⸻
5. Calls for “Wisdom” Are Too Vague
His core message—“manage intelligence with wisdom”—is inspiring but lacks practical substance. What actions or policies does he envision? Stronger governance frameworks? Ethical auditing? Public literacy campaigns? Without specifics, “wisdom” risks being platitudinous.
⸻
6. Absence of Critical Voices
Despite citing Sam Altman and Christian leaders, the article omits critical voices: AI ethicists who warn about labor precarity, digital colonialism, or ecological impacts; or those advocating for regulation such as the EU’s AI Act. A balanced critique would include such viewpoints.
⸻
In Conclusion
Gardner makes valid points—AI demands ethical guidance, and humans must stay in control. But the article leans heavily on theatrical analogies, nationalistic rhetoric, and religious moralizing, while skirting real-world complexity or actionable proposals. A more robust approach would:
• Explore concrete governance strategies, like AI impact assessments or auditing.
• Embrace a plurality of ethical frameworks beyond religion.
• Reflect on socioeconomic disruptions with solutions for reskilling.
• Include dissenting voices advocating for regulation and public protections.
In short, “wisdom” is essential—but without clarity, diversity, and actionable detail, the call rings hollow.