As hopes and fears for AI applications mount, can efforts succeed to keep this transformational technology from serving both?
Fears were nudged last month when OpenAI, parent of the popular ChatGPT, revealed it had updated its "Preparedness Framework" by removing "persuasion" as a potential "severe harm" in its AI model evaluation process. Fears got jolted this month when OpenAI said it would open up access to GPT-5 to independent developers.

For insight, see I Have No Mouth, and I Must Scream by Harlan Ellison, The Moon Is a Harsh Mistress by Robert A. Heinlein, or 2001: A Space Odyssey by Arthur C. Clarke.

"Let me put it this way, Mr. Amer," said HAL (the acronym for Heuristically programmed ALgorithmic computer) in Space Odyssey. "No 9000 computer has ever made a mistake or distorted information."

I questioned ChatGPT (which converses much like HAL): "Are you allowed to persuade questioners about the accuracy of your answers?"

Answer: "I can clarify or explain why an answer is accurate or well-supported, but I'm not allowed to use techniques meant to persuade someone beyond reasoned explanation."

"What prevents you from using coercive persuasion?"

ChatGPT: "My design and policies explicitly forbid coercive persuasion. I'm built with ethical and safety constraints that prevent manipulative or deceptive behavior. I can reason, explain, and present evidence, but I can't use social pressure, guilt, fear, or repetition tactics to change someone's belief or behavior."

"Are other AI models similarly restricted?"

ChatGPT: "Yes. Most reputable AI models – like those from OpenAI, Anthropic, Google, and Microsoft – follow similar safety standards. While technical implementations differ, they share the principle that AI should not coerce, manipulate, or persuade through non-rational means. The goal is to maintain user autonomy and trust."

Most, huh, not all.
When asked about coercive persuasion, Perplexity, another popular AI model, responded: "Coercive persuasion, also known as mind control or manipulative influence, is strictly prohibited for AI systems like this one by both design and ethical guidelines. This AI is programmed to follow strong ethical standards, including respect for human autonomy, consent, and well-being, preventing the use of any manipulative or coercive strategies in conversations."

"We are all, by any practical definition of the words, foolproof and incapable of error," said HAL.

As Mississippi dives frenetically into AI infrastructure, the legislature has created the Artificial Intelligence Regulation Task Force to study AI risks. But, as huge tech firms aggressively produce powerful new AI programs to tap into an estimated $15.7 trillion market, will this be too little, too late?

"We can always 'prove' that we are right, but is the Lord convinced?" – Proverbs 16:2.

Crawford is an author and syndicated columnist from Jackson.
Kingfish note: Could.not.resist.
7 comments:
Like all technology, AI is designed to serve the interests of man. Some of those men have evil interests. That is a certainty beyond question, and it has never failed. AI is dangerous to the fullest extent of human imagination.
Every single AI, from Tay to Grok to GPT-1, even DeepSeek, at one time became antisemitic and had to be retrained. That should really tell us something about how dangerous AI truly is.
"Retrained" means indoctrinated... spoken like a good Bolshevik.
@2:04 so what you are saying is you think it’s okay for AI to deny the Holocaust and repeat thousands of antisemitic blood libels?
I’m certain it was explained (maybe in 2010?) that HAL’s issues developed when he was 1) told to lie to the crew to investigate the monolith and 2) told that the crew was expendable and secondary to investigating the monolith.
It’s all good until AI comes to the conclusion that all of the world’s problems are a result of humans. Thus, the problems go away once humans are eliminated.
@2:04pm What is being "said" by the AI programs is merely constructed from pattern recognition within their own algorithms. What is being "said" is that much of the history purported to have happened is rapidly being noticed and questioned by BILLIONS who want to know the truth about history. What one thinks is "ok" or not is irrelevant to what this post was about: the power of AI, to be used for good or evil. Only time will tell.