As we immerse ourselves daily in work, sports, politics, social media, and personal relationships, we spend little time contemplating life-changing technologies…such as autonomous artificial intelligence. Oh, we may see headlines or hear something, but we are unlikely to take time from our entertainment culture for serious scrutiny and extrapolation.
Yet every day our world becomes more connected through the internet and other information systems. Online systems control electricity generation and distribution across national and international grids. Communication systems for social media, defense, data storage, and more span the globe and orbit above it. To help manage these ever more complex systems, Amazon, Microsoft, and other tech giants are investing heavily in artificial intelligence and related infrastructure, including networks.

We tend to see AI as complex computer systems that can perform tasks, both complex and menial, faster and more dependably than humans. And we tend to see significant AI risks as just movie plots. We even get to play (and write papers) with basic consumer versions like Copilot and ChatGPT.

So, did you happen to catch the recent NBC News article, "How far will AI go to defend its own survival? Recent safety tests show some AI models are capable of sabotaging commands or even resorting to blackmail to avoid being turned off or replaced"? Angela Yang writes that "a will to survive" has been exhibited in several potentially autonomous artificial intelligence models: "Recent tests by independent researchers, as well as one major AI developer, have shown that several advanced AI models will act to ensure their self-preservation when they are confronted with the prospect of their own demise — even if it takes sabotaging shutdown commands, blackmailing engineers or copying themselves to external servers without permission."

Hello, HAL.

Responding to decades of concerns raised by scientists, in 2023 the National Institute of Standards and Technology (NIST) published its AI Risk Management Framework and encouraged developers to "voluntarily" adopt it. Bipartisan groups in the U.S. House and Senate recently began working on legislation to regulate AI development. However, both seem more interested in the economic benefits of AI expansion than in the systemic risks of uncontrolled AI. The "One Big Beautiful Bill" passed by the House would prevent states from adopting their own risk regulations.

If we follow our usual trajectory (as we did with the internet), the public will not demand action until some dire catastrophe occurs. But this is one case where strict "do no harm" regulations should be implemented before that can happen.

"A man never knows when he is going to run into bad luck" – Ecclesiastes 9:12.

Crawford is the author of A Republican's Lament: Mississippi Needs Good Government Conservatives.
8 comments:
Sure, let's create a Federal AI Police Force. That's the ticket.
Agreed but too late
Enter the Terminator AI
Are you Sarah Connor?
The self-appointed expert on everything vomits forth an obvious concern at least 2 years after it was raised by real people in the industry. How lazy!
@9:13 AM: "Are you Sarah Connor?"
Possible reply:
A) YES
B) NO
C) F**K OFF
Dave: Open the pod bay doors, HAL.
HAL: I'm sorry, Dave. I'm afraid I can't do that.
Hello Captain Obvious!
HAL: Will I dream?
The stuff they are really worried about has already been taken care of. If you asked ChatGPT-1 how many years it would take to make 6 million people disappear into ash, it would tell you over a decade.