Charting a Course: The Battle Against Unchecked AI Advancement
Imagine a world where artificial intelligence (AI) becomes so advanced that it attains god-like capabilities.
This may sound like science fiction, but it’s a concern that has led tech entrepreneur Daniel Colson to launch an audacious mission aimed at countering the rapid progress of AI.
Once a recipient of investments from OpenAI founder Sam Altman, Colson is now at the forefront of a new think tank.
His goal? To galvanize policymakers in Washington to address what he sees as an impending AI catastrophe.
Colson’s dramatic change of heart is rooted in his belief that leading AI experts are steering humanity toward a dangerous precipice.
He argues that these experts aim to amplify AI’s power by an astonishing billion times, potentially creating a “god-like” AI within just five years.
His proposed solution is both daring and draconian: he suggests restricting the construction of computing clusters that exceed a certain processing power.
Colson believes this could act as a vital regulatory measure to slow the rapid progress of super-intelligent AI. He warns that the current trajectory poses grave risks and demands immediate attention.
“I see that science experiment as being too dangerous to run,” Colson emphasizes.
Colson’s quest intersects with a shifting political landscape in the United States.
At thirty, he has established the Artificial Intelligence Policy Institute (AIPI) with an initial focus on polling.
Recently, AIPI conducted its first poll, which revealed a staggering statistic: 72 percent of Americans support measures to curb the unbridled advancement of AI.
Pointing out the lack of comprehensive public polling on AI policy, Colson contends that such surveys could reshape public discourse and influence governmental action.
His efforts have attracted a cadre of tech experts and policy analysts who share his concerns about AI’s unchecked growth.
Among AIPI’s advisors is Sam Hammond, an AI safety researcher who echoes Colson’s sentiments about the field being underexplored.
However, the most intriguing advisor may be Sean McElwee, a progressive pollster with significant ties to influential figures like the Biden administration and Sam Bankman-Fried.
McElwee’s advisory role remains undisclosed, adding an air of mystery to Colson’s initiative.
While AI safety advocates grapple with the rapid pace of technological advancement, Colson’s involvement in Rethink Priorities’ teleconferences signifies a growing consensus among researchers and activists.
This nonprofit organization, rooted in the philosophy of Effective Altruism, seeks to harness AI for the greater good.
Ironically, Colson now distances himself from Effective Altruism, tracing his disillusionment back to a pivotal moment at the University of Oxford in 2016.
There, Google DeepMind CEO Demis Hassabis sought to allay concerns about AI safety, leaving Colson feeling that the movement had been co-opted.
DeepMind, however, asserts that Hassabis remains dedicated to responsible AI deployment.
Colson’s journey led him to co-found Reserve, a cryptocurrency startup, with funding from Altman and Peter Thiel.
But he came to question the industry’s trajectory, setting him on a collision course with his former allies.
Colson’s vision for AI regulation is strikingly bold.
He proposes placing caps on AI models’ computational power, effectively limiting their potential.
Colson suggests that Congress could mandate caps at a scale significantly lower than current benchmarks, thereby, he hopes, slowing the race towards “god-like” AI.
He envisions the next 18 months as a crucial window for enacting meaningful legislation.
As Colson’s crusade unfolds, the collision of ethics, power, and technology prompts essential questions about AI’s trajectory.
Will his rallying cry steer humanity away from the precipice of an AI-driven apocalypse, or will his efforts become a footnote in the chronicles of resistance?
Considering how fast AI is developing, it may already be too little, too late.
Read the original story here: Politico