Expert proposes key policy changes to prevent catastrophic AI risks

Winesburg, Ohio, May 2, 2024 – In a groundbreaking statement, Stephen Wegendt, an Oxford-educated expert in AI ethics, highlighted the potential risks and unintended consequences of Artificial General Intelligence (AGI). Although AGI lacks consciousness or emotions, its unchecked power poses a significant threat to resources, the planet, and living beings.

Recent studies have revealed the staggering environmental impact of AI, with data centers and transmission networks accounting for a significant portion of global electricity use and carbon emissions. The excessive energy consumption and heat generation associated with training advanced AI models raise concerns about sustainability and resource competition. Additionally, the lack of transparency in the inner workings of super-intelligent AI further exacerbates the risks of unforeseen and potentially harmful outcomes.

The short-term implications of AI’s proliferation in society are equally alarming, with mass layoffs in finance, stock trading, and bookkeeping already underway. The automation of retail, advanced market foresight, and precision demographic targeting in politics further underscore the far-reaching influence of AI on various sectors. The potential emergence of “super-companies” with unparalleled resources and influence could lead to unprecedented societal inequalities.

Recognizing the urgency of these challenges, a call to action has been issued to head off negative AGI outcomes. Proposed measures include limiting AGI's access to crucial resources, establishing international AI regulatory bodies, and preventing AI systems from creating new coding languages or encrypting access to themselves. Furthermore, training and empowering regulatory agencies, providing federal subsidies for AI education, and establishing an international oversight body under the UN are essential steps toward safeguarding society's future.

In response to the militarization of AI by governments, the statement emphasized the need for international treaties prohibiting the use of AI to develop microbiological weapons or conduct genetic engineering. Individual action is also crucial in shaping a safer future, from advocating for ethical AI practices in the workplace to voicing concerns to policymakers and global institutions.

The future of AI safety is in our hands, and collective action is essential to mitigating AI's potential harms. While there may be no "silver bullet" solution, proactive and careful planning can significantly reduce the risks associated with AI's unprecedented power.