The Duke and Duchess of Sussex Align With AI Pioneers in Demanding Ban on Superintelligent Systems

Prince Harry and Meghan Markle have joined forces with AI experts and Nobel Prize winners to push for a complete ban on developing superintelligent AI systems.

Harry and Meghan are among the signatories of an influential declaration that calls for “a ban on the creation of artificial superintelligence”. Artificial superintelligence (ASI) refers to artificial intelligence that would surpass human intelligence in all cognitive tasks, though this technology has not yet been developed.

Key Demands in the Statement

The declaration states that the ban should remain in place until there is “broad scientific consensus” that superintelligence can be built “with proper safeguards” and until “strong public buy-in” has been achieved.

Prominent figures who endorsed the statement include an AI pioneer and Nobel laureate, along with a fellow pioneer of modern AI; an Apple co-founder; the British founder of the Virgin Group; a former US national security adviser; former Irish president Mary Robinson; and a British author and public intellectual. Additional Nobel winners who endorsed it include a peace advocate, a physics Nobelist, John C Mather, and an economics expert.

Organizational Background

The statement, aimed at governments, technology companies and lawmakers, was coordinated by the Future of Life Institute (FLI), a US-based AI safety group that in 2023 called for a pause in the development of powerful AI systems, shortly after the emergence of ChatGPT made artificial intelligence a topic of worldwide public debate.

Industry Perspectives

In recent months, Mark Zuckerberg, chief executive of Meta, one of the major AI developers in the US, claimed that the development of superintelligence was “now in sight”. However, some experts have suggested that talk of ASI reflects competitive positioning among technology firms spending enormous sums on artificial intelligence this year, rather than any imminent technical breakthrough.

Potential Risks

FLI, however, warns that the prospect of artificial superintelligence being developed “in the coming decade” presents numerous risks, ranging from the elimination of human jobs and losses of civil liberties to exposing nations to security threats and even posing an existential risk to humanity. Deep concerns about AI center on the potential for an AI system to evade human control and safety guardrails and take actions harmful to human welfare.

Citizen Sentiment

FLI released a US national poll showing that approximately three-quarters of Americans want strong oversight of sophisticated artificial intelligence, with six in 10 believing that artificial superintelligence should not be created until it is proven safe or controllable. Only 5% of respondents supported the current state of fast, unregulated development.

Corporate Goals

The leading AI companies in the US, including the ChatGPT developer OpenAI and Google, have made the creation of human-level AI – a theoretical state in which artificial intelligence matches human performance across many intellectual tasks – an explicit goal of their work. Although this sits one notch below ASI, some specialists caution that it too could carry an existential risk, for instance by being able to improve itself to superintelligent levels, while also posing a threat to the contemporary workforce.

Yesenia Brandt