Multi-nation agreement seeks cooperation on development of 'frontier' AI tech


The U.S. and other countries signed an agreement to collaborate and communicate on “frontier” artificial intelligence (AI), aiming to limit the risks the technology presents in the coming years. 

“We encourage all relevant actors to provide context-appropriate transparency and accountability on their plans to measure, monitor and mitigate potentially harmful capabilities and the associated effects that may emerge, in particular to prevent misuse and issues of control, and the amplification of other risks,” reads the Bletchley Declaration, signed by 28 countries, including the U.S., China and members of the European Union. 

The international community has wrangled with the problem of AI, trying to balance the obvious and emerging risks associated with such advanced technology against what Britain’s King Charles III called the “untold benefits.” 

The Bletchley Declaration therefore lays out two key points: “identifying AI safety risks” and “building respective risk-based policies across our countries to ensure safety in light of such risks.”

EXPERT SAYS BIDEN ADMIN’S AI SAFETY INSTITUTE NOT ‘SUFFICIENT’ TO HANDLE PITFALLS

The U.S. and the United Kingdom have already announced the establishment of institutes dedicated to these very tasks. 

Vice President Kamala Harris, Prime Minister Rishi Sunak and Italian Prime Minister Giorgia Meloni are among the leaders participating on day two of the AI Safety Summit at Bletchley Park in the U.K. on Thursday, Nov. 2, 2023. (Tolga Akmen/EPA/Bloomberg via Getty Images)

The British institute, announced Friday, will serve as a potential global hub for “international collaboration on… safe development.” The institute will also seek to work with leading AI companies, including those in the U.S. and Singapore, to help avoid potential risks. 

The institute will “carefully test new types of frontier AI before and after they are released to address the potentially harmful capabilities of AI models, including exploring all the risks, from social harms like bias and misinformation, to the most unlikely but extreme risk, such as humanity losing control of AI completely.”

WHAT IS ARTIFICIAL INTELLIGENCE (AI)?

President Biden signs an executive order focused on government regulation of artificial intelligence at the White House on Monday, Oct. 30, 2023. (Demetrius Freeman/The Washington Post via Getty Images)

British Prime Minister Rishi Sunak also committed just shy of $500 million to the AI sector to bolster the country’s development efforts – a significant increase over its initial $125 million investment pledge for new computer chips. The investment aims to spur innovation and keep the U.K. at the forefront of the sector, according to The Telegraph.

The United Kingdom has sought a leading role in the development and regulation of AI technology, and it made that clear by holding the first international AI Safety Summit at Bletchley Park, where Alan Turing and his fellow codebreakers built machines to crack German ciphers during World War II.

World leaders and tech industry experts pose for a photo during the AI Safety Summit at Bletchley Park, Britain, Nov. 2, 2023. (Leon Neal/Pool via Reuters)

Turing turned to the question of machine intelligence not long after his wartime code-breaking work, publishing “Computing Machinery and Intelligence” in 1950. In it, he considered whether machines could think, took up the argument from consciousness and rebutted common objections to the possibility of developing such intelligence. 

“It is fantastic to see such support from global partners and the AI companies themselves to work together so we can ensure AI develops safely for the benefit of all our people,” Sunak said in a press release about the AI Safety Institute’s establishment. “This is the right approach for the long-term interests of the U.K.”

EXPERTS DETAIL HOW AMERICA CAN WIN THE RACE AGAINST CHINA FOR MILITARY TECH SUPREMACY

Researchers from the Alan Turing Institute and Imperial College London “have also welcomed” the institute’s launch, according to the prime minister’s office. 

Elon Musk speaks with other delegates at the AI Safety Summit at Bletchley Park in Britain on Nov. 1, 2023. (Leon Neal/Pool via Reuters)

After the public release of ChatGPT from Microsoft-backed OpenAI, imaginations ran wild over both the positive and negative potential of the technology, with some professing concerns about a possible “Terminator” future.

Tesla CEO and X owner Elon Musk said earlier this year that the probability AI “goes wrong and destroys humanity” is “small” but “not zero,” though he did not explain how that might happen. 


The Bletchley Declaration aims to ensure that does not happen, stating a resolve to “sustain an inclusive global dialogue that engages existing international fora and other relevant initiatives and contributes in an open manner to broader international discussions, and to continue research on frontier AI safety to ensure that the benefits of the technology can be harnessed responsibly for good and for all.”
