Anthropic, Google, Microsoft, and OpenAI Launch Frontier Model Forum for Safe and Responsible AI Development

Washington: Major players in the AI sector on Wednesday unveiled a group devoted to "safe and responsible development" in the industry. The Frontier Model Forum's founding members are Anthropic, Google, Microsoft, and OpenAI.

According to a statement released by Microsoft on Wednesday, the forum aims to promote and develop standards for evaluating AI safety while helping governments, businesses, policy-makers, and the general public understand the technology's risks, limits, and possibilities.

Additionally, the forum aims to establish best practices for tackling "society's greatest challenges," such as "climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats."


Membership in the forum is open to any organisation working on "frontier models" intended to advance machine learning technology and committed to the safety of its projects. The forum plans to collaborate with academics, NGOs, and governments to create working groups and partnerships.

Microsoft President Brad Smith said in a statement that "companies developing AI technology have a responsibility to ensure that it is safe, secure, and remains under human control."

Thought leaders in AI have emphasised the dangers of unchecked development and have called for more meaningful regulation in a field that some fear could bring civilization to an end. The CEOs of every forum member except Microsoft signed a statement in May urging governments and international organisations to treat "mitigating the risk of extinction from AI" as a priority on a par with averting nuclear war.

The CEO of Anthropic, Dario Amodei, cautioned the US Senate on Tuesday that AI is much closer than most people think to surpassing human intelligence, and he urged lawmakers to pass strict regulations to prevent nightmare scenarios such as AI being used to create biological weapons.

He echoed OpenAI CEO Sam Altman, who earlier this year warned that AI could go "quite wrong" in testimony to the US Congress.


Under the leadership of Vice President Kamala Harris, the White House established an AI task force in May. Last week it secured an agreement with the forum's members, along with Meta and Inflection AI, to permit third-party audits for security flaws, privacy risks, potential for discrimination, and other issues before releasing products onto the market, and to report all vulnerabilities to the appropriate authorities.
