WASHINGTON: In a significant development, the United States government has teamed up with seven top tech companies in the field of artificial intelligence, including Google, OpenAI, and Meta. The objective of this collaboration is to establish robust guardrails that effectively manage the risks associated with AI technologies.
The measures put forward in this partnership include security testing of AI systems, with the results made publicly available. The seven companies involved are Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.
Speaking at the White House on Friday after meeting with the companies, President Joe Biden emphasized that the commitments they have made are concrete. He said AI will affect the lives of people around the world and stressed the importance of managing the technology responsibly and safely.
Nick Clegg, Meta's president of global affairs, asserted that AI should be a force for the greater good of society. In order for this to happen, he stressed the need for transparency from tech companies regarding the inner workings of their AI systems and encouraged collaborative efforts between industry, government, academia, and civil society.
The agreement reached between the government and the tech companies calls for AI systems to undergo security testing by both internal and external experts before release, for watermarking so that people can tell when content has been generated by AI, and for regular public reporting on the systems' capabilities and limitations.
Moreover, the companies have committed to researching the risks associated with AI, including bias, discrimination, and invasion of privacy, reflecting the responsibility involved in developing and deploying the technology while recognizing its enormous potential benefits.
OpenAI said the watermarking commitment will require it to develop tools or APIs that let users determine whether a given piece of content was created with its AI systems. Google had already committed earlier this year to deploying similar disclosures.
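To make the idea concrete, here is a minimal Python sketch of what such a provenance check might look like from a user's side; the endpoint URL, the `check_ai_provenance` helper, and the response fields are hypothetical illustrations, not a published API from OpenAI or Google.

```python
import requests

# Hypothetical provenance-check endpoint -- purely illustrative; no such
# public API has been announced as part of these commitments.
PROVENANCE_URL = "https://api.example-provider.com/v1/provenance/check"


def check_ai_provenance(content: str, api_key: str) -> dict:
    """Ask a (hypothetical) provider endpoint whether `content` carries the
    provider's watermark, i.e. whether it was likely generated by its AI."""
    response = requests.post(
        PROVENANCE_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"content": content},
        timeout=10,
    )
    response.raise_for_status()
    # Assumed response shape: {"watermark_detected": bool, "confidence": float}
    return response.json()


if __name__ == "__main__":
    result = check_ai_provenance("Text of unknown origin.", api_key="demo-key")
    print("Watermark detected:", result.get("watermark_detected"))
```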
Additionally, Meta announced its decision to open-source its large language model, Llama 2, giving researchers free access, in contrast with OpenAI's closed approach to GPT-4.
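For researchers, getting started with Llama 2 typically looks something like the sketch below; it assumes access through the Hugging Face transformers library and the `meta-llama/Llama-2-7b-hf` checkpoint (an assumed distribution route and identifier, subject to Meta's license terms), rather than anything specified in the announcement.

```python
# Minimal sketch of loading Llama 2 for research use, assuming the weights are
# obtained via the Hugging Face Hub after accepting Meta's license terms.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain in one sentence what an AI watermark is."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```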
Overall, this collaborative effort between the US government and the leading tech companies marks a crucial step in ensuring responsible and accountable AI development and deployment for the benefit of society at large.