
OpenAI and Anthropic agree to submit new models to US government for safety assessment before launch

2024-08-30


According to media reports on Thursday, August 29th, Eastern Time, AI leaders OpenAI and Anthropic have agreed to let the AI Safety Institute under the US government evaluate the capabilities and potential risks of new AI models before they launch, and to jointly study methods for mitigating potential safety risks and ensuring these AI technologies do not have a negative impact on society.

The U.S. AI Safety Institute was established in 2023 under the Biden-Harris administration's executive order on AI. The institute is tasked with developing tests, evaluations, and guidelines to ensure that AI technology can innovate responsibly. In addition, under an agreement announced Thursday by the U.S. Department of Commerce's National Institute of Standards and Technology (NIST), the U.S. government will work closely with the UK's AI Safety Institute to provide feedback to these AI companies to help them improve their safety measures.

Elizabeth Kelly, director of the U.S. AI Safety Institute, said:

"Safety is critical to driving breakthrough technological innovation. These agreements are just a start, but they will be critical as we lead the future of AI responsibly."

Jason Kwon, chief strategy officer at OpenAI, expressed support for the collaboration:

"We strongly support the mission of the national AI Safety Institute and look forward to working together to develop safety best practices and standards for AI models. We believe the institute plays a key role in ensuring American leadership in the responsible development of AI. We hope that through our collaboration with the institute, we can provide a framework that the world can learn from."

Anthropic also said that building the ability to effectively test AI models is important. Jack Clark, the company's co-founder and head of policy, said:

"Ensuring AI is safe and trustworthy is critical to enabling the technology to have a positive impact. Through testing and collaboration like this, we can better identify and mitigate the risks posed by AI and promote responsible AI development. We are proud to be part of this important work and hope to set a new standard for the safety and trustworthiness of AI."

It is also worth noting that OpenAI, which is backed by Microsoft, is preparing a new funding round with the goal of raising at least $1 billion, which would value the company at more than $100 billion. Microsoft has invested $13 billion in OpenAI since 2019 and is currently entitled to 49% of the company's profits.

News of the new funding comes about a month after OpenAI revealed it was testing a new feature called SearchGPT, which combines AI technology with real-time search data, potentially allowing ChatGPT not only to answer questions but also to help users find answers online.