
OpenAI and Anthropic agree to submit new models to the U.S. government for safety assessment before launch

2024-08-30


IT Home reported on August 30 that artificial intelligence companies OpenAI and Anthropic have agreed to give the U.S. government access to major new AI models before they are publicly released, to help improve their safety.


Image source: Pexels

The U.S. AI Safety Institute announced on Thursday that the two companies have signed a memorandum of understanding with the institute, committing to provide access to models both before and after their public release. The U.S. government said the move will help the parties jointly assess safety risks and mitigate potential issues. The agency added that it will work with its British counterpart to provide feedback on safety improvements.

Jason Kwon, chief strategy officer at OpenAI, expressed support for the collaboration:

“We strongly support the mission of the U.S. AI Safety Institute and look forward to working together to develop safety best practices and standards for AI models. We believe the institute plays a key role in ensuring American leadership in the responsible development of AI. We expect that through our collaboration with the institute, we can provide a framework that the world can learn from.”

Anthropic also stressed the importance of building the capacity to test AI models effectively. Jack Clark, the company’s co-founder and head of policy, said:

“Ensuring AI is safe and trustworthy is critical to the positive impact of this technology. Through testing and collaboration like this, we can better identify and mitigate the risks posed by AI and promote responsible AI development. We are proud to be part of this important work and hope to set a new standard for the safety and trustworthiness of AI.”

Sharing access to AI models is a significant step at a time when federal and state legislatures are considering how to set limits on the technology without stifling innovation. IT Home noted that on Wednesday, California lawmakers passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which would require California AI companies to take specific safety measures before training advanced foundation models. The bill has drawn opposition from AI companies including OpenAI and Anthropic, who warned it could hurt smaller open-source developers; it has since been amended and is awaiting the signature of California Governor Gavin Newsom.

Meanwhile, the White House has been working to secure voluntary commitments from major companies on AI safety measures. Several leading AI companies have made non-binding commitments to invest in cybersecurity and discrimination research and to work on watermarking AI-generated content.

Elizabeth Kelly, director of the AI Safety Institute, said in a statement that the new agreements are “just the beginning, but they are an important milestone in our efforts to help responsibly govern the future of AI.”