news

Former OpenAI researcher warns his former employer: Unregulated AI will cause catastrophic harm

2024-08-24


IT Home reported on August 24 that, according to a Business Insider report this morning Beijing time, after OpenAI publicly voiced its opposition to California SB 1047 (the AI Safety Act), two former OpenAI researchers publicly pushed back against their former employer and issued a warning.

The California AI Safety Act would require AI companies to take measures to prevent their models from causing "serious harm," such as enabling the development of biological weapons capable of mass casualties or inflicting economic losses exceeding $500 million (IT Home note: currently approximately RMB 3.566 billion).

The former employees wrote to California Governor Gavin Newsom and other lawmakers that OpenAI’s opposition to the bill was disappointing but not surprising.

"We chose to join OpenAI because we wanted to ensure the safety of the 'incredibly powerful AI systems' the company was developing," the two researchers, William Saunders and Daniel Kokotajlo, wrote in the letter. "But we chose to leave because we lost trust that it would develop its AI systems safely, honestly, and responsibly."

The letter also noted that OpenAI CEO Sam Altman has publicly supported AI regulation on many occasions, yet when actual regulatory measures were ready to be introduced, he expressed opposition. "Developing cutting-edge AI models without adequate safety precautions poses a foreseeable risk of catastrophic harm to the public."