
OpenAI admits its o1 reasoning model increases the risk of creating biological weapons

2024-09-14


AI reasoning model o1

Phoenix.com Technology News, September 14 (Beijing time). OpenAI has admitted that its latest AI reasoning model, o1, "significantly" increases the risk of AI being misused to create biological weapons.

OpenAI's system card, a document that explains how the model works and what risks it poses, rated the new model as "medium" risk for issues related to chemical, biological, radiological, and nuclear (CBRN) weapons. This is the highest risk rating OpenAI has ever given one of its models. OpenAI said the rating means the model "significantly improved" the ability of experts to create biological weapons.

Experts say that more capable AI software poses a greater risk of misuse if it falls into the hands of bad actors; the o1 model, for example, has the ability to reason step by step.

Mira Murati, OpenAI's chief technology officer, told the Financial Times that the company was being "cautious" in releasing the o1 model to the public because of its advanced capabilities. She added that the model had been tested by so-called "red teams" (experts in various scientific fields). Murati said the o1 model performed far better than previous models on overall safety metrics. (Author: Xiao Yu)
