
altman leaves openai safety committee: the panel can decide whether to release large models

2024-09-17


according to foreign media reports on september 17, openai ceo sam altman will leave the internal committee established by openai in may to oversee key safety decisions related to the company's projects and operations.

the safety and security committee will become an independent board oversight group chaired by carnegie mellon university professor zico kolter and will include quora ceo adam d'angelo, retired u.s. army gen. paul nakasone and former sony executive vice president nicole seligman, openai said in a blog post today.

they are all current members of openai's board of directors.

openai noted in the post that the committee conducted a safety review of o1, the company's latest ai model, while altman was still a member. the company said the group will continue to receive regular briefings from openai's safety teams and retains the authority to delay releases until safety concerns are resolved.

openai wrote in the post: "as part of its work, the safety committee will continue to receive regular technical assessment reports on current and future models, as well as ongoing post-release monitoring reports. we are building a comprehensive safety framework, based on our model release processes and practices, with clearly defined criteria for successful model releases."

altman's departure from the safety and security committee comes after five u.s. senators questioned openai's policies in a letter to him this summer.

nearly half of openai’s employees who once focused on the long-term risks of ai have left, and former openai researchers have accused altman of opposing real ai regulation in favor of policies that advance openai’s corporate goals.

to that point, openai has significantly increased its spending on federal lobbying, budgeting $800,000 for the first six months of 2024, compared with $260,000 for all of last year.

earlier this spring, altman also joined the department of homeland security's artificial intelligence safety and security board, which advises on the safe development and deployment of ai in the nation's critical infrastructure.

even with altman gone, there is little sign that the safety and security committee will make difficult decisions that would seriously affect openai's commercial roadmap.

notably, openai said in may that it would seek to address "valid criticisms" of its work through the committee, though what counts as a valid criticism is, of course, in the eye of the beholder.

in may, former openai board members helen toner and tasha mccauley wrote in an op-ed in the economist that they believed openai, in its current form, could not be trusted to hold itself accountable.

they wrote: "based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives."

and openai's profit incentives continue to grow.

the company is rumored to be raising more than $6.5 billion in new funding, which would value openai at more than $150 billion. to close the deal, openai will likely have to abandon its hybrid nonprofit corporate structure, which was designed to limit returns to investors in part to ensure that openai remains aligned with its founding mission: developing artificial general intelligence that benefits all of humanity.