OpenAI announces change in developer conference format and will not release GPT-5

2024-08-06

August 6 news: Last year, AI startup OpenAI held its first developer conference in San Francisco with much fanfare, introducing a number of products including the GPT Store (similar to Apple's App Store).

This year's event, however, will be relatively low-key. On Monday, OpenAI announced that it is transforming its DevDay developer conference into a series of developer-focused, participatory sessions. The company also confirmed that it will not release its next flagship model during DevDay, focusing instead on updates to its API and developer services.

An OpenAI spokesperson revealed: “We do not plan to announce our next model at the developer conference. We will focus more on introducing existing resources to developers and showcasing stories from the development community.”

This year's OpenAI DevDay events will be held in San Francisco on October 1, London on October 30, and Singapore on November 1. Each event will feature workshops, breakout sessions, live demonstrations from OpenAI's product and engineering teams, and developer meetups. The registration fee is $450, and the registration deadline is August 15.

In recent months, OpenAI has adopted a steadier, more iterative strategy in generative AI rather than pursuing breakthrough leaps. The company has chosen to refine its existing tools while training successors to its current leading models, GPT-4o and GPT-4o mini. It has developed methods to improve overall model performance and to minimize how often models deviate from their intended behavior, but by some benchmarks OpenAI appears to have lost its technical lead in the generative AI race.

One reason may be that high-quality training data is becoming increasingly difficult to find.

Like most generative AI systems, OpenAI's models are trained on large amounts of online data, and many content creators have chosen to block access to their data out of concern that it will be plagiarized or used without due credit or compensation. According to Originality.AI, an AI content and plagiarism detection tool, more than 35% of the world's top 1,000 websites now block OpenAI's web crawler. A study by MIT's Data Provenance Initiative similarly found that about 25% of "high-quality" data has been excluded from the main datasets used to train AI models.

Research firm Epoch AI predicts that if the current trend of blocking data access continues, developers will run out of data for training generative AI models between 2026 and 2032. This, coupled with the fear of copyright lawsuits, has pushed OpenAI to sign costly licensing agreements with publishers and various data brokers.

OpenAI is said to have developed a reasoning technique that can improve its models' performance on certain problems, especially mathematical ones. Mira Murati, the company's chief technology officer, has promised that future OpenAI models will have "PhD-level" intelligence. That prospect, while promising, comes under enormous pressure: OpenAI is reported to have spent billions of dollars training its models and hiring highly paid researchers.

Time will tell whether OpenAI can achieve its ambitious goals while weathering numerous controversies. Regardless, a slower product cycle may help rebut critics who claim that OpenAI has neglected AI safety in its pursuit of ever more powerful generative AI technology. (Xiaoxiao)