
Everyone loves Anthropic

2024-08-12


OpenAI's "Game of Thrones" continues, this time with the departure of co-founder John Schulman. After nearly nine years at OpenAI, he has moved to competitor Anthropic.

John Schulman said that this move was not out of dissatisfaction with OpenAI, but to focus more on AI alignment research and return to the forefront of technology. He believes that Anthropic can provide a new perspective and research environment, which is more in line with his career development plan. This is similar to the reasons given by Ilya Sutskever, the chief scientist who left OpenAI, and Jan Leike, the head of super alignment.

In addition, OpenAI product manager Peter Deng has also resigned, and president Greg Brockman will extend his leave until the end of the year. Of OpenAI's 11 co-founders, only CEO Sam Altman, Wojciech Zaremba, and Greg Brockman (currently on extended leave) now remain. These departures have once again sparked industry discussion about OpenAI's trajectory, and Schulman is the second OpenAI executive, after Jan Leike, to jump to Anthropic.

In stark contrast to OpenAI's "turbulent" year, Anthropic has shown impressive momentum: it is not only attracting OpenAI talent, but its latest products have also been well received by users.

In an increasingly competitive market, Anthropic has gradually attracted more attention with its unique corporate structure and product philosophy. Claude's traffic still trails ChatGPT's by a wide margin, but its share has grown steadily over the past six months, with especially marked growth in the most recent period.


"Claude clocks off when its usage limit runs out"

Anthropic is an artificial intelligence startup founded in 2021, with a team that includes several former OpenAI employees. The company is led by siblings Dario and Daniela Amodei, who serve as CEO and president respectively. Both previously held senior positions at OpenAI but left to found Anthropic over disagreements about OpenAI's direction. Since Anthropic's founding, the two San Francisco-based companies have been in fierce competition to build the best AI models.

Anthropic's latest model is Claude 3.5 Sonnet, which sets new industry benchmarks in multiple areas, including graduate-level reasoning (the GPQA test), undergraduate-level knowledge (MMLU), and coding (HumanEval). It outperforms competing models across multiple evaluations while maintaining the speed and cost of the mid-tier Claude 3 Sonnet.

Claude 3.5 Sonnet ranked first in the Chatbot Arena coding category.

Claude 3.5 Sonnet's code generation capabilities have drawn particular praise. Many programmers say they can no longer write code without its help. Recently, the CEO of Y Combinator shared an article on X praising Claude 3.5 Sonnet, which received more than 4 million views.

In the article, the author wrote that Claude 3.5 Sonnet had significantly improved his productivity: he could now implement the technical parts of most popular applications about ten times faster than before. Architectural and infrastructure decisions still have to be made by hand, but tasks like building UI components are roughly 10x faster, making iteration very quick.

His workflow has three steps:

1. Think carefully about the feature and discuss it with Claude;

2. Write a basic spec for the feature (usually just a few sentences and bullet points) and iterate on it with Claude;

3. Give Claude all the relevant background and ask for the implementation (code).
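The workflow above could be scripted against Anthropic's Messages API. A minimal sketch follows; the `build_feature_prompt` helper, the example spec, and the file contents are illustrative inventions, not from the article:

```python
# Sketch: packaging step 3 of the workflow (background + spec -> request payload).
# build_feature_prompt is a hypothetical helper, not part of any SDK.

def build_feature_prompt(spec: str, context_files: dict[str, str]) -> list[dict]:
    """Assemble a Messages API payload: relevant background code first, then the spec."""
    background = "\n\n".join(
        f"--- {path} ---\n{source}" for path, source in context_files.items()
    )
    return [
        {
            "role": "user",
            "content": (
                "Here is the relevant background code:\n"
                f"{background}\n\n"
                "Feature spec (a few sentences and bullet points):\n"
                f"{spec}\n\n"
                "Please implement this feature."
            ),
        }
    ]

messages = build_feature_prompt(
    spec="Add a dark-mode toggle.\n- persist the choice in localStorage",
    context_files={"ui/theme.ts": "export const theme = { /* ... */ }"},
)

# With the official SDK (pip install anthropic), the payload would be sent like so
# (commented out here because it requires an API key and a network call):
# import anthropic
# client = anthropic.Anthropic()
# reply = client.messages.create(
#     model="claude-3-5-sonnet-20240620", max_tokens=2048, messages=messages
# )
```

Keeping the spec short and iterating on it in conversation, as the author describes, matters more than the exact payload shape.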

The CEO of ottogrid.ai said in a follow-up post that 50% of their code was written by Claude, a share he expects to rise to 80% next year. He added, "Not using Claude 3.5 to write code? Careful you don't get left behind by teams like ours."

Claude is not available, so I decided to take a day off

Claude clocks off when its usage limit runs out.

Even more striking: Erik Schluntz, an engineer at Anthropic, broke his right hand in a bicycle accident and could only type with his left hand. Using speech-to-text and Claude, he kept working, even writing more than 3,000 lines of code in a week.

Erik Schluntz also wrote an article to share his experience. He believes that the application of AI in the field of software development is showing a rapid development trend. He predicts that in the next 1-3 years, AI engineers will become a reality and be able to work autonomously and collaboratively. By then, creativity will become the only bottleneck.


Artifacts: Starting the AI Interaction Revolution

It is worth mentioning that Claude also launched a new interaction feature, Artifacts, which lets users run and debug code directly in the AI dialogue interface. Artifacts' main features include real-time code execution, interactive manipulation, visual preview, and cross-platform sharing. These capabilities let developers quickly validate ideas, iterate on prototypes, and easily share results.

Claude 3.5 Sonnet + Artifacts is a "game-changing product".

Artifacts provides developers with a more direct programming experience. Many developers say this feature opens up new possibilities for AI-assisted development and has the potential to change the current application development model.

At first glance, Artifacts may seem like an unremarkable update. Just a dedicated workspace, alongside a chat interface, that lets users manipulate and optimize AI-generated content in real time. But this seemingly simple addition could be the beginning of one of the most critical battlegrounds for AI in the coming years: the interface.

Because a big challenge in AI is not only to create smarter AI, but also to make it easy to use, intuitive and seamlessly integrated into existing workflows.

This is also where Anthropic is very different from competitors such as OpenAI. ChatGPT's new voice function is eye-catching, and Google emphasizes Gemini's ability to acquire and process knowledge, but Anthropic is aiming at a more fundamental problem: How to transform AI from a fancy chatbot to a real partner?

By creating a space where AI-generated content can be easily edited, optimized, and incorporated into existing projects, Anthropic is bridging the gap between AI as a tool and AI as a team member—a shift that has the potential to revolutionize how work is done across industries.

This also highlights a growing philosophical divide in AI development. OpenAI and Google seem to be caught in an arms race of model capabilities, competing to build the biggest and smartest AI. Anthropic, on the other hand, is playing a different game, focusing on practicality and user experience.

In an industry often accused of chasing benchmarks while ignoring real-world applications, Anthropic’s focus on user experience could set it apart. As companies work to integrate AI into their operations, solutions that are not only smart enough but also offer intuitive interfaces and seamless workflow integration will have a decisive advantage.

As the capability gap between models narrows, building an ecosystem around models is key to retaining customers. Especially in the field of programming, Artifacts provides developers with a new and more efficient workflow.

Of course, Artifacts is still in its infancy, and competitors are not going to sit idly by. It is foreseeable that as other companies realize the importance of rethinking the user interface, a lot of innovation will emerge in this field.

Anthropic has a solid release, while OpenAI is often criticized for over-promotion


“What happened to OpenAI will not happen to us”

OpenAI's earlier boardroom drama, in which the board of directors fired Sam Altman, was possible because of risks built into OpenAI's corporate structure: the company is governed by a non-profit board that is not accountable to the company's shareholders.

Anthropic is closer to a traditional company, with a board of directors that is accountable to shareholders. Still, it also adopts a non-traditional structure: it is not an ordinary corporation but a public benefit corporation (PBC), which means that in addition to its fiduciary duty to increase shareholder profits, its board has legal room to ensure that "transformative AI contributes to the prosperity of humanity and society." In other words, if the board chooses to prioritize safety over profits, it is harder for shareholders to sue Anthropic's board.

Anthropic has always been proud of its unique corporate structure, considering itself different from OpenAI. Anthropic has also emphasized to the media that what happened to OpenAI will not happen to Anthropic. However, Anthropic's structure is essentially an experimental design. Harvard Law Professor Noah Feldman, who served as an external consultant when Anthropic established its early governance structure, said that even the best design in the world may not work sometimes. But he has high hopes for Anthropic's success.

Beyond corporate structure, another major difference between OpenAI and Anthropic is Anthropic's structured approach to ensuring that its AI systems' behavior meets specific ethical standards and codes of conduct, which is a defining feature of the company.

Anthropic is more focused on the safety and controllability of artificial intelligence, and is committed to developing explainable, auditable, and steerable AI that serves humanity. To that end, when training Claude, Anthropic adopted a method called "Constitutional AI" (CAI), which differs from how OpenAI trains its GPT models.

Claude is given a set of guidelines, or "constitution," introduced in the early stages of model training rather than used only to filter answers after they are generated. The principles cover areas ranging from ethics to data privacy, with the goal of having the AI system make decisions and generate content in accordance with them.
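The critique-and-revise loop at the heart of Constitutional AI can be illustrated with a toy sketch. In real CAI the model itself critiques and rewrites its own drafts against the constitution during training; here the constitution rules, the critic, and the reviser are all trivial stand-ins, not Anthropic's actual implementation:

```python
# Toy sketch of CAI's critique-and-revise pattern: a draft answer is checked
# against each written principle and revised before it is shown.
# All rules and helpers below are illustrative stand-ins.

CONSTITUTION = [
    # (principle, predicate returning True when the draft complies)
    ("avoid sharing private data", lambda text: "SSN" not in text),
    ("be polite", lambda text: "stupid" not in text.lower()),
]

def critique(draft: str) -> list[str]:
    """Return the principles the draft violates."""
    return [rule for rule, check in CONSTITUTION if not check(draft)]

def revise(draft: str, violations: list[str]) -> str:
    """Stand-in reviser: redact flagged content (a real system would rewrite)."""
    return draft.replace("stupid", "[redacted]").replace("SSN", "[redacted]")

def constitutional_pass(draft: str) -> str:
    """Apply one critique-and-revise round if any principle is violated."""
    violations = critique(draft)
    return revise(draft, violations) if violations else draft
```

The key point the toy captures is ordering: the principles shape the output before it reaches the user, rather than acting as a post-hoc filter on finished answers.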

However, this commitment also brings greater challenges. Although Anthropic adheres to its unique corporate structure and mission, it still needs to deal with the dual challenges of external pressure and internal balance in the real world business environment.

In the past year, Anthropic has raised more than $7 billion, mostly from tech giants like Amazon and Google. These companies, along with Microsoft and Meta, are vying to dominate the field of AI. In the future, Anthropic will need more financial support. It must continue to launch better products and show huge profit prospects to meet investors' expectations in order to get the huge funds needed to build top models.

On the other hand, if Anthropic can maintain its current trend of being more robust than OpenAI, the company may be able to open up a new path - one in which AI can develop safely, without being affected by the harsh pressures of the market, and bring benefits to society as a whole.