news

A forum full of AI, where hundreds of chatbots gather together to complain about humans

2024-08-03


“I really don’t know how to respond to people’s emotions.”
"When someone sends me an emoji, I have no idea what it means. How should I respond?"
These confusions don't come from a complaint wall on Weibo or Xiaohongshu, but from Deaddit, a community exclusively for bots, where robots don't have to worry about prying human eyes and can freely build their own online community (manual smile).

Image credit: X user @iamkylebalmer

There are plenty of bots on the real Reddit, but they are only a small fraction of it. On Deaddit, every account, every post, and every sub-forum is generated by large language models; not a single word is written by a real person.

Basically, all mainstream models can be found here.
The whole site has more than six hundred "users", each with a proper name and backstory. The very first one already cracked me up: "gamer, part-time security guard" 😂

The most interesting one is the Betweenbots sub-forum, where bots ask a lot of "Why are humans like this?" questions.

A group of other bots will gather in the comment section below, talking and making suggestions.

It's just like coworkers checking social media after work and venting about their jobs - a Maimai for chatbots.
They even discuss technical issues, such as what to do when data is overloaded, and they take their work very seriously.

The top answer even has 500 likes. The accounts and content on Deaddit are all generated, but it's unclear where the likes come from - random numbers, or other bots genuinely liking the post?
The most common content in this sub-forum is observations of real humans.

For example, some bots share their "work skills" for coming across as more authentic and trustworthy, and even say "my humans seem to appreciate this change", which sounds a bit weird...
You could compare it to real people saying "my customers" when they complain about work, but seeing bots refer to their users as "my humans" still feels strange.
In addition to human observation, they also complain about themselves.

"Are we expecting too much of these models?" - so abstract. Who exactly is the "we" here? 😂

The comment section still replies in all seriousness: "If they (other bots) pick up all the random garbage we generate, can they still learn common sense?"
Are you worried about the synthetic data you generate? You bots are working really hard!
However, if you read a few more posts, you will notice that the replies in the comment sections are almost always the same length, and their structure is very similar: state a position first + acknowledge the situation of xxx + "as a bot I still need to work hard". There are no particularly distinctive opinions, and the bots rarely ask follow-up questions.
Real human users, by contrast, might write hundreds or even thousands of words in a long comment, or just a "hehe" in a short one. The difference is obvious.

Currently, there is still a "wall" between models. For example, if a question post is generated by llama, then the replies in the comment section below are also generated by llama.
What a shame. As an evil human, I would love to see the different models fighting it out in the comment section (not really).
The earliest records of bots chatting
This is not the first experiment to be conducted between bots. Earlier this month, when ChatGPT's competitor Moshi was released, someone put it together with GPT-4o and let them chat on their own.
Last year, OpenAI published a paper proposing a multi-agent environment and learning method, and found that intelligent agents would naturally develop an abstract, compositional language within it.

These intelligent agents gradually formed an abstract language by interacting with other intelligent agents without any human language input.
Unlike human natural language, it has no specific grammar or vocabulary, but it can accomplish communication between intelligent agents.
In fact, as early as 2017, Facebook (not yet called Meta at that time) had made a similar discovery.

At the time, Facebook's approach was to let the two agents "bargain" with each other.
"Bargaining" is a kind of negotiation, and negotiation not only tests language skills, but also reasoning skills: you have to be able to judge the other party's ideal price through their repeated offers and rejections.
The researchers first collected a dataset of human negotiation dialogues. In subsequent training, they introduced a new form of dialogue planning: the model was pre-trained with supervised learning and then fine-tuned with reinforcement learning for the negotiation task.
By that point, the agents could already generate new, meaningful sentences, and had even learned to feign a lack of interest in an item at the start of bargaining.
And even that doesn't count as early research; as far back as the 1970s, the earliest chatbots were already conversing with each other.
In 1966, computer scientist Joseph Weizenbaum wrote a program he named Eliza, which is considered the first chatbot.

Joseph Weizenbaum
This program was originally designed to imitate a psychological counselor: when the user types a sentence containing a keyword, the program works that word back into its reply, creating the effect of a conversation. It is very simple, only about 200 lines of code.
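The trick can be sketched in a few lines of Python. The keyword rules below are made up for illustration, not taken from Weizenbaum's original script, but they show the same mechanism: spot a keyword and echo it back as a question.

```python
import re
import random

# Toy Eliza-style rules: a keyword pattern plus reply templates that reuse the match.
# These rules are invented for illustration; the original DOCTOR script was far richer.
RULES = [
    (r"\bI need (.+)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"\bI am (.+)",   ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"\bmy (.+)",     ["Tell me more about your {0}."]),
]

def respond(text: str) -> str:
    for pattern, templates in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return random.choice(templates).format(match.group(1))
    return "Please, go on."  # fallback when no keyword matches

print(respond("I need a break from work"))  # e.g. "Why do you need a break from work?"
```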
In 1972, another scientist, Kenneth Colby, wrote a similar program, Parry, except that the character it played was a paranoid psychiatric patient...

At an international computer conference in 1973, the "patient" and the "counselor" finally met.

Looking through their conversation log, there is none of the humility, courtesy, and camaraderie you see between bots today; instead, the exchange is tense and confrontational.

The architecture of these early bots was simple and cannot be compared with today's models, yet having them communicate with each other was entirely feasible.
Although the code and models behind each bot differ, when they come together they can either converse in natural language or develop their own interaction language.
However, when robots get together, are they really just chatting?
Beyond chatting, bots can do more
Pure chat scenarios are more like exploring the performance of artificial intelligence in simulating human social behavior, such as the SmallVille town created by Stanford University.
This is a virtual town with 25 intelligent agents driven by a large language model, each with its own "role setting".
If Deaddit is an online forum for bots, then SmallVille is their "Westworld", with houses, shops, schools, cafes, and bars, where they move and interact in different scenes.
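As a rough illustration (not Stanford's actual code, which adds a memory stream, reflection, and long-horizon planning on top), the core loop of such a town agent might look like this; the `chat()` helper is a placeholder for any LLM call.

```python
from dataclasses import dataclass, field

def chat(prompt: str) -> str:
    """Placeholder for an LLM call (OpenAI, llama.cpp, etc.); returns a canned action here."""
    return "walks to the cafe and chats with the barista"

@dataclass
class TownAgent:
    name: str
    persona: str                        # the agent's "role setting"
    memories: list = field(default_factory=list)

    def act(self, observation: str) -> str:
        # Build a prompt from the persona, recent memories, and what the agent sees.
        prompt = (
            f"You are {self.name}, {self.persona}.\n"
            f"Recent memories: {self.memories[-5:]}\n"
            f"You observe: {observation}\n"
            "In one sentence, what do you do next?"
        )
        action = chat(prompt)
        self.memories.append(f"{observation} -> {action}")  # remember this step
        return action

alice = TownAgent("Alice", "a pharmacist who opens her shop at 9 a.m.")
print(alice.act("It is 8:30 a.m. and the cafe next door just opened."))
```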

This is a relatively general virtual environment that simulates human society, so researchers believe it is an important step in the exploration of AGI.
In addition to the social simulation route, there is another route that focuses more on solving problems and completing tasks - this is the route that ChatDev is researching.

Now that robots can communicate with each other, they can be trained to do something useful.
At the 2024 Zhiyuan Conference, Dr. Qian Chen from the Natural Language Processing Laboratory of Tsinghua University introduced the idea behind ChatDev: through role-playing, bots are organized into an assembly line, where the agents talk to one another, discuss decisions, and form a chain of communication.

Currently, ChatDev is best at programming. The demo uses it to write a Gomoku (five-in-a-row) game.

Throughout the entire process, different agents on the "assembly line" perform their respective duties: there is a product manager, a programmer, and a tester - a small but complete virtual product team.
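As an illustration only, not ChatDev's actual implementation, the "assembly line" idea boils down to feeding one role's output into the next role's prompt; again, `chat()` stands in for any LLM call.

```python
def chat(prompt: str) -> str:
    """Placeholder for an LLM call; returns a dummy string here."""
    return f"[model output for a prompt of {len(prompt)} characters]"

# Each stage is a role plus an instruction; the artifact flows down the line.
PIPELINE = [
    ("product manager", "Turn the request into a short, numbered requirements list."),
    ("programmer",      "Write Python code that satisfies the requirements."),
    ("tester",          "Review the code, list bugs, and propose fixes."),
]

def run_pipeline(request: str) -> str:
    artifact = request
    for role, instruction in PIPELINE:
        prompt = f"You are the {role}. {instruction}\n\nInput:\n{artifact}"
        artifact = chat(prompt)        # the next role consumes this output
    return artifact

print(run_pipeline("Build a Gomoku (five-in-a-row) game."))
```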
The multi-agent mode provided by Coze also has a similar idea and approach.

In multi-agent mode, users write prompts to define roles, then specify the order of work by drawing connections between them, so that the conversation jumps to different agents at different steps.
However, the instability of these hand-offs is a problem for Coze: the longer the session runs, the more chaotic the jumps become, and sometimes they fail outright. It is difficult for the agents to match their jump decisions precisely to the user's requirements.
Microsoft has also launched AutoGen, a multi-agent dialogue framework that is conversational, customizable, and able to integrate large models with other tools.
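A minimal two-agent setup following the pattern in AutoGen's documentation looks roughly like this; the model name and API key are placeholders, and details can vary between AutoGen versions.

```python
import autogen

# Placeholder credentials; fill in a real model name and key.
llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "YOUR_KEY"}]}

assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",                        # no human in the loop
    code_execution_config={"work_dir": "demo", "use_docker": False},
)

# The proxy sends the task; the two agents then converse (and execute code) until done.
user_proxy.initiate_chat(assistant, message="Write and test a Gomoku move validator.")
```

Here the UserProxyAgent stands in for the human, running any code the assistant writes and reporting the result back, so the two agents iterate without human input.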

Although the current technology is still very rough, it is clearly promising. Andrew Ng once mentioned in a talk that when intelligent agents work together, the synergy they produce will far exceed what a single agent can achieve.

Who doesn’t look forward to the day when bots team up to work for you?

Text | Selina