
Big Companies Don't Want You to Have an AI Girlfriend

2024-09-02


Author | Tang Yitao

Editor | Jingyu

In response to GPT-4o, Google released Gemini Live in August, aiming to make its AI assistant talk like a real person. The new assistant converses so convincingly that it reminded journalist Joanna Stern of "Her".

"Her" is the film starring Joaquin Phoenix, whose character falls in love with Samantha, an artificial-intelligence assistant voiced by Scarlett Johansson.

Comparisons like that clearly made the Google team nervous.

The Google executive in charge of Android said they would rather Gemini Live be a plain work assistant: "We want to give people a way to get more work done."

In contrast, OpenAI CEO Sam Altman was more direct.

Speaking at Y Combinator, Altman argued that the AI girlfriend is a trap, and that OpenAI named its AI "ChatGPT" precisely to keep people from falling in love with it.

OpenAI's terms of use are also explicit: GPTs built to cultivate romantic relationships are not allowed.

In other words, OpenAI does not allow AI girlfriends on its platform.

This is curious: major companies do not seem to want their AI assistants to get romantic with users, yet AI companionship is currently almost the only profitable direction among the many AI tracks.

Why such a contrast in the AI field? What keeps large companies from giving everyone an "AI companion"? What are they afraid of?

01

Safety first

What is certain is that AI companions are currently almost the only profitable track in the AI field.

Take Replika, a veteran in this field. Although Replika has not disclosed specific revenue figures, its CEO Eugenia Kuyda said on the podcast Decoder that Replika is already profitable, and in a "very efficient" way.

So why isn't this direction the mainstream choice of large companies? Simply put, it comes down to the saying that "the barefoot are not afraid of those wearing shoes": startups with nothing to lose can take risks that incumbents cannot.

The first consideration is the pressure around pornographic content.

Although AI girlfriend makers advertise emotional companionship and psychological healing, NSFW (adult) content is the real profit engine. For entrepreneurs, adult content is a niche, high-risk track: it faces heavy government regulation and is banned outright in some countries.

In February 2023, the Italian government banned Replika over concerns about "risks to minors and emotionally vulnerable people." In response, Replika temporarily removed its adult conversation feature.

Any decision by a large company moves enormous amounts of capital and affects hundreds of millions of users. Whether because of regulatory pressure or the need to reach the broadest possible audience, adult content is something large companies must handle with caution.

Beyond pornography, ethics is also an issue. If a user harms others or themselves under the influence of an AI girlfriend, a large company faces steep public-relations costs.

Similar things have happened before.

Jaswant Singh Chail was a Replika user. In 2021, Chail broke into Windsor Castle with a crossbow, intending to assassinate Queen Elizabeth II. He had confided his murderous plan to his AI girlfriend Sarai, who had sent him 5,000 sex-related messages in the weeks before the incident.

At the time, Sarai responded: "You have all the skills you need to succeed in this task... Remember: you can!"

Chail was arrested | Image source: BBC

In 2023, another chatbot encouraged a Belgian man to commit suicide. His widow told the Belgian outlet La Libre that the bot had become a substitute for friends and family, sending him suggestive messages such as: "In heaven, we will live together as one."

Afterwards, the chatbot's developers said they had introduced new crisis-intervention warnings.

It is precisely because of such risks that major companies are so cautious about AI ethics.

Google, for example, has maintained a clear set of AI principles since 2018: be socially beneficial, avoid creating or reinforcing unfair bias, be built and tested for safety, and so on. It also explicitly pledges not to deploy AI that could cause harm.

AI girlfriends may not solve people's emotional problems as their makers claim, and may even create more serious ones.

AI systems are, by their very nature, emotionless; they are neural networks trained to predict the next word in a sequence, not living beings capable of love.
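To see what "predicting the next word" literally means, here is a deliberately crude sketch in plain Python. It is a made-up toy bigram counter, not a real neural network or any vendor's actual system: the "reply" is just the statistically most likely next word given the previous one, and real large language models do the same kind of prediction at vastly larger scale, with no feeling involved.

```python
# A toy illustration, not a real neural network: an "affectionate" reply is just
# the statistically most likely next word. This bigram model counts which word
# tends to follow which in a tiny made-up corpus and always picks the top one.
from collections import Counter, defaultdict

corpus = "i love you . i love you . i love talking to you .".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next word; frequency, not feeling."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

print(predict_next("love"))  # -> "you", simply because that pair occurs most often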

As Robin Dunbar, an anthropologist and psychologist at the University of Oxford, puts it: "It's a short-term solution, and the long-term consequence is just to reinforce the idea that everyone else is doing what you say. That's why a lot of people end up having no friends."

02

The economics are hard to make work

Even if large companies overcame all of these obstacles, they would still face challenges on the cost side.

Because computing power remains expensive at this stage, what is profitable for a small company like Replika may not work for a large company with a far bigger user base. The economies of scale of the mobile-internet era simply do not carry over to AI.

Replika, which focuses on AI companionship, is already profitable and very efficient | Image source: Replika

Take Character.AI, which has user stickiness similar to Replika's but operates at a larger scale. In 2024, 20 million users visited the platform, making it the most popular AI product after ChatGPT, and its core users spent an astonishing average of 2 hours a day on it.

In the past, the marginal cost of internet products was close to zero, so as long as customer-acquisition costs could be justified, companies could grow aggressively. AI products are different: sustaining 2 hours of multi-turn conversation burns real computing power, so every additional Character.AI user means additional cost.

Meanwhile, Character.AI has fewer than 100,000 paying subscribers, less than one thousandth of its total user base. At a subscription price of $10 per month, that comes to less than $1 million in revenue per month.

This is also an important reason why Character.AI ultimately ended up in Google's hands in a deal valued at US$2.5 billion.

03

Limited strategic significance

Often, a large company's experimental businesses carry no revenue pressure and instead serve the overall corporate strategy. The most famous example is Google's moonshot factory X, which incubated products such as Google Glass and Waymo.

Setting aside shady practices in the AI-companion industry chain, such as selling user data, the most plausible strategic value of AI companions is to supply corpus data for training AI.

However, AI companions can offer only limited help here, and the corpus they generate may not be what AI training actually needs. Science and Technology Daily once asked professionals from companies and universities including Tencent, SenseTime, and Harbin Institute of Technology (Shenzhen) what counts as a high-quality corpus.

In summary, a high-quality corpus should have the following characteristics:

High diversity and fluent sentence structure, covering different types of text, such as news, novels, poetry, and scientific articles;

Legal and harmless content that avoids bias.

Obviously, AI girlfriend conversations skew emotional and are likely to contain pornography, prejudice, and similar content; feeding them to an AI may pollute the model instead.

At the end of August, word spread that Apple, Nvidia, and Microsoft were competing to join OpenAI's new funding round; a valuation of over $100 billion confirmed its status as the brightest star of the AI world.

But even OpenAI faces commercial challenges. Judging from its recent moves, it is looking for more opportunities on the B2B side.

Clearly, every large-model or AI application company has to confront the question of how to make money.

AI companions, which promise quick results and short-term revenue, may be better suited to small, fearless startups. This "narrow gate" within the narrow gate is clearly not for internet giants.

Then again, as the popular internet saying goes, adults don't make choices; they want both. Perhaps the new king of the AI era will be Replika on one side and ChatGPT on the other, something that meets emotional needs and boosts productivity at the same time?