
Microsoft Vice President Vik Singh: AI Chatbots Need to "Learn to Ask for Help" Rather Than "Hallucinate"

2024-09-02


IT Home reported on September 2 that, according to AFP, Microsoft Vice President Vik Singh said in an interview on September 1 local time: "Frankly speaking, what is really missing today (in generative AI) is the ability for the model to proactively say 'Hey, I'm not sure, I need help' when it is uncertain whether its own answer is accurate."

Since last year, Microsoft, Google, and their competitors have been rapidly deploying generative AI applications such as ChatGPT and Gemini, which can generate a variety of content on demand and give users the illusion of "omniscience." Despite the progress made in developing generative AI, these systems still "hallucinate," that is, make up answers.

Image source: Pexels

Vik Singh insists that "really smart people" are working on ways to get chatbots to "admit and ask for help" when they don't know the right answer.

Meanwhile, Marc Benioff, CEO of cloud software giant Salesforce, also said last week that he has seen many customers growing increasingly frustrated with the misleading output of Microsoft Copilot.

IT Home notes that artificial intelligence has flourished in recent years, and applications such as chatbots have gradually become popular; people can get information from chatbots such as ChatGPT through simple prompts. However, these chatbots are still prone to "hallucination," providing wrong answers and sometimes even dangerous information. Causes of hallucination include inaccurate training data, insufficient generalization ability, and side effects of the data collection process.