2024-10-06
Image source: Stanford University
Are you confused about artificial general intelligence (AGI)? That's the thing OpenAI is obsessed with ultimately creating in a way that "benefits all of humanity." You might want to take the company seriously, because it just raised $6.6 billion to get closer to that goal.
But if you're still wondering what exactly AGI is, you're not alone.
At the Credo AI Responsible AI Leadership Summit on Thursday, Fei-Fei Li, the world-renowned researcher often called the "godmother of AI," said she doesn't know what AGI is either. At other points in the conversation, Li discussed her role in the birth of modern AI, how society should protect itself from advanced AI models, and why she thinks her new unicorn startup, World Labs, will change everything.
But when asked what she thought about the "AI singularity," Li was as confused as the rest of us.
"I come from academic AI and was educated in more rigorous and evidence-based methods, so I don't really know what all those words mean," Li said to a packed room in San Francisco, next to big windows overlooking the Golden Gate Bridge. "Frankly, I don't even know what AGI means. People say you know it when you see it; I guess I haven't seen it. The truth is, I don't spend much time thinking about these words, because I think there are so many more important things to do..."
If anyone knows what AGI is, it's probably Fei-Fei Li. In 2006 she created ImageNet, the world's first large-scale AI training and benchmarking dataset, which was critical in catalyzing the current AI boom. From 2017 to 2018 she served as chief scientist of AI/ML at Google Cloud. Today, Li leads the Stanford Institute for Human-Centered AI (HAI), and her startup, World Labs, is building "large world models." (If you ask me, that term is almost as confusing as AGI.)
OpenAI CEO Sam Altman took a stab at defining AGI in an interview with The New Yorker last year, describing it as the "equivalent of a median human that you could hire as a coworker."
Meanwhile, OpenAI's charter defines AGI as "highly autonomous systems that outperform humans at most economically valuable work."
Apparently, those definitions weren't good enough for a $157 billion company, so OpenAI created five levels to internally gauge its progress toward AGI. The first level is chatbots (like ChatGPT), then reasoners (apparently, OpenAI o1 sits at this level), then agents (supposedly coming next), then innovators (AI that can help invent things), and the last level is organizations (AI that can do the work of an entire organization).
Still confused? Me too, and so is Li. Besides, that last level sounds like far more than a median human coworker could do.
Li said earlier in the conversation that she has been curious about the notion of intelligence since she was a child. That led her to start researching AI long before it was profitable. In the early 2000s, Li said, she and a few others were quietly laying the groundwork for the field.
"In 2012, my ImageNet combined with AlexNet and GPUs, a moment many call the birth of modern AI. It was driven by three key ingredients: big data, neural networks, and modern GPU computing. Once that moment came, I think the entire field of AI, and our world, would never be the same again."
When asked about SB 1047, California's controversial AI bill, Li spoke carefully so as not to revive the controversy that Governor Newsom had just put to rest by vetoing the bill last week. (We recently spoke with the author of SB 1047, who was more willing to reopen the debate with Li.)
"Some of you may know that I expressed my concerns about this vetoed bill [SB 1047], but now I'm thinking deeply and looking forward with excitement," Li said. "I'm very flattered, or honored, that Governor Newsom has asked me to be part of the next steps after SB 1047."
California's governor recently invited Li and other AI experts to form a task force to help the state develop safeguards for deploying AI. Li said she will take an evidence-based approach in the role and will do her best to advocate for academic research and funding. But she also wants to make sure California doesn't punish technologists.
"We need to really look at the potential impact on people and our communities rather than putting the burden on the technology itself... If a car is misused, intentionally or unintentionally, and hurts a person, punishing the automotive engineers, say, at Ford or General Motors, makes no sense. Merely punishing the car engineers won't make cars safer. What we need to do is keep innovating toward safer measures while also building better regulatory frameworks, whether that's seat belts or speed limits, and the same goes for AI."
That's one of the better arguments I've heard against SB 1047, the bill that would have punished tech companies for dangerous AI models.
While Li is advising the state of California on AI regulation, she's also running her San Francisco startup, World Labs. It's Li's first time leading a startup, and she's one of the few women at the helm of a cutting-edge AI lab.
"We're still far from a very diverse AI ecosystem," Li said. "I do believe that diverse human intelligence will lead to diverse artificial intelligence, and will give us better technology."
In the coming years, she's excited to bring "spatial intelligence" closer to reality. Human language, on which today's large language models are built, may have taken millions of years to develop, Li said, while vision and perception likely took 540 million years. That means creating large world models is a more complicated task.
"this is not just about letting the computer see, but really letting the computer understand the entire three-dimensional world, which i call spatial intelligence," li said."we don't just see to name things...we really see to do things, navigate the world, interact with each other, and bridging the gap between seeing and doing requires spatial knowledge. as a technologist, i'm really excited about this ”
Compiled by: ChatGPT