
sam altman 10,000-word interview: leading the ai revolution in the center of the storm, a dialogue about turbulence and the future of technology

2024-09-07


text: web3 sky city · city lord

some time after sam altman returned to openai, a full video of a face-to-face interview with the openai ceo was released. the timing is striking: greg is on leave, ilya has left to start a new company, and openai continues to lose money amid news of new financing.

the interview was hosted by the well-known host trevor, who, somewhat surprisingly, showed a deeper understanding of ai technology than 95% of non-technical hosts.

in this interview, trevor and sam had a wide-ranging conversation about artificial intelligence, openai, and sam's personal career and journey. sam started with his return to openai, shared how he reflected on the relationship between himself and the company's mission amid the chaos and uncertainty, and talked about the future direction of agi.

sam not only elaborated on the huge potential of agi in cognition, productivity, and social impact, but also addressed ai safety, ethics, the trade-offs of capitalism, and the challenges of a technological revolution. at the same time, he spoke frankly about openai's corporate governance and how to balance technological progress with social responsibility, and emphasized the importance of transparency and democratized technology governance.

the journey back to the helm of openai: sam altman reflects on his brief departure and shares how he rediscovered his love for openai and its mission during this turbulent time. despite his brief dismissal, he emphasized that he learned a profound lesson from it and felt deeply about the importance of continuing to advance agi.

the future of agi and technological revolution: sam altman firmly believes that agi will bring about an unprecedented technological revolution, although social and economic changes may not be as rapid as expected. he believes that although technology will get closer and closer to agi, its actual impact on society will be gradual, allowing society to gradually adapt to the changes brought about by new technologies.

the balance between capitalism and ai: in the interview, sam altman discussed in detail how openai can operate under the capitalist system while maintaining its original intention of ensuring that the benefits of agi are widely distributed. although openai needs to compete in capitalism to obtain the resources needed for development, he emphasized that technological progress should not come at the expense of ethical and social responsibility.

democratic governance and social impact of ai: sam altman made an important point that ai technology governance should be more democratic. he believes that the public should be more involved in deciding how ai systems are used and what rules are set, and emphasized the importance of technical transparency and diverse governance.

concerns about ai safety and ethics: in the interview, sam altman admitted that although ai has great potential, it also has many potential security risks. he particularly pointed out the risks of ai in computer security and synthetic biology, and called on society to promote technological development in a responsible manner.

a broader vision for ai: sam altman is optimistic about the future. he believes that as agi matures, everyone will have more opportunities to pursue their personal interests, and technology will greatly improve global education and medical care. although technological innovation will bring about changes in work and social models, he believes that ultimately human spirit and creativity will drive us into a more prosperous future.

Trevor:

thank you for taking the time to be interviewed.

sam: thank you.

trevor: don’t you think it’s a crazy time right now?

sam: it feels like the craziest moment i've ever experienced.

Trevor:

yeah, you're at the center of it all. i wonder what that feels like. because i'm just an avid observer of this space and this world, you know? i feel like you're the one who's been affected by it all. i mean, sam altman was on the shortlist for time magazine's person of the year.

sam: glad i didn’t get that.

trevor: glad you didn't get it?

sam: yes, of course.

trevor: why?

sam: i’ve gotten more attention this year than i’ve ever gotten in my entire life.

trevor: okay, okay. so you don't like attention? you don't want attention?

Sam:

no, it's brutal. in some ways it's fun. in some ways it's useful. but in terms of personal life, the quality-of-life tradeoff? yeah, absolutely not. but this is the situation right now. this is what i signed up for.

trevor: yeah, now is the time for notoriety.

sam: yes.

trevor: do people recognize you on the street?

Sam: it's a really bad tradeoff. yeah, i feel like you'll always... i'm sure it's going to happen to you, but i can't stay anonymous anymore.

Trevor: yes, but people don't ask me questions about the future.

Sam: they don't ask if you're going to destroy the world.

trevor: yeah, there's a slight difference. people might want to take pictures with me. that's the extent of it. a lot of pictures.

trevor: well, congratulations. you're time magazine's ceo of the year.

sam: yes.

Trevor:

this is probably one of the weirdest moments, because i guess when time magazine made this decision a few weeks ago, you might not have been ceo of the year. i don't know if they could still give you this award. i guess it was for your previous job.

sam: yeah, i don’t know. i don’t know how that works.

trevor: how does it feel to be back as ceo?

Sam:

i'm still reprogramming reality, to be honest with you. in some ways, it feels great because i've learned a lot throughout this process about how much i love this company, the mission, and the people. and i've had a few moments throughout this process where i've experienced the full range of human emotions for a short period of time. but one very clear moment for me was when it all happened on a friday afternoon, friday noon. the next morning, saturday, several board members called me and asked if i'd be willing to discuss returning.

i had mixed feelings about it, but ultimately, i knew i wanted to come back.

i really love this place and what we're doing. i think it's important for the world and it's something that's close to my heart.

Trevor:

in the tech world, hiring and firing is something everyone has to get used to.

i know in the past, you were fired from y combinator. everyone has a story about that.

sam: i don’t want to talk about that.

trevor: tell me, tell me. these things, you do the research and then go from there.

Sam:

i decided about a year before (leaving yc) that i wanted to come to openai.

it was a complicated transition. but i had been working on openai and yc at the same time and was very sure that i wanted to come to openai. i have never regretted it.

Trevor: then you've never been fired before. it's a tough situation to be in as a human being.

sam: is this considered a dismissal?

trevor: what if you get fired and then immediately rehired?

Sam: i was going to say more than that… this is just brutal… i guess i shouldn't talk about this right now.

it's a very painful thing to happen. for me personally, as a human being, it feels very unfair and the way it was handled was really unfair.

Trevor:

i can imagine. a lot of people will talk about being fired.

this became a trend during the pandemic, especially with people talking about getting an email or a group video and then thousands of employees getting laid off.

you rarely expect this to happen to a ceo of a company. even more so, you don’t expect it to happen to a ceo who many are calling the “steve jobs” of this generation and the future.

that’s not what you would call yourself, by the way.

sam: of course not.

Trevor:

a lot of people say that about you, but i don't think it's fair to call you this generation's steve jobs. to me, you're this generation's prometheus. you really are.

it seems to me that you have stolen the fire from the gods. you are at the forefront of this transformation, the era we are living in now, where ai was once the stuff of science fiction and legend, and you are now the leading edge of something that could change civilization forever.

do you think there will be a total change in the future?

Sam:

i think so.

it's entirely possible that i'm wrong about what i'm about to say, but my sense is that we'll build something that pretty much everyone agrees is agi. defining agi is very hard, but we'll build a system that people will look at and say "ok, you did that." that system is agi, a system that is human-level or above.

Trevor:

before i explain in detail, what do you think is the biggest difference between people's perception of ai and artificial general intelligence?

Sam:

as we get closer to this realm, it becomes very important to define it, and there are differences. for some people, agi refers to a system that can do some important part of the current human work. of course, we will find new jobs and new things to do. but for others, agi refers to a system that can help discover new scientific knowledge. these are obviously very different milestones with very different impacts on the world.

i'm so tired of the term agi, even though i can't stop using it, and there's a reason i don't like it anymore. i think for most people it now just means very smart ai, but beyond that it's become very vague. i think that's mainly because we're getting closer.

what i'm trying to say is that we're going to create agi, whatever you want to call it. at least in the short and medium term, it's going to change the world far less than people think. in the short term, i think society has a lot of inertia, the economy has a lot of inertia, the way people live their lives has a lot of inertia. that's probably healthy and beneficial for us to manage this transition. we all do things a certain way, and we're used to doing it that way. society as a superorganism also does things a certain way, and it's also used to doing it that way.

look at what happened with gpt-4, and i think that's instructive. when we first launched it, people had a real moment of panic, saying, "wow, i didn't expect this to happen." but then they went on with their lives. it did change things, and people are using it. it's a better technology than it was before. of course, gpt-4 isn't great, and 5, 6, 7, whatever, will be much better. but version 4 of the chatgpt interface, i think was the moment when a lot of people went from not taking it seriously to taking it very seriously. and yet, life goes on.

Trevor:

do you think this is good for humanity and society? is this how life should go on? as one of the fathers of this product, one of the parents of this idea, do you want all of us to stop, take a moment, and evaluate where we are now?

Sam:

i think the resilience of the human race, both individually and collectively, is amazing. i'm really happy that we have this ability to absorb and adapt to new technologies and changes and just make them part of the world. it's really beautiful.

i think covid is a recent example where we saw this. the world adapted very quickly, and then very quickly it felt normal. another example that's not so serious but is instructive is all that ufo news a few years ago. a lot of my friends, people who were complete skeptics, would say maybe those really were ufos or aliens or something. and yet they went to work the next day and played with their kids.

trevor: yeah, i guess, what else can you do? if they’re flying, they’re flying.

sam: so do i wish we took more time to assess progress? we as a world are doing that, and i think that's great. i firmly believe that iterative deployment of these technologies is very important. we don't want to develop artificial general intelligence (agi) in a secret lab where no one knows it's coming and then release it into the world all at once and have people say, hum, here we are, so…

Trevor:

do you think we have to gradually adapt to it and grow with it as a technology?

Sam:

yes.

so i think it's great that there's a conversation going on right now where society, our leaders, and our institutions are actually using these technologies to get a feel for what it can do, what it can't do, where the risks are, where the benefits are. i think in some sense, probably the best thing we did for our mission was to iterate on our deployment strategy. we could have built it in secrecy and spent many more years building it and then rolled it out all at once, which would have been terrible.

Trevor:

today we walked into openai's building, which is kind of like a fortress and feels like a home from the future. i saw you had a post saying you came in as a guest.

i was like, oh my god, that's so weird. it's like going home, but it's not home, but it's home.

Sam:

it should have felt like a really weird moment, like putting on a guest badge here. but i felt like everyone was exhausted and full of adrenaline. there really wasn't that sense of a big moment that i was hoping for. there were definitely some moments from that day that were worth reflecting on and telling about. for example, one of my proudest moments was when i was really tired and distracted. we thought the board would bring me back as ceo that day, but just in case they didn't, i interviewed for l3, our lowest-level software engineer position, with one of our best people. and he gave me a "yes" to the job. that was a really proud moment.

the part that stands out in my memory of that day is that i still have that skill. although the badge isn't as impressive as i would like.

Trevor:

i'd love to know what you think you did right as ceo to gain the support we've seen publicly from openai employees.

i'm not going to ask you for details when this came to light, because i know you can't comment on the contents of an internal investigation. but i know you can talk generally about the overall feeling of the company and what's been going on recently. we rarely see situations unfold the way that openai has. you have this company and this idea that, for most people on the planet, doesn't exist one minute. the next minute, you release chatgpt, and this simple prompt, just a little chat box, changes the world. and you hit a million users in the fastest time ever, i think. i think it was five days. yeah, five days. and then it quickly hit 100 million people.

very quickly, i knew on an anecdotal level that for me, this thing went from being unknown to being known to everyone in the world. i was explaining it to people, trying to get them to understand it. i had to present it to them like poetry and something simple that they could understand. and then people started telling me about it. now it's become this ubiquitous concept that people are trying to figure out what it is and what it means.

but on the other hand, there's this company that's trying to somehow control and shape the future. and people support you. we saw the story, sam altman was out, no longer ceo. and then everything was in turmoil. i don't know if you've seen some of the rumors. they're just crazy. one of the craziest things i saw was said by someone, and this is both absurd and funny. they said, i heard from a reliable source that sam was fired for trying to have sex with an ai.

Sam:

i don't even know how to respond. when i saw that message, i was like... i think at this point, i should officially deny this happened.

Trevor:

i think that's unlikely to have happened, because people don't even understand how the two would combine. but it strikes me that the sensationalism of this event seems to have brought openai into another spotlight at a different moment.

one of the big things was the support you got from the team, someone stepped up and said, we're here to support sam no matter what. that doesn't happen often in companies. usually the ceo and the employees are out of touch to some extent.

but this feels like more than just one team.

Sam:

now, this isn't false modesty. there are a lot of places where i'm willing to take a lot of credit. in this case, it's not entirely my credit, except as a spokesperson. but i think one of the things we do well is we have a mission that people really believe in.

i think what happened was that people realized that the mission, the organization, and the team, everything we had worked so hard on and made so much progress on, with so much more still to do, was genuinely threatened. and that, i think, was what triggered the reaction. it really wasn't about me personally, even though hopefully people like me and think i'm doing a good job. it was about the shared loyalty that we all felt, and the sense of responsibility to get the mission done, and hopefully maximize our chances of achieving that goal.

Trevor:

at the highest level, what do you think the mission is? is it to achieve general artificial intelligence?

Sam:

make the benefits of agi as widely distributed as possible and successfully address all safety challenges.

Trevor:

that's an interesting second point. i'd love to talk to you later and talk more about the safety aspect of this. when you look at openai, it does seem like it was created with a very strong focus on safety. you brought together a group of people and you said, we want to start an organization, a company, a collective that is dedicated to creating the most ethical ai that will benefit society. you can see that reflected even in how the company's profits are distributed, how the investors get their profits, and so on.

but even openai changed at a certain point. do you think you can resist the forces of capitalism? i mean, there's so much money involved. do you think you can truly maintain a world where money doesn't define what you're doing and why you're doing it?

Sam:

that has to be a factor. like, if you just think about the cost of training these systems, we're going to have to find some way to compete in the realm of capitalism, for lack of a better term. but i don't think that's going to be our primary motivator. i love capitalism, by the way. i think it has huge flaws, but relative to any other system the world has tried, i think it's still the best thing we've come up with. but that doesn't mean we shouldn't try to do better. i think we're going to find ways to spend the enormous, record-breaking capital that we need to in order to continue to push the frontier of this technology. that's one of our early learnings. it's just that this thing is a lot more expensive than we thought.

we kind of knew we had this idea about scaling these systems, but we just didn't know how it was going to go.

Trevor:

you've always been a big fan of scaling, and i've read that about you. one of your mentors, and one of your current investment partners at injection momentum, said that whenever you would present a problem to sam, the first thing he would think of is, how do we solve this problem? how do we solve it? and the second thing he would then say is, how do we scale these solutions?

sam: who said that? i don't remember. i'm terrible with names.

trevor: interesting. but i know it was someone at your job who said that.

Sam:

interesting. but i haven't heard anyone say that about me.

but i think this has been an observation i've made across a lot of different companies and fields: scale often leads to amazing results. so, massively scaling these ai models leads to really amazing results. and massive scale makes them better not only in all the obvious ways, but also in some non-obvious ways. there are non-obvious benefits to companies scaling. there are non-obvious benefits to groups of companies like y combinator scaling. and i think that's not taken seriously enough.

in our case, knowing how to scale in the early stages was really important. if we had been smarter or had more courage to think about it, we would have made a much bigger move at the beginning. but it was really hard to say i'm going to build a bigger computer worth $10 billion. so we didn't do that. we were a little slower than we should have been to understand this, but eventually we understood it. now we see how big we need to scale.

i think capitalism is cool. i don't have anything against the system. no, that's not true. i have a lot of objections to the system, but i have to admit that i haven't found anything better yet.

trevor: did you ask chatgpt if it could design a new system?

sam: i have a different… maybe not designing a new system. yeah. but i ask a lot of questions about how ai and capitalism intersect and what that means.

one of the things is, we were right about the most important of our initial assumptions, which was that ai was going to happen. deep learning has the potential to ... by the way, a lot of people scoffed at this. it's totally true.

trevor: oh my god, we were mocked mercilessly.

Sam:

yes.

even some of the ideas about how to get there, we were right. but about a lot of the details, we were wrong, which is common in science and normal. we had a very different conception of building this system before language models started working. we had a very different conception of building agi at the time. we didn't grasp the idea that this would be an iterative tool that was constantly improving and that you could talk to it like you talk to a human. so we were very confused about what building agi would entail. we were thinking about the moment before agi and the moment after it. and then you need to hand it off to other systems and other governance.

now i think it can be iterative, and i'm really happy about that because i think it's much easier to navigate. i don't want to say it's just another tool, because it's different in a lot of ways, but in a sense, we made a new tool for humanity. we added something to the toolbox. people are going to do all kinds of incredible things with it, but it's still humans planning the future, not some agi in the sky. you can do things you couldn't do before. i can do things i couldn't do before. we're able to do a lot more. in that sense, i can imagine a world where we achieve our mission by creating really great tools that greatly expand human capabilities. i'm very excited about that.

for example, i love that we offer free, ad-free chatgpt, because i personally do think that ads have always been a problem with the internet. we just put this tool out there. that's the downside of capitalism, yeah, yeah. one of them. i personally think there are bigger downsides. but we put this tool out there and people can use it for free. we don't want to turn them into the product. we don't need them to use it more and more. i think this shows an interesting path forward where we can do more things in this regard.

Trevor:

so, let's do that. there are a lot of things that i hope to touch on in this conversation that we have, and obviously we can't answer all of them. but there are some ideas, some points, some areas that i hope we can explore. i guess the first and most timely question is, what happens to the company going forward? where do you think it's going? one of the things that i find particularly interesting is the makeup of the new board, especially for openai, where there were women on the board before and now there aren't, and there weren't financial incentives for board members before and now there are. i wonder if you're concerned that the protections that you helped put in place are now gone, and that you have a board that's no longer focused on protecting people or defining a safer future, but instead is focused on making money and getting as big and powerful as possible?

Sam:

i believe that our current and previous governance structure and board are not working in important ways. therefore, i fully support finding ways to improve it, and i will support the board and their work to achieve that goal.

clearly, boards need to grow and diversify, and i think that's going to happen pretty quickly. we need to hear the voices of those who will advocate for those who haven't traditionally been advocated for, and think very carefully about not only ai safety, but also about learning the lessons of the past about how to build these very complex systems in a way that interacts with all aspects of society. doing as much good as possible while mitigating the bad effects and sharing the good, all of that needs to be reflected.

so i'm very excited about having a second chance to get all of these things right that we clearly got wrong before. things like diversity, making sure we represent all of the major stakeholder groups that need to be represented, figuring out how to make this more democratic, continuing to push for some decisions from governments to govern this technology. i know it's not perfect, but i think it's better than anything else we can think of at the moment. and engaging more with our user groups and having them help set the boundaries of how this technology can be used is really important.

one of the main directions going forward will be to expand the board and the governance structure. again, i know we have a small board right now, but i think they're very committed to all of the things that you just talked about.

and then there's another big category of questions. if you had asked me a week ago, i would have said stabilizing the company was my top priority. but internally at least, i feel pretty good. we haven't lost a customer, we haven't lost an employee. we're continuing to grow, which is pretty amazing. we continue to launch new products. our key partnerships feel strengthened, not hindered. everything is going according to plan in that regard. and i think the research and product plans for the first half of next year feel better and more focused than ever.

Trevor:

i find myself always thinking about you as a person when this whole boardroom storm was happening. whenever there's a storm, i'm always interested in what's going on in the center of the storm. i'm curious, where were you when this all broke out? like, what were you doing at the time? what was going on in your world on a personal level?

Sam:

i laughed because people commented on me that i seemed to be in the eye of a hurricane, with everything spinning around me, and i remained very calm. this time, things did not turn out as expected.

it was an experience that was like being in the eye of a storm, but not calm. i was in las vegas watching the formula 1 race.

trevor: are you a fan of f1?

sam: i am.

trevor: which team do you support? any favorites?

Sam:

honestly, i think verstappen is so good that it's hard to say which team to support, but i think that's probably the answer everyone would say.

Trevor:

actually, it depends on when they started watching the sport. i was a fan of schumacher because that's when i started watching the racing. at first it was nigel mansell, then ayrton senna, you know what i mean?

verstappen is very precise and i can see why he is so popular.

Sam:

even though it’s a little tiresome to watch him win so many times, he’s really quite amazing.

that night, i arrived late on a thursday. someone had forgotten to weld a manhole cover, so on the first lap it blew off, kind of blowing out one of the ferrari engines, and that stopped practice. i didn't get to see the race, didn't see any racing all weekend.

i took a phone call in my hotel room, had no idea what it was going to be, and was fired by the board. it felt like a dream, and i felt so confused. everything was so confusing, and it didn't feel real. it was obviously sad and painful, but the dominant emotion at the time was confusion, and it felt like i was in a fog and smoke. i had no idea what was going on.

the way this happened was unprecedented and crazy. for the next half hour, my phone was flooded with so many messages that imessage crashed on my phone. these messages were from everyone and the phone became completely unusable because the notifications kept popping up. imessage got to a point where it stopped for a while, messages were delayed, and then marked everything as read so i couldn't even tell who i had talked to, it was chaos.

i was talking to the team here, trying to figure out what was going on. microsoft was on the phone, and other parties were getting in touch. the whole situation was really unsettling, and it didn't feel real. i calmed down a little bit and started thinking about the future direction. i really wanted to keep working on agi (artificial general intelligence). if i couldn't do it here at first, i'd still keep working on it, so i was thinking about the best way to go about it.

greg quit, and a few other people quit, and i started getting a lot of messages from people who wanted to work with me. at that point, going back to openai was not even on my radar. i could imagine what the future would be like, but i didn't quite understand how big of an industry event it was because i didn't really read the news. i just felt like i was getting a lot of messages.

Trevor:

because you are actually in the storm.

Sam:

yeah. i was just trying to support openai, figure out what to do next, and try to understand what was going on. then i flew back to california, met with some people, and started to feel very focused on the future. but i also wanted openai to do the best. i barely slept the entire night, with a lot of conversations going on. it was a crazy weekend. i'm sure i still haven't fully come to terms with it all, still a little shocked, still trying to pick up the pieces. i'm sure i'll have more feelings about it when i have time to sit down and process it.

Trevor:

do you feel like you had to jump back into everything all at once? because, as you said, you were on this mission. you could see the drive in your eyes, and now the world had reached a point of no return. you were heading in a certain direction, and all of a sudden, it looked like you couldn't make it happen in the space that you were in. but as you said, microsoft stepped in, and satya nadella said, come work with us, and we'll rebuild this team.

if there's one thing people agree on about sam altman, it's that he's tenacious. he's uncompromising and believes that if you have a goal and believe in something, you shouldn't let life stand in your way. and you seem to be moving in that direction. you haven't said anything negative publicly about openai or discredited it in any way. but this seems to put a lot of pressure on you.

Sam:

absolutely. i don't think it's something i can't recover from, but i think it's impossible to go through it and not be affected by it, that would be very strange.

Trevor:

do you feel like you're losing a part of yourself?

Sam:

yes. we started openai in late 2015. the first day of work actually started in 2016.

i was doing this for a while at yc, but i've been working on it full-time since the beginning of 2019. agi and my family are the two things i care about most, so losing one of them was, in a sense... i should say, i'm committed to agi and care most about the mission. but of course, i also care about this organization, these people, our users, our shareholders, and everything we've built here. so, it's really, really painful.

the only life experience i can compare it to is when my dad died, which was certainly much worse. it was a very sudden thing, that confusion and sense of loss. in that case, i felt like i had a little time to feel it all. but there was a lot more to come, and it was so unexpected, and i had to pick up the pieces of his life for a while. it wasn't until about a week after that that i really had time to breathe and think, oh my god, i can't believe this happened. so yeah, that was definitely much worse. but here, there are similar echoes.

Trevor:

as you look to the future of the company and your role in it, how do you strike a balance right now between pushing openai forward and continuing to move in a direction that you believe in? do you still have an emergency brake? do you have some kind of system within the company where, if you feel like we're creating something that could have a bad impact on society, we can step in and stop that behavior? do you have that capability? and is that system built in?

Sam:

yes, absolutely. we have had systems like this in the past. like, we've created some systems and chose not to deploy them. i'm sure we'll do that again in the future. or we've created a system and said, hey, we need to take longer to make sure it's safe before we can deploy it. like in the case of gpt-4, after we trained it, it took almost eight months of alignment and safety testing before we were ready to release it. i remember talking to some of the team members and saying, this is not a board decision, this is people here doing their work and working on our mission. so this will continue.

what i'm really proud of about this team is their ability to operate well in chaos, crisis, uncertainty, and stress. i give them an a+, they're doing a really good job. as we get closer to more robust, very robust systems, i think the culture that we've built and the kind of ability that the team has, like staying calm in a crisis and making good, thoughtful decisions, is probably the most important element. i think the team here has really demonstrated that they can do that, and that's really important.

some people say that one of the things we learned about openai is that sam could run the company without any title. i think that's completely wrong and not true at all. the correct lesson is that the company could run without me at all. it was a cultural thing, and the team was ready, and the culture was ready. i'm really proud of that and really happy to be back to continue working on it.

watching the team manage this has helped me sleep easier at night as we face the challenges ahead. there will be bigger challenges than this one, but i hope and believe this is the hardest one because we were unprepared. now that we realize the magnitude of this, we are not just another company, far from it.

Trevor:

let's talk about this. chatgpt, openai, whatever it ends up being called. you have dall·e, you have whisper, you have all these amazing products. if you have any ideas for brand names or brand architecture, i'd be very interested. i feel like chatgpt has done it, and now it's everywhere. yes, it's a terrible name, but it's probably become too common to change. do you think it's okay to change it at this point? can we just simplify it to gpt or just chat?

Sam:

i don't know, maybe not.

Trevor:

sometimes i feel like a product, a name, or an idea goes beyond the marketer’s dreams, and then people just buy into it.

Sam:

no marketer would have chosen chatgpt as the name for this, but we’re probably stuck with it, and that’s probably okay.

Trevor:

now, i was fascinated by the multimodal nature of it. i remember the first time i saw dall·e come out, when it was just an idea, and seeing how it worked, seeing that this program could create an image from nothing. i tried to explain it to people, and they would ask, where does that image come from? and i was like, there is no photo, there is no source image. they thought it was impossible.

and what it "sees", it's hard to explain that. it's hard for me to understand sometimes. but when we look at the world that we live in right now, we talk about these models in numbers, gpt-3.5, gpt-4, gpt-5, 6, 7, whatever it is. i like to talk more about the actual use cases of the product without the technical jargon.

between versions, from gpt-3 and gpt-3.5 to gpt-4, we see that the so-called reasoning ability has reached a higher level, and even demonstrated creativity to some extent.

when i look at the products in this world that you're building right now, like the general purpose large language models and now the specialized large language models, i'm wondering, do you think the use cases are going to change dramatically? do you think right now it's probably just like a little chatbot that everyone likes, and that's where the products are going to end up? or do you think the world is going to move to a world where there's a specialized gpt for everything? like, trevor will have a gpt that does things for him, or a company will have a gpt that does things for them. how do you think about that?

obviously, predicting the future is difficult, but from where we are now, where do you think we're going?

Sam:

i think it's going to be a mix of both. it's really hard to predict the future, and i could be wrong, but i'm willing to try. i think it's going to be a mix of the two things you just mentioned. first, the underlying model will become so good that it's hard for me to say with any confidence what it can't do. it's going to take a long time, but i think we're heading in that direction.

Trevor:

how long is "long time" in your time frame? for example, how do you measure it?

Sam:

not in the next few years. it's going to get better every year for the next few years. but like i was going to say, i'm pretty sure there's still a lot of things this model can't do by 2026.

Trevor:

but doesn't this model constantly surprise you? when i talk to engineers working in this space, or anyone involved in or related to ai, the single word people say the most is "surprise." people keep saying, we thought we were teaching it this domain, and all of a sudden it starts speaking a new language; or we thought we were teaching it that, and all of a sudden it can build bridges, things like that.

Sam:

so, that was the subjective experience for most people here, probably sometime between 2019 and 2022. but now, i think we've learned not to be surprised. now we trust exponential growth, most people do. so, gpt-5, whatever we call it, is going to be great at many things. we're going to be surprised at some specific things it can do, and we're going to be surprised at some specific things it can't do. no one is going to be surprised at how great it is. at this point, i think we've really deeply internalized that.

the second thing you mentioned was these customized gpts. more importantly, you also mentioned personal gpts, like trevor gpt. i think this is going to be a big trend over the next few years. these models will understand you, access your personal data, answer questions the way you want them to, and work very effectively in your environment. i think a lot of people are going to want this.

Trevor:

it almost makes me wonder if the new labor market becomes that your gpt is almost your resume, that the gpt is almost more valuable than you in a weird way. you know what i mean? it's like a combination of everything you think about and everything you've ever thought about, and the way you synthesize your thoughts, plus your own personal gpt becomes... i'm just thinking about a crazy future where you go to a job and they ask you, what's your gpt? and you say, well, this is mine.

Sam:

we always think of these as agents, these personalized agents: i'm going to let this thing do things for me. but what's interesting about what you're describing is that it's not that, it's how other people interact with you. right. it's like your impression, your avatar, your echo, whatever. i can see it getting to that point.

Trevor:

because if we are not a combination of everything we are, then what are we? it's a strange thought, but one i can believe in. i've always been fascinated by where it can go and what it can do.

you know why? when chatgpt blew up in the first few weeks, i'll never forget how quickly people realized that the robot revolution, i know it's not robots, but for people, they were like, the robot revolution, machines didn't take over the jobs they thought they would. you know, people thought it would take over truck drivers and so on. and what we found is that no, those jobs are actually harder to take over. in fact, the jobs that are being taken over by what's called "brain work" are those jobs, like your white collar jobs. you're a lawyer? well, when you have chatgpt 5, 6, 7, whatever you're called, there may not be that many lawyers. you're an engineer, you're like, where is your place... the human body is really an amazing thing.

do you think there will be any advancements that will replace the human body or are we still stuck in the spiritual realm?

Sam:

no, i think we will eventually have robots that work, like humanoid robots will eventually work. we actually worked on that in the early days of openai; we had a robotics project.

trevor: i didn't know that.

sam: you know, we built a robotic hand that could manipulate a rubik's cube. that requires a lot of dexterity. i think there are a lot of different insights that go into this, but one of them is that it's much easier to make progress in the world of bits than in the world of atoms.

for example, this robot is hard to make, but for the wrong reasons. the reason it's hard is not that it helps us advance some tough research problem, but that the robot keeps breaking, and it's not very accurate, and the simulator is terrible. whereas a language model, you can do all these things in a virtual environment, and you can progress much faster. so, focusing on the cognitive aspects helps us solve more productive problems faster.

but also, in a very important way, i think solving cognitive tasks is the more important problem. like, if you build a robot, it can't necessarily figure out how to help you build a system that does cognitive tasks. but if you build a system that does cognitive tasks, it can help you figure out how to build a better robot.

yes, that makes sense. so i think cognition is the core thing that we want to focus on. i think it's the right decision. but i hope we can get back to robotics.

Trevor:

have you ever thought about when you would consider artificial general intelligence to be achieved? like, how do we know? for you personally, when would you feel like the mission is accomplished?

Sam:

everybody's talking about artificial general intelligence, but how do we know what that is? it goes back to that earlier point: everybody has a different definition. i can tell you personally when i'll get really excited. i get really excited when we have a system that can help discover new physics. but that feels way beyond general intelligence. you know what i mean? it's beyond the definition that i think most people have.

Trevor:

maybe it's because i sometimes wonder, how do we define general intelligence? do we define it as genius in some field? or is a child general intelligence? a child certainly is, but you have to keep "programming" it: they're born with no words, they can't walk, they don't know anything, and you keep programming this agi to get it to where it needs to go.

so if you, let's say, you get to a point where you have a four-year-old version of that.

Sam:

if we have a system that can just figure things out, that can operate autonomously, with some help from a parent, and understand the world like a four-year-old, yes, we can call it agi. if we can really crack that general ability, where new problems can be figured out on their own as they arise... four-year-olds don't always understand things perfectly, but a system like that would obviously count.

Trevor:

if we don't fundamentally understand the nature of thought and thinking, can we get to that point? it seems like we can. do you think we can get to that point? i think so. or could we get to a point where... well, i'm sure you know this story.

one of my favorite stories in the ai space was a project at microsoft. they had an ai that was trying to learn to recognize male and female faces, and it was pretty accurate to a certain extent, about 99.9% accurate. however, it was consistently terrible at black people and especially black women, constantly mistaking them for men. the researchers kept trying to improve it, and they kept thinking, what's going on?

there was a time when i told this story like this, which may be a little inaccurate, but i think it's very interesting. at some point they "sent" the ai to africa, i think it was kenya. they sent the ai to africa and told the kenyan research team, can you use it for a while and try to solve this problem? when this ai was on the other side of the world, running with their data set and african faces, it became more and more accurate at recognizing black women in particular.

but ultimately, they found that the ai never knew the difference between male and female faces. all it did was draw correlations with makeup. so, the ai determined that people with red lips, rosy cheeks, and maybe blue on their eyelids were female, and the others were male. and the researchers said, yeah, that's what it was doing. it just found what's called a cheat code, you know. it's like, i understand what you think the criteria for men and women are, and then it distinguished according to those criteria.

and then they realized that because the black women in the data generally wore less makeup, the system got them wrong. but we didn't know that the system didn't know. so i was thinking, how would we know whether an agi knows or doesn't know something? or would we find out that it had just cheated its way to the answer in some way? how would we know that?

when this uncertainty is intertwined with so many aspects of our lives, what is the cost of our ignorance? do you understand?
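the failure trevor describes is what machine-learning researchers call shortcut learning. a minimal numpy sketch, using entirely made-up data rather than anything from the microsoft project, shows how a classifier can ace its training set by latching onto a spuriously correlated feature and then collapse on a population where that correlation disappears:

```python
# toy illustration of "shortcut learning": the model rewards a spurious
# feature ("makeup") that tracks the label in training but not elsewhere.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, spurious_corr):
    """feature 0 is a weak real signal; feature 1 ("makeup") matches the
    label with probability `spurious_corr`, otherwise it is random."""
    y = rng.integers(0, 2, n)
    real = y + rng.normal(0, 2.0, n)                 # weak, noisy true signal
    makeup = np.where(rng.random(n) < spurious_corr, y, rng.integers(0, 2, n))
    return np.column_stack([real, makeup.astype(float)]), y

def train_logreg(X, y, lr=0.1, steps=2000):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))           # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)             # gradient of log loss
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

X_tr, y_tr = make_data(5000, spurious_corr=0.95)     # shortcut works here
X_te, y_te = make_data(5000, spurious_corr=0.0)      # shortcut breaks here
w, b = train_logreg(X_tr, y_tr)
print("train acc:", accuracy(w, b, X_tr, y_tr))      # high: shortcut rewarded
print("test acc :", accuracy(w, b, X_te, y_te))      # drops: task never learned
print("weights  :", w)                               # spurious weight dominates
```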

Sam:

i believe we're going to make progress in understanding what these systems are doing. there's been some progress in interpretability, the field that studies the inside of these models, and there are different ways and different levels of doing it. you can try to understand what each artificial neuron in the system is doing, or you can look at the steps as the system thinks step by step and see which step you don't agree with. we're going to find out more.

we want the ability to understand what these systems are doing, and hopefully to have them explain to us why they came to certain conclusions, accurately and robustly. i think we'll make progress in understanding how these systems do what they do, and also how our own brains do what they do. so i think we'll eventually understand it. i'm curious, and i'm sure you are too.

i think we'll make more progress in implementing methods that we know work to make these systems better and better and help us solve the explainability challenge. and i think as these systems get smarter, they'll be fooled less often. so a more complex system might not be making these cosmetic distinctions, it might be learning at a deeper level. i think we're seeing some evidence of that happening.
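sam mentions two styles of interpretability work: inspecting what individual neurons do, and checking a model's step-by-step reasoning. a minimal sketch of the first style, using a toy pytorch network and a forward hook rather than any real interpretability tooling, might look like this:

```python
# record what individual hidden units do on a given input
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 16), nn.ReLU(),   # toy "hidden layer" of 16 units
    nn.Linear(16, 4),
)

activations = {}

def record(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()   # stash the layer's output
    return hook

# attach a hook so every forward pass stores the post-relu activations
model[1].register_forward_hook(record("hidden_relu"))

x = torch.randn(1, 8)
_ = model(x)

# which hidden units fired, and how strongly, for this input
for i, a in enumerate(activations["hidden_relu"][0].tolist()):
    if a > 0:
        print(f"unit {i:2d} activation {a:.3f}")
```

real interpretability work then asks what concept, if any, each unit responds to across many inputs; the hook above is just the raw measurement step.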

Trevor:

you reminded me of two things when you said not so easily fooled. one is the safety aspect, and the other is the accuracy aspect. the media has been talking about this a lot. you remember, they said, the ai is hallucinating, it thinks it's going to kill me. i find it particularly interesting that people like to use the word "thinking" to describe large language models, because i always feel that journalists should try to understand what it is doing before reporting on it. when they use the word "thinking" so much, they're actually giving the public a misunderstanding. though i do have some sympathy for that.

Sam:

we need to use familiar terms, we need to anthropomorphize. but i agree with you that this is misleading.

Trevor:

because you're saying it's thinking, and then people are thinking, is it thinking about killing me? well, it's not thinking, it's just using this amazing transformer to figure out which words are most likely to fit in relationship to each other.
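a toy sketch of the mechanism trevor is gesturing at: the model assigns a score (a logit) to every word in its vocabulary, and a softmax turns those scores into probabilities for the next word. the vocabulary and scores below are made up for illustration:

```python
import numpy as np

# hypothetical scores for candidate next words after "the cat sat on the ..."
vocab = ["mat", "moon", "chair", "dog", "sat"]
logits = np.array([3.0, -1.0, 1.5, 0.2, -2.0])

# softmax: subtract the max for numerical stability, exponentiate, normalize
probs = np.exp(logits - logits.max())
probs /= probs.sum()

for word, p in sorted(zip(vocab, probs), key=lambda t: -t[1]):
    print(f"{word:6s} {p:.3f}")

# the next word is sampled from this distribution; nothing here "thinks"
next_word = np.random.default_rng(0).choice(vocab, p=probs)
print("sampled:", next_word)
```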

sam: what do you think you're doing?

Trevor:

what am i doing? that's an interesting question. maybe that's all i'm doing too, piecing ideas together.

we talk about "hallucinations." let me start with the first half. do you think we can achieve the goal of ai not having hallucinations?

Sam:

i think the better question is, can we make sure that ai doesn't hallucinate when we don't want it to, at a frequency similar to humans. and to that, i think the answer is yes. but actually, one of the reasons people like these systems is that they can do novel things. if it only... yeah, hallucinations are both a feature and a bug.

Trevor:

that's what i was asking. isn't it part of being an intelligent being to hallucinate?

Sam:

absolutely.

if you think about, for example, the way an ai researcher works, they look at a lot of data, come up with some ideas, read a lot of material, and then start thinking, maybe this, or that. maybe i should try this experiment. now i have this data. so, that doesn't work. now i'll come up with this new idea.

this human ability to come up with new hypotheses, new explanations, that never existed before, most of which are wrong, but then have a process and a feedback loop to figure out which ones might make sense and ultimately do make sense, is one of the key elements of human progress.

Trevor:

how do we prevent ai from being garbage in, garbage out? right now, ai works on information that was created by humans in some way. it's learning from material that we think is learnable. with all the new stuff coming out now, like openai, anthropic, lambda, etc., it feels like we may be heading into a world where there's more ai-generated information than human-generated information, and that information may not be getting the scrutiny it deserves.

so ai gets better as it learns from itself in a way that may not have been vetted? do you understand what i mean?

Sam:

totally understand.

how do we solve this problem? it goes back to knowing how to behave in different contexts. for example, you want some hallucination in your creative process, but you don't want it when you're trying to report accurate facts about an event. and now, these systems can generate new images that are beautiful in some important sense; they're hallucinations in some ways, but they're good hallucinations. but when you want the system to just give you the facts, it's gotten a lot better, but there's still a long way to go in that regard.

that's ok. i think it's a good thing if these systems are being trained on data that they generate themselves, as long as there's a process for these systems to learn which data is good and which is bad. again, "hallucination" isn't an adequate description of this process because if it generates new scientific ideas, those ideas might be considered "hallucinations" at first. that's valuable.

but it has to learn what's good and what's bad. and there needs to be enough human oversight to make sure that we all still collectively control the direction these things go. but within those constraints, i think it's very good that these systems will be trained on generated data, and that will be the case in the future.
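a minimal sketch of the filtering loop sam is describing: a model trains on its own generations only after some process has judged which ones are good. the generator and verifier below are toy stand-ins, not a real model or any openai pipeline:

```python
# keep self-generated data for training only if a checker approves it
import random

random.seed(0)

def toy_generate(prompt):
    """stand-in for a model: answers arithmetic, sometimes wrongly."""
    a, b = map(int, prompt.split("+"))
    answer = a + b
    if random.random() < 0.3:                 # 30% "hallucinated" answers
        answer += random.choice([-2, -1, 1, 2])
    return answer

def verifier(prompt, answer):
    """decides which data is 'good': here, exact arithmetic."""
    a, b = map(int, prompt.split("+"))
    return answer == a + b

training_set = []
for _ in range(1000):
    prompt = f"{random.randint(0, 99)}+{random.randint(0, 99)}"
    answer = toy_generate(prompt)
    if verifier(prompt, answer):              # keep only vetted generations
        training_set.append((prompt, answer))

print(f"kept {len(training_set)}/1000 self-generated examples for training")
```

the hard open problem, which the toy checker sidesteps, is building verifiers for domains where "good" is not mechanically checkable.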

and then you reminded me of another thing that i've been thinking about, and i'm not quite sure how to calculate it, but i wonder if, like, at some point a system like gpt-5 or 6 will generate more words than the entire history of humanity combined. that feels like a major milestone.

actually, it may not be important for me to say this now.

Trevor:

how would you even measure that?

Sam:

for example, the model generates more words in a year than all humans generate. there are eight billion people in the world, or whatever it is. you can calculate how many words are spoken per year on average.

but the question is, what does this get us? and that's why i was surprised when i said it. for some reason, this seemed like a major milestone to me, but i couldn't figure out why.
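a back-of-envelope version of the calculation sam is gesturing at. every number below is an assumption (the 16,000 words per person per day is a commonly cited rough estimate of daily speech, and the model-side figures are invented), not an openai statistic:

```python
# human side: total words spoken per year, roughly
people = 8e9                        # world population
words_per_person_per_day = 16_000   # rough estimate of daily speech
human_words_per_year = people * words_per_person_per_day * 365
print(f"humans: ~{human_words_per_year:.1e} words/year")   # ~4.7e16

# model side: hypothetical serving numbers
users = 100e6                       # weekly users, per the interview
words_per_user_per_day = 1_000      # assumed average generation per user
model_words_per_year = users * words_per_user_per_day * 365
print(f"model : ~{model_words_per_year:.1e} words/year")   # ~3.7e13

# under these assumptions, humanity still out-talks the model by ~1,000x
print(f"ratio : {human_words_per_year / model_words_per_year:,.0f}x")
```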

Trevor:

maybe humans are like monkeys that are always typing. so, i think this is worth exploring.

Sam:

but i don't use the word "thinking" because i think you're right, that's not the right word. maybe we can say the number of words generated by ai versus the number of words generated by all humans.

Trevor:

i'm almost done. so i want to ask a few questions, and if i don't ask these questions, people are going to kill me.

one of the main questions, and this is my personal question: we always talk about ai learning from data, about models being fed a data set. that's what we're talking about. that's why you need those billion-dollar supercomputers, so that the computers can learn.

how do we teach ai to think better than the humans who feed it data, which is clearly flawed? for example, how does ai learn beyond the limited data we give it? when it comes to race, economic status, and perspectives, because we are limited, how do we teach it to not be limited by the limited data we give it?

Sam:

we don't know yet, but that's one of the biggest research directions ahead of us, how to go beyond human data. i hope if we can do this q&a again in a year, i'll be able to tell you the answer, but i don't know yet. it's really important that we don't know.

but i do believe that this will be a significant force against injustice around the world. i think these systems will not have the deep flaws that all of humanity shares. they will be able to be made far less racist, sexist, and biased. and they will be a force for economic justice in the world.

i think if you provide a good ai tutor or a good ai medical advisor, that will help the poorest half of the world more than just the richest half, even though it helps lift everyone up.

so i don't have a definitive answer to the scientific question that you raise, but i do believe at this point that these systems certainly can, and we do have to do some hard social work to make them a reality, but they have the potential to greatly advance justice in the world.

Trevor:

and maybe that leads perfectly to the second question, which is, what are you doing? what is openai doing? are you thinking about doing anything to mitigate the wealth disparity that this new technology is creating again?

every new technology that comes along is amazing for society as a whole, but you can’t deny that it creates a moment where if you have it, you have it all, and if you don’t, you’re out.

Sam:

i think we'll learn more over time, but right now, i think one of the really important things that we do is provide a truly free service. that means no ad support, just a free service that's used by over a hundred million people a week. it's hard to say it reaches everybody, because in some countries we're still blocked, but we're working hard to make a really high-quality, easy-to-use service accessible to anybody everywhere we can. free ai, and that's important to every single one of us.

i think there are other things we want to do with this technology. for example, if we can use ai to help cure diseases and make those cures available to the world, that's obviously beneficial, but getting this tool into the hands of as many people as possible and letting them use it to build the future is really important. and i think we can push that even further.

trevor: two more questions.

sam: can i add one more?

trevor: the time is all yours.

Sam:

another thing that i think is important is who gets to decide what these systems say and don't say, what they do and don't do, and who sets those limits. right now, it's basically openai employees making the decisions, and no one would say that's like a fair representation of the world. so figuring out not just how to democratize this technology, but figuring out how to democratize the governance of this technology is a big challenge for us over the next year.

Trevor:

this ties right into what i was about to ask you about, the security aspect of all of this. we talked about this at the beginning of this conversation. when designing something that can change the world, you have to acknowledge the fact that it could change the world in the worst possible way or direction. with every leap in technology, one's ability to cause greater damage increases.

is it possible to make ai completely safe? and then the second part is, what is your nightmare scenario? what would make you push a red button and shut down openai and all ai? when you go, you know this, if this is, if this can happen, we have to shut it all down. what are you afraid of?

the first part is, can you keep it safe? the second part is, what is your scariest scenario?

Sam:

i think, first of all, the insight that you mention, that every decade or so it takes fewer and fewer people to inflict catastrophic harm, is a profound fact that we have to confront as a society.

second, on how to make a system safe, i don't think it's an either/or thing. we say airplanes are safe, but airplanes do still crash occasionally, although crashes are extremely rare. we say drugs are safe, but the fda still sometimes approves drugs that end up harming people. so safety isn't determined unilaterally; society decides that something is acceptably safe after weighing the risks and rewards. i think we can do this, but it doesn't mean that things can never go wrong.

i think there are big problems with ai that we have to guard against. i think society actually has a pretty good, if messy, process for collectively deciding what the safety thresholds should be. it's a complex negotiation involving many stakeholders, and we as a society are getting better at it over time. but we have to guard against the kind of catastrophic risks that you mention.

nuclear is an example: nuclear weapons had a very large impact on the world, and the world has handled them, imperfectly but remarkably well, over the last nearly 80 years. i think it will be similar with ai. one example people talk about a lot is using ai to design and create synthetic pathogens, which could cause huge problems. another is computer security, ai that is capable of hacking beyond any human ability.

and then there's the newer scenario where, if the model is powerful enough, it could help devise ways to exfiltrate its own weights from the server, make a lot of copies of itself, and modify its own behavior. that's more of a science-fiction scenario, but i think as a world we need to look at it head on. maybe not that specific case, but the idea that there's a catastrophic, even potentially existential, risk; just because we can't define it precisely doesn't mean we can ignore it.

so we do a lot of work here to try to predict and measure what these problems might be, when they might emerge, and how we can detect them early. i think the people who say you shouldn't talk about this, that you should only talk about misinformation and bias and the problems of the day, are wrong. we have to talk about both, and get safety right at every step of the way.

Trevor:

ok, that's just as scary as i thought it would be. so, by the way, are you really thinking about running for governor? is that true?

Sam:

no, no. i thought about it very briefly back in 2016 or 2017, somewhere around then.

it was a vague thought for a few weeks, mostly for fun.

Trevor:

ok

i guess my last question is: what's next? what's your dream? if sam altman could wave his magic wand and make ai what you want it to be, what would it bring to the future? what are the advantages, and what are the benefits for everyone, on every level? it seems like a nice, positive note to end on.

Sam:

thank you for asking this question. i think one should always end on a positive note.

yeah. i think we're entering the most abundant period in human history. the two main drivers are artificial intelligence and energy, though there will be other things as well. with those two, the ability to come up with any idea and the ability to realize those ideas at scale, the limits of what people can have will be set by what they can imagine and by what we negotiate together as a society. i think it's going to be really awesome.

we were just talking about what it would mean if every student had access to a better educational experience than the wealthiest, best-resourced students have today. what would it mean if we all had better health care than the best care the wealthiest people can get today? what would it mean if people were generally freed up to do the work they find most personally fulfilling, even if it meant entering entirely new categories of work? what would it mean if everyone could have a job they loved, with the resources of a large company or a large team behind them?

so maybe instead of openai needing 800 people, everybody gets 800 ai systems that can do all these things, and people can just create and build whatever they imagine, which i think is really extraordinary. that's the world we're heading towards. it's going to take a lot of work beyond the technology to make it happen; society is going to have to change somewhat. but the fact that we're heading into this era of abundance, i'm really happy about that.

Trevor:

i'll stop here. i'm a huge fan of the potential benefits of openai. a huge, huge fan. i work in education in south africa. my dream has always been that every child has access to the best possible mentors. do you know what i mean? literally, no child is left behind because they can learn at their own pace.

Sam:

by the way, the stories about kids using chatgpt to learn things, i get emails like that every day. it's awesome.

Trevor:

it is. it is truly amazing. especially as learning becomes more multimodal, like with the addition of video, it becomes even more amazing.

i dream about, as you said, healthcare. i dream about all of this. one existential question that i think we don't discuss enough is, once ai effectively takes over all of these things, how do we redefine the purpose of humanity? because, like it or not, throughout history, you'll find that our purpose has often defined our progress.

our purpose used to come from religion. for better or worse, religion is really good at getting people to think and act in a direction beyond themselves. people say, "this is my purpose." i wake up to serve god, whichever god you believe in. i wake up to please god. i think that gives people a sense of moving toward a purpose, and also a sense of community and belonging.

as we move into a world where ai takes over those roles, i hope we don't forget how many people tie their entire identity to what they do rather than who they are. once those roles are replaced, when you no longer have clerks, secretaries, switchboard operators, assistants, factory workers, we've seen historically what happens: movements pop up out of nowhere, and there's a massive backlash. have you thought about this? is there a way to head it off before it happens?

Sam:

how would you describe our purpose right now?

Trevor:

i think at the moment our purpose is to survive, and that survival is tied to some form of income, because we're told that's how it works: you have to make money to survive. but we've seen periods where that changes. there's a great example in france, where they used to have (and i think they still have a version of it) an artists' fund, where they say: we'll pay you as an artist, and you just have to create and make france beautiful. that's really beautiful.

i know you're a fan of ubi. we shouldn't wrap up before you talk about it.

Sam:

i don't think people's survival should be tied to how willing or able they are to work. i think that's like a waste of human potential. yes, i totally agree with you.

Trevor:

wait, let me ask you this before we finish. is this why you think universal basic income is so important? you don't waste time and money on something you don't believe in, and you've spent a lot of time and money on universal basic income. the last i saw, you were involved in a project of about $40 million, or was it actually $60 million?

Sam:

i think universal basic income is certainly not a comprehensive solution to the challenges ahead of us, but i do think eliminating poverty is undoubtedly a good thing, and a better redistribution of resources will lead to a better society for everyone. but i don't think handing out money is the key part; providing tools and a voice in governance is, i think, more important. people want to be architects of the future, and i do think there is a continuing thread of meaning, something like a mission, for humanity.

individually, it's about surviving and thriving, for sure. but collectively, we do have an imperative to make the future better. we often get sidetracked, but the story of humanity is: make the future better. that's technology, that's governance, that's how we treat each other; that's exploring the stars, understanding the universe, whatever it is. i'm very confident that this desire is deep within us. no matter what tools we get, that fundamental drive, that human mission to thrive as a species and as individuals, is not going to go away.

so i'm optimistic about the world two generations from now. but you bring up a really important point: people who are already in their careers, and are actually quite satisfied with them, don't want change. and change is coming. one thing we saw in previous technological revolutions is that within about two generations, society and people could adapt to almost any degree of change in work. but not within ten years, and certainly not within five.

we're going to face this, and as we said before, i think it's going to be slower than people think, but still faster than anything society has had to absorb in the past. i do feel a little scared about what that means and how we adapt to it. but we'll have to face it, and i'm confident we're going to figure it out.

i also believe that if we give our children and grandchildren better tools than we had, they're going to do things that will amaze us. i expect they'll look back and feel a little sorry for us, for how limited our lives were by comparison. i hope the future is going to be really incredible. this human spirit, this desire to explore and express yourself and design a better and better world, or even worlds beyond it, i think that's wonderful, and i'm really happy about it.

in a sense, we also shouldn't take this one technology too seriously. in star wars, one of the villains, i think it was darth vader, says: "don't be too proud of this technological terror you've constructed." the point is that this technological terror is insignificant next to the power of the force.

in an important sense, i feel similarly about ai. we shouldn't be too impressed by it. the human spirit will carry us through, and it will prove far greater than any technological revolution.

Trevor:

it’s a beautiful message of hope.

i hope you're right, because i love this technology.

one thing i want to say to you: as time's ceo of the year, and i think you'll keep holding that kind of position given the huge impact openai and ai itself are going to have on us, i implore you to keep remembering how you felt when you were fired, because you're creating a technology that's going to put a lot of people in a similar situation. i can see that you have that humanity, and i hope you keep it in mind as you create.

Sam:

you know what i did on saturday morning, really early, when i couldn't sleep? i wrote down what i could learn from all of it, so that i can do better when other people go through a similar situation because of me, and blame me the way i was blaming the board.

trevor: and did you get anything out of it?

Sam:

there were a lot of useful individual lessons. but the empathy i gained from the whole experience, and the recalibration of my values, albeit at a terrible cost, were definitely a blessing in disguise. i'm glad i had the experience, in that sense.

Trevor:

well, sam, thanks for your time. thank you very much, really enjoyed it all. i hope we can chat in a year's time about all the new developments.

Sam:

we should definitely do that, it would be fun.

Trevor:

i will, bro. that's awesome. thanks.