OpenAI CSO Jason Kwon
Power, Policy and the Race to Scale

Jason Kwon is the Chief Strategy Officer at OpenAI.
In this episode of World of DaaS, Jason and Auren discuss:
Compute bottlenecks and power infrastructure challenges
User expectations vs AI capabilities
Geopolitical competition in AI development
Bringing top AI talent to the United States
1. The Role of Compute and Power in AI
Jason Kwon, Chief Strategy Officer at OpenAI, explained that the most important factor for AI depends on the level of analysis. At the industry level, compute and power are key bottlenecks, shaping both research and inference capabilities. At the organizational level, however, success is more about making the right research bets and structuring organizations to innovate effectively rather than just having the most resources. Regulatory and energy infrastructure challenges also complicate scaling efforts across countries.
2. Advances in GPT-5 and Accessibility
The conversation turned to OpenAI’s recent release of GPT-5. Kwon highlighted two main improvements: accessibility for everyday users and deeper capabilities for advanced use cases. Many people had only been using GPT-4, but GPT-5 introduces reasoning models and automated routing that balance cost, speed, and quality. This opens up new opportunities in fields like healthcare, legal, and technical research. While hallucinations remain a challenge, better grounding in external information and improved understanding of user intent are steadily reducing errors.
3. Competition, Data, and Policy
The discussion also addressed global AI competition, particularly between the U.S. and China. Kwon noted that training data access could create advantages for some labs, especially where copyright rules differ. He suggested the U.S. should focus on attracting top AI talent, adopting technology more rapidly, and ensuring strong infrastructure. Issues around data ownership, customer trust, and integration with legacy systems are likely to shape the competitive landscape in the coming years.
4. Human Expectations and the Future of Work
Kwon observed that user expectations for AI grow as quickly as the technology itself, leading to frustration when models fall short. Still, he sees this as an opportunity, since using AI effectively is becoming a skill in its own right. Looking ahead, he envisions AI tools that can learn by observing users and gradually take over tasks, creating new forms of productivity. He closed with a personal philosophy: rather than simply following passions, people should cultivate agency and consciously choose what to pursue, shaping their own motivation and identity.
“Talent is one of the key ingredients in terms of continuing innovation, and we should be figuring out a way to have as much AI talent come to the United States as possible.”
“Scarcity sometimes creates the innovation.”
“Love agency and figure out how to have more of it in your life.”

The full transcript of the podcast can be found below:
Auren Hoffman (00:00.782) Hello, fellow data nerds. My guest today is Jason Kwon. Jason is the chief strategy officer at OpenAI. Jason, welcome to World of DaaS. I'm really excited. Now, if you think about what's super important for AI, there's this debate: it could be compute, it could be data, it could be algorithms, it could be regulatory things. Where do you fall on that? What's the most important thing for AI right now?
Jason Kwon (00:08.987) Great to be here, Auren.
Jason Kwon (00:27.812) Yeah, good question. So I think this kind of depends on what level you're asking this question at. Are you asking at the industry or country level, or are you asking at the level of an individual organization? So at the country or industry level, I think it's probably compute. And that's because of just the breadth and diversity of experiments that you can run on research, which is what kind of powers everything; you need compute for that. And the more compute that you have, the more experiments... Sorry?
Auren Hoffman (00:53.55) And there's a bottleneck right now on that. And there's just a bottleneck on compute.
Jason Kwon (00:59.566) Yeah, I think to some degree there is, but the market is trying to respond to that demand signal. And even the scale-up of inference that you want in order to benefit from the advanced capabilities in the models, because the reasoning paradigm has definitely shown that there's a scaling law that also applies to the amount of test-time compute that you apply. The compute is going to matter both for advanced capabilities and also for making the most of the capabilities. And this is at an industry level.
Auren Hoffman (01:28.11) And compute, would you couple that with things like power and stuff, or is it like one big bundle?
Jason Kwon (01:32.462) Yes. Yeah, yeah. So I would think of the compute equation as the capital and power that you need to build the physical infrastructure to make use of the AI infrastructure, the models and software. But I think the other part of this is there's another level at which you could ask this question, which is at the level of an organization. There, I think the answer is a bit different. And this is where you can get a lot of debates is because people might be thinking about this.
in terms of different levels. So which level of generality are you talking about? So at the organization level, I don't think it's necessarily compute. And that's because if you just reduce compute to capital, it's just money, and then you buy the physical infra. We don't necessarily assume in lots of other industries, or even in technology industries, that if you're just the most capitalized company or organization, you're automatically going to win.
It's how you make use of that resource and apply it to various bets. And so, yeah. Right.
Auren Hoffman (02:33.842) And often it's the opposite, you know, someone's the most capitalized at something, and someone else comes up with something very innovative to respond to that.
Jason Kwon (02:42.02) And so it's just the scarcity that sometimes creates the innovation. And so bet selection, I think in terms of research bets, that matters a lot at the organization level. And then having some kind of organizational capacity or structure that enables you to make those bets well and sustain them, and have the right taste and selection criteria for them, that is really important. And I think you can think about that when you zoom out and look at various labs.
What types of bets are they making? How are they set up competitively in relation to each other? And what are the cultural aspects tied to those things? I think that's an interesting analysis. And that's why at an industry level it's different, because that analysis just benefits from the aggregation of all the individual organizations.
Auren Hoffman (03:31.758) Where do you think something else might become the bottleneck? Do you think a country's power supply might become the bottleneck, or some of these other things, to make all this happen?
Jason Kwon (03:42.574) So I think that there are certainly regulatory bottlenecks. You go from country to country, it's hard to get permitting to build new things. And depending on which country you're in, there is certainly power that's a bottleneck, which is related a little bit to the regulatory bottleneck too, because if you have more friction on that process, it's harder to build up the capacity faster. And certain countries are more pro-renewables, pro-nuclear, and other countries are
a little bit more conservative about these things, and that also impacts the capacity on the power side. And so one interesting anecdote here is when we did a bunch of surveys around the energy question, sort of around when we were thinking about Stargate, it's very clear most of the world was building out energy capacity without taking into account the growth of AI, which you would assume is actually a pretty rational thing to do a couple of years ago because people weren't planning on this.
And so the amount of base load capacity people were planning on building over the next decade or so, it was just keyed to, yeah, population growth, what they expected.
Auren Hoffman (04:46.984) Keyed to their population growth and a few other things, and maybe, you know, electric cars coming online or something, or whatever.
Jason Kwon (04:52.92) Right. Yeah, it's just the base rate of whatever economic growth they were expecting in relation to demographics and, you know, whatever they were already seeing in their economy. And so it was already not incorporating what power demands AI was going to bring. And then you take into account that was probably 18 months ago. Certainly a lot of people have updated since then, but that still takes time to work through the system, to get to adjusting the planning cycles around,
you know, new power generation capacity building. And so a good data point that kind of drives this home is the amount of total capacity in the US is about 1,200 gigawatts or something like that. And there are some projections that say we'll need 50 gigawatts of additional capacity by 2030. That's around 5% of our total nationwide capacity. And that's significant, right? And so
then you take into account how many new gigawatts of power capacity actually get built. It's a lot less than 50 per year. I don't know what the latest estimates are, but just getting a gigawatt sited and built, in our experience, that's a process. And it's going to take probably 18 months or so to complete. Yeah.
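A quick back-of-the-envelope version of that arithmetic in Python. The 1,200 GW and 50 GW figures are Kwon's; the build rate is a made-up placeholder, since he only says it's well under 50 GW a year:

    # Kwon's figures from the conversation
    total_us_capacity_gw = 1200        # rough total US generating capacity
    extra_needed_by_2030_gw = 50       # projected additional capacity for AI

    share = extra_needed_by_2030_gw / total_us_capacity_gw
    print(f"Additional AI demand as a share of the grid: {share:.1%}")  # ~4.2%, i.e. "around 5%"

    # Hypothetical build rate, well under 50 GW/year as he suggests
    assumed_build_rate_gw_per_year = 10
    years = extra_needed_by_2030_gw / assumed_build_rate_gw_per_year
    print(f"Years to build 50 GW at that rate: {years:.0f}")  # 5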
Auren Hoffman (06:11.688) Can I ask you something I don't totally understand about power? I assume if you're training AI models, you could almost guarantee you always want this power 24/7, whereas for most power uses, even a factory, you're going up and down or some other type of thing. Whereas I assume with a training model, you could just say, and I don't know if that's true, I want this all the time and I will pay for it
all the time, and I can guarantee you this. Or is it really quite variable? Like there's intense training where you need it, and then it kind of levels off or something.
Jason Kwon (06:53.592) Yeah, so I think there's probably a difference between training and inference. And so you probably want something closer to constant uptime for inference just because you want to be able to serve. Training, you may be able to segment more because people have to prepare the training run. There's compute allocation and things like that. But I think the other factor that pushes on this is people are always looking for more compute capacity.
Auren Hoffman (07:13.837) Yeah.
Jason Kwon (07:23.268) And our experience has always been that whenever we have decided not to acquire some compute capacity, we later on regretted it. I'm sure we're not the only lab. And that just kind of indicates that even if you could do the load balancing that you're talking about, still, at the end of the day, you want more compute.
Auren Hoffman (07:47.104) What if we get to a point where compute's not the bottleneck and power is the bottleneck, or, you know, we get way more efficient chips, who knows what will happen over the next X number of years. Is there also a point where you could say, okay, we're just going to do this inference when there's extra capacity on the grid? There are points right now, even in California, where the energy price goes negative,
because there's just not enough people wanting to take from the load; even today that's happening. So could you have some sort of start-and-stop type of thing, or is it just too hard? Like, we've got to run this thing for a week at a time, and it's hard to plan around that.
Jason Kwon (08:34.434) Yeah, so I think it would probably be difficult on the training side. Something that would potentially be possible is if you are running a very large inference job, that's going to take a lot of compute. And I think this is going to be increasingly a type of workload that we see over some time span, because of that inference scaling law that we were just talking about. So I think most people's typical day-to-day experience with
AI like ChatGPT or Claude or Gemini is you ask it a question and it responds instantly or within a couple of minutes. But if it is true that you're going to get better answers and better results on high-value problems by running compute, inference compute, you could imagine a world where you're running it for days and people are willing to pay that much. So then, yeah.
Auren Hoffman (09:28.61) Yeah. And then you could also price it in a way of, hey, if you want to wait a little bit longer, we could optimize for when the power is better or when these chips are more available, whatever it may be. Yeah.
Jason Kwon (09:38.756) Potentially. Yeah. Yeah. So I think that there you might see some interesting economic patterns emerge. Don't know yet though.
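As a toy illustration of the economics being speculated about here, the sketch below (entirely hypothetical, not anything OpenAI has described building) schedules a deferrable inference job into the cheapest contiguous window of hourly grid prices, which would naturally land on the negative-price hours Auren mentions:

    # Find the cheapest contiguous window of hours for a deferrable job,
    # given a day of hourly grid prices in $/MWh (negative = oversupply).
    def cheapest_window(prices, hours_needed):
        best_start, best_cost = 0, float("inf")
        for start in range(len(prices) - hours_needed + 1):
            cost = sum(prices[start:start + hours_needed])
            if cost < best_cost:
                best_start, best_cost = start, cost
        return best_start, best_cost

    # Illustrative price curve with a midday negative-price dip
    hourly_prices = [42, 38, 35, 30, 28, 25, 31, 45, 60, 55, 40, 22,
                     -5, -8, 3, 18, 35, 70, 85, 75, 60, 50, 46, 44]
    start, cost = cheapest_window(hourly_prices, hours_needed=4)
    print(f"Run the job starting at hour {start}; total price weight {cost}")  # hour 12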
Auren Hoffman (09:47.148) Okay, interesting. And if you're only running it for hours rather than days, there might be times of the day where it might actually be cheaper to run that thing, or something like that. That's cool. Now, right now a lot of these things are built on the Nvidia roadmap, and there are some folks in the space, like Google, that have more custom hardware. How do you see that as a
Jason Kwon (09:56.206) Yeah. Yeah.
Auren Hoffman (10:15.767) There's kind of this codependency with OpenAI and some of the other labs. How do you see that playing out?
Jason Kwon (10:21.38) Yeah, so I think the simple equation is just how much capacity, or sorry, not capacity, how much capability do you get per watt at some level of reliability? And whatever inputs into that makes sense from a GPU perspective, whether it's Nvidia or custom, that's always what you're kind of working with.
And then on specifically on the Nvidia question, it's just they've been a market leader for a really long time and it's a well-deserved position. And so I think it's not just us, but a lot of other labs are buying as much of it as they can because the quality is so good in relation to that equation. I think the second part of this is, yeah.
Auren Hoffman (11:13.446) And the key equation is per watt, kind of?
Jason Kwon (11:16.002) Yeah, like how much GPU capacity or capability are you getting?
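In rough pseudocode, the figure of merit Kwon is describing would look something like the following; this is a paraphrase for illustration, not an official OpenAI metric:

    # Paraphrase of Kwon's "capability per watt at some level of reliability".
    # Both inputs are whatever you choose to measure; nothing here is standard.
    def capability_per_watt(capability_at_target_reliability: float, watts: float) -> float:
        return capability_at_target_reliability / watts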
Auren Hoffman (11:21.742) And how does the cost break down, let's say, over the life of a chip? How much of it is power versus how much of it is the actual chip?
Jason Kwon (11:32.574) In terms of, yeah, so there I'm probably not the best expert to comment on the exact composition. I'd have to go talk to somebody and be like, hey, how does this exactly break down between energy and... Yeah. And we'll talk about this at some point, probably later in the conversation too. But the nice thing about ChatGPT now is you can plug it into the org. And it's kind of like walking down the hall and talking to one of the experts too.
Auren Hoffman (11:44.674) You could probably ask ChatGPT the question.
Auren Hoffman (12:01.43) Yeah. Yeah. Now with ChatGPT, you recently released GPT-5, so we're kind of in that realm. What are you most excited about?
Jason Kwon (12:16.036) Yeah, so I think there are two aspects to this. One has to do with accessibility, and the other has to do with really deep use cases. So on the accessibility question, one of the things that's, I think, not obvious to a lot of people is that most of the user base of ChatGPT has not been on state-of-the-art models for a while; they've been using 4o or variants of it. And they haven't really been interacting with o1 or o3.
And so GPT-5 is really, I think, the first time.
Auren Hoffman (12:45.934) And is that because it just takes too long or the cost is too high?
Jason Kwon (12:50.506) The model picker, I think, and the default generally being 4o, and most people not going in there. Yeah. Yeah.
Auren Hoffman (12:54.828) Yeah, you just use the default. And also sometimes you're like, do I really want to wait longer for this answer? It probably won't be good enough, or it was hard to know. Yeah.
Jason Kwon (13:04.856) Yeah. Yeah. And so especially if you are a casual user of ChatGPT, it's sort of like, what is this model picker? What is this naming convention? Which we've been teased about a lot.
Auren Hoffman (13:12.312) Yeah.
Auren Hoffman (13:16.135) I use it like 24/7 and I still don't understand it. It's really hard for me to understand.
Jason Kwon (13:22.114) Yeah, and so I think that the fact that you have this interface now that sort of helps kind of automatically route and then some of that routing goes to these reasoning models, that is I think the first time a lot of people have exposure to this. And I think that accessibility of really this latest paradigm that was uncovered last year is really making its way to most people this year. And I think that's one of the things. And so it'll just be interesting to see what
Auren Hoffman (13:37.24) Yeah.
Jason Kwon (13:51.126) emerges out of 700 million people interacting with that capability. The other thing is now for the power users, or at the enterprise level, people who are using this for hard intellectual labor or tasks or research. Maybe to take a step back: a lot of common usages of ChatGPT have to do with coaching. Like, help me with this, walk me through this particular task, et cetera. And it's good at that.
But now you can have ChatGPT go and do some very deep research, as the name of the product would suggest, but also get into some pretty extended dialogues or conversations, whether it's health questions, legal questions, finance questions, technical questions, it can go pretty deep. And then you can also have it work on a problem and walk away, and then come back and it'll have a bunch of answers for you.
Auren Hoffman (14:43.843) Yeah.
Jason Kwon (14:50.922) And another example of this is on our health task benchmarks, it scored the highest out of all our models. So it would not shock me if it does better on all these other professional tasks as well, ones we don't necessarily have evals for. So I think what that also reveals is you have this breadth of accessibility, but then if you go down into specific domains, you can, I think, now go even deeper than before, because of this combination of: it's a better base model,
it's got reasoning embedded into it, and it's got automatic model routing to make the cost and reasoning-quality trade-off optimal for you. Yeah.
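The routing idea Kwon describes is roughly: serve a cheap, fast model by default and escalate to a slower reasoning model when a query looks like it needs it. Below is a deliberately crude sketch of that pattern; the real GPT-5 router is learned and undisclosed, and the model names and keyword heuristic here are invented purely for illustration:

    # Crude illustration of cost/quality routing; NOT GPT-5's actual router.
    FAST_MODEL = "fast-default"        # cheap, low latency
    REASONING_MODEL = "deep-reasoner"  # slower, better on hard problems

    HARD_HINTS = ("prove", "debug", "step by step", "analyze", "diagnose")

    def route(query: str) -> str:
        # Escalate on length or trigger words; a real router would be learned.
        looks_hard = len(query) > 400 or any(h in query.lower() for h in HARD_HINTS)
        return REASONING_MODEL if looks_hard else FAST_MODEL

    print(route("What's the capital of France?"))           # fast-default
    print(route("Debug this deadlock step by step for me")) # deep-reasoner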
Auren Hoffman (15:32.046) And some of the API costs have gone down pretty dramatically too.
Jason Kwon (15:35.364) Yeah, yeah, that's right. And so I think all of these things will produce lots of new interesting use cases. Just like when you went from GPT-3 to GPT-4, there were a bunch of new software applications and new startups formed around these tools that couldn't have existed, I think, during the GPT-3 timeframe, because it just wasn't deep enough, wasn't broad enough, and it cost too much for the things that you were getting out of the model.
Auren Hoffman (16:05.368) You know, there are certain cases where ChatGPT still just hallucinates so much, and it's so frustrating for users. I mean, I understand broadly why it happens, but why is it so hard to fix?
Jason Kwon (16:27.908) Well, I think that one of the reasons is user intent itself is ambiguous sometimes. Yeah. And so, you know, I could ask it: describe for me the principle of free speech. And it could do a pretty good job there, but what was my intent? Did I want a full-on legal analysis? Did I want something casual? Did I want a joke?
Auren Hoffman (16:34.422) Yep, the questions are hard.
Jason Kwon (16:57.816) Was I being sarcastic?
Auren Hoffman (16:59.242) Although even with the intent, you probably don't want it to tell you a factual error or something, right? Yeah.
Jason Kwon (17:04.996) This is true. But I think it still triggers the question of: should I be engaging system one or system two type thinking? And it's a little bit like when you talk to a person: there are more errors when they do system one thinking. And this deliberative kind of reasoning does reduce the error rate. And then there's also the fact that the model has been trained to go look up information, so the inferences are not ungrounded anymore
Auren Hoffman (17:20.814) Yeah.
Jason Kwon (17:32.066) when there is the ability to look up information and reason from that. And so the training on that is also dependent on understanding user intent signals. And this is why I think you see variation in responses in terms of user experience, where some people say, this model is great, and other people say, it's not working for me. I think it probably has to do with how they are
asking questions or prompting it, and what their pattern of usage was before versus now, in relation to the model capabilities. And so if you were kind of using it as a coach before and it did very well with ambiguous questions, in this latest iteration it might do exceptionally better at requests that are very well-defined. But when they're very, very vague, it might actually do something different. And so it's just kind of
a disjunct between expectations and how the model is actually processing what you want to do. And I think all these things will continue to improve, because this is the first time we've shipped something like this in this format. And this is probably, as I think Sam likes to say, whenever you do new things, this is as bad as it's going to be. It's only going to get better from here. We're just going to learn more in terms of user intent signals and all of that. And the training will continue.
Auren Hoffman (18:55.722) One of the things that is so weird about this whole AI moment is that, as a user, my expectations for what it should do keep going up very, very fast. In fact, my expectations go up as fast as it actually is improving. Whereas normally when I interact with a product, whether it's a car or, you know, my phone or
whatever it is, even a piece of software, my expectations of that product grow very, very slowly over time. And so it's really hard for the AI to ever hit my expectations, because my expectations keep growing. Is that just a common, weird human thing?
Jason Kwon (19:44.58) I think so actually. And it might be accentuated in this case because there are enough interactions that you have that seem like interacting with human intelligence, right? And then you start to expect that kind of same reliability that you're accustomed to. And then when it kind of goofs, you're disappointed. And I think the...
Auren Hoffman (19:58.242) Right, yeah.
Jason Kwon (20:12.088) pace of innovation too has this kind of conditioning effect, where in a few years, going from GPT-3 to where we are now, back during the GPT-3 days, as much as we're talking about hallucinations now, there were many more back then. It didn't have the ability to look up information or process it, didn't have the ability to reason. But even back then it felt kind of interesting and new, because you could have this naturalistic conversation with a chatbot that
wouldn't veer off course like the prior versions of chatbots we had seen before. And then seeing stuff continue to progress really rapidly, you kind of expect things to change and improve really quickly. And so then I think your human mind just extrapolates to broad progress. And then when you see pockets of capabilities that are not progressing at the same pace, I think you end up disappointed. Now, the flip side of this is, this is actually why I'm one of the people that believe that
this technology actually has the potential, it's not predetermined, because I think we have agency in this outcome, but it could create a lot of new jobs and new types of work. So we were just talking earlier about how you can get vastly different results based on how you are interacting with the system. That's a skill in and of itself that's going to develop. Some people have
gotten pretty good at understanding, you know, where the limits of the models are in terms of reliability, or where it's likely to hallucinate versus not, and how to minimize that outcome. And other people understand how to apply it to various disciplines like STEM fields or healthcare and get really, really good results, or use it to perform advanced analysis but then understand how to look for the particular areas where
they need to do the refinement, because they understand a little bit about where things start to become a little less reliable in terms of the results it's giving you. And so those are all skill elements that come from this tool. And if it was perfect, you wouldn't have that. But because it is imperfect, it is another tool that has, I think, proficiency associated with it. And that itself, I think, is actually quite promising.
Jason Kwon (22:40.462) It could be a very high-skilled type of work that evolves.
Auren Hoffman (22:46.538) One thing I use ChatGPT for all the time is random tech support things. Like, this isn't working, or this has a bug, or I'm having trouble with this software or my phone, whatever it might be. And it's quite good at most of those things. But the one area of tech support where I feel like it's the worst is when I ask it about itself. Like, how do I use ChatGPT itself? And you would just think that would be...
I often have to go to the actual ChatGPT tech support, which may have been an AI interacting with me, I don't know, but it's a separate interface where I'm often sending them an email and they're sending me an email back on, you know, Zendesk or whatever they use there. Why can't that be incorporated into the product?
Jason Kwon (23:33.476) Yeah, so this is another good question. And this is, I think, another instance where the intuitive extrapolation kind of leads you astray a little bit. Because what's really going on under the hood is the surface area of ChatGPT is changing very rapidly. And the information, the content around that, is not that widely dispersed online. And partly because
Auren Hoffman (23:51.808) Yeah, that's true. Okay, got it.
Jason Kwon (24:02.766) the rate of change is so high, and then the distribution of information because it's relatively new isn't that wide. So when...
Auren Hoffman (24:10.722) And are you worried about training on your own internal stuff? And then someone could somehow attack it and get some weird information back or something.
Jason Kwon (24:18.82) I think it's less that and more just, you know, the product features go out and we're constantly iterating. And so, you know, the documentation and help might trail a little bit behind. And then the discussion around that online is also going to be trailing too. And so when you're training these models, you know, everybody's heard about the knowledge cutoff, that's not going to be part of the training mix, because of the knowledge cutoff. And then
Auren Hoffman (24:25.932) Yeah, okay.
Jason Kwon (24:45.688) getting the model to a place where it can handle arbitrary questions about itself and reliably go to the help site, that itself is also a training challenge, right? Because you have to get users to kind of interact, to understand what they're asking.
Auren Hoffman (24:56.504) We'll be right back.
Right. So because it's changing so fast, it's just a much harder problem. Whereas the problem I'm experiencing on my iPhone, somebody else probably experienced three years ago or something. Okay, that actually makes a lot of sense to me. One of the things people are talking about is this kind of competition with China. And, you know, the Chinese models are almost certainly training on
Jason Kwon (25:11.106) A more stable problem, yeah. Yeah.
Auren Hoffman (25:29.742) every copyrighted thing available. At least that's the rumor; I don't know, I'm not in there to go see that. And obviously there are different rules about what US companies can do, what they can't do, and what they're allowed to do, and whether you can go inside the firewall or outside, et cetera. How are we going to have to evolve to stay competitive?
Jason Kwon (25:54.414) Yeah, like you, I don't know for sure what they're doing. But I think if they are training on more than we have access to, that probably does give some kind of edge, at least for pre-training, where the volume of data still matters a lot, because it enables you to get the most out of the compute that you're applying. And the quality of the base model that comes out of pre-training still matters, even
though there's a lot of work that now happens on the post-training side with reinforcement learning applied to have the model perform better in terms of responses to human instructions and commands and particular types of knowledge tasks. I think the other thing that's kind of interesting here is that if you play out this dynamic where one part of the world trains less,
because of copyright, and one part of the world trains without any guardrails because they don't really share that perspective, the natural outcome of this competitive equilibrium is that the party that can train on the most will train on the most. There's nothing stopping them. And so if AI keeps taking off, there's actually a competitive dynamic and advantage that occurs, perhaps in content, in that it's going to be less based on whether you can copyright a thing
and more based on how deeply, widely, and quickly you can generate new content with the assistance of AI. And so this could take many forms, right? It could be you have a UGC platform that is just able to generate content at a pace we have not seen before, and that can quickly grow because of that. It could mean
Auren Hoffman (27:27.384) Like creating synthetic data or other types of things, non-human-created data.
Auren Hoffman (27:43.276) Yeah.
Jason Kwon (27:47.404) something like there's a form of entertainment where there's a core piece of content that's created and then you're able to generate multiple variations with user engagement, fan engagement, what have you, with the assistance of AI. And that itself is potentially a very powerful economic opportunity if you're in the content industry. So then I think it becomes very important who is leading the AI competition because that will impact whether copyright IP has value or it doesn't.
Because in the US, training on copyrighted content is considered fair use, but not if you completely repeat it. But I don't think the PRC labs think about it the same way. So if the ecosystem around AI is centered more on the US lab approach, you will have this kind of competitive equilibrium that still has guardrails around the technology when it comes to original content.
Auren Hoffman (28:26.796) Yeah.
Jason Kwon (28:46.02) but if the ecosystem shifts the other way, then you're going to have basically a tech stack that and market structure around this that's probably oriented towards driving down the value of copyright IP. So I think this is the non-obvious part of this kind of geopolitical competition when it comes to the copyright question.
Auren Hoffman (29:06.926) And obviously there are all these internal docs that one could potentially train on if you get access to them. I presume Google or Microsoft already has access to so many of these people's internal docs and internal emails and internal communications and stuff like that. And they may or may not have the right to train on it. But, you know, if you auth it in, you might have some right to do it. How do you think all that plays out?
Jason Kwon (29:36.834) Yeah, it's a good question.
So companies that have had a chance to build up large data asset pools, let's call them that for lack of a better word, they by default would seem advantaged. And they probably are in some sense. And I think there are going to be a bunch of interesting competition questions that come about over the next several years.
Auren Hoffman (30:06.318) There's also probably just, now you have to deal with all the internal lawyers debating it and screaming at each other too.
Jason Kwon (30:10.404) Yeah, yeah, it's like a whole new layer of complexity, and perhaps compliance, that people don't necessarily welcome, but it could appear. And if you remember, about half a decade ago there were all these questions about interoperability. Can you port the data from one system to another? And that is a way to trigger more competition, partly because it
kind of reduces the power of network effects and switching costs. And you might see some kind of push similar to that around the data question.
Auren Hoffman (30:49.486) I should be able to bring my data with me if I choose to in some sort of way.
Jason Kwon (30:54.468) Or it could be that if you have a large data asset pool and you are using it in a particular way for AI training, maybe you're required to share some of it. That could be an outcome. I don't know. It just, I think, depends on how some of the competition theory evolves over the next decade or so. I think there's also a very interesting question here too about there are pools of data where
you know, companies have them, but it's customer data. And it seems like an asset, but it may not be, because there's a massive trade-off there when it comes to questions about customer trust, user trust, and whatnot. And so I think it'll be interesting to see how that is conceptualized. Right? So I think that,
before, again, going back to half a decade or a decade ago, those concerns crystallized around concepts like privacy. I'm not sure that's the right mechanism for something like this, where it's not that your data or information is going to be revealed, or that it's going to be used to ad-target you or something like that. It's that it's
potentially being trained on, which has more, I think, to do with this question of what your relationship with the company is in terms of trust. And this might just actually be solved by the market, because if you're a company that decides to use that data in a particular way, there's probably a cost that you're going to pay when it comes to customer relationships. And maybe you decide to incur that cost.
In other cases, people running companies may value that so much that they would prefer not to pay that cost. And I think that it may not really require much more than market forces. We'll see.
Auren Hoffman (33:05.142) One thing that surprises me is that a lot of these AI tools are not native to the apps. So if I'm in Google Sheets, I want to do something in Google Sheets; I'm already there. Or I'm in Excel or something. And it's very hard: if I want to use the most powerful Gemini thing,
it's not in there. I have to go somewhere else, copy my sheet, and say, hey, how do I do this thing? I'm trying to do this weird VLOOKUP thing. And then it tells me, and then I have to go back, and afterwards I have to re-describe it. Why is it so hard to get this into the user experience people are already used to?
Jason Kwon (33:35.556) Yeah.
Jason Kwon (33:58.436) So this is a little bit of speculation on my part, because we are not necessarily dealing with those legacy products. I would think it's probably a combination of things. There's probably some shipping-the-org-chart going on: that Sheets product you're talking about, or some analog to it in some other company, there's a PM that owns that, and they like their product a certain way.
That's not necessarily the same PM that does AI integrations or AI innovation. So you get something that is a little bit kludgy. The other could be you have users who have been habituated to use it a particular way and you need to build some kind of transition and people are working out how to do that smoothly. And I think that this is why you see in various
AI applications, this whole notion that there's such a thing as generative AI search. You would think that really, at the end of the day, people want to just get to the answer. And I think that is actually what people want a lot of the time, but a lot of people are used to the search paradigm. So you kind of have to build this bridge. And I think that is what you see, for example, with applications like Perplexity.
And it's a little bit of the motivation for why we have a search feature inside of ChatGPT too. We recognize that for certain types of queries or interactions, people do want that old experience, because they're used to it. But I think that over time, what you'll probably see is evolution of all this stuff. The 16-year-olds that are playing with this technology are not really using search; they're using AI. Those behavioral patterns will change over time, and then you'll probably see software interfaces
evolve in accordance with that.
Auren Hoffman (35:56.174) The chat interface is very good for many things, right? If I'm talking to a human and I'm trying to get information, it's that type of thing, whether it's text I'm typing in or voice. But there are other UIs that are also really good. Like the spreadsheet is an amazing UI that has been around for, I don't know, 40-plus years now,
and has really kind of dominated the way we think about things. Our whole brains are wired to think about spreadsheets in an interesting way today. And it'd be great if there were other ways to benefit from AI besides that kind of chat-interface way of thinking about things.
Jason Kwon (36:45.166) Yeah, so I think what would be interesting is if you're able to respond with this. So just to riff off what you're saying: you have a spreadsheet, and you're able to respond not with chat but with another spreadsheet. And so maybe it'll reply back and say, what do you want to do with these two data sets? Do you want to join them, and, to use SQL language, inner join, outer join, what have you? Do you want to remove duplicative rows?
All of that. And I think those could be interesting avenues for people working on this, which is: okay, we certainly have the ability to chat through the data now with AI, but is there a way you can use different formats of inputs and prompting? And you see a little bit of this in that you can upload files now and interact with them.
But what are some new form factors of interaction in the prompting itself that might take advantage of the way in which the LLMs operate? And I think we're still in early days. Right now we're still pushing the capabilities of the models. But there are all these entrepreneurs fiddling with the applications, and that has been going on for only a couple of years. So we're still in early days on that.
Auren Hoffman (38:15.53) One thing is, in the Google search paradigm, there are all these people whose job it is to game the Google search results to be in their interest, right? SEO, all this other stuff. And today there are already so many people whose job it is to game the AI outputs to be in their favor.
And I imagine it's already creating this very interesting cat-and-mouse game, like Google's been used to for the last 25 years. From your side, how is that playing out?
Jason Kwon (38:54.116) So I don't know that we track this really that closely. I think mostly we're focused on training the best models, pushing the capabilities, and then, in the search experience, having it try to return relevant results that are then reasoned through by our reasoning engine,
and then continuously refining based on user signals. But to answer your question from first principles a little bit: in the long run, and this might be speculative, but I think if reasoning and agentic capabilities continue to advance, this gameability of search results,
the way that people do it now at least, might just become really, really difficult. It might be a new type of gaming. But if users want to not have results gamed, and if that's what model providers also want, it should be easier to actually avoid this, because you can now actually tell the system: find me something objective, avoid sites that look like SEO, analyze the results you're getting back
to understand if there might be a financial bias or motivation, and get me back 10 results from these types of sources, publications, independent bloggers, what have you, and synthesize them. Don't give me just one. Actually go through and then synthesize the results so I'm getting something that is net new. And again, there's skill involved here in terms of how to do this, but there's also a desire that you don't want
the gaming to occur. And so that's a capability that's now at people's fingertips that didn't necessarily exist in the search paradigm, where you're kind of restricted by keywords and having to filter stuff out. I think that's a new thing that people will have to contend with if they're really trying to game results. If there's a way to do it, it won't be based on the old way.
Auren Hoffman (41:12.654) Like, I have a human assistant, and I'll often ask her to get me a hotel, or get me a restaurant reservation, or buy me a charger for my phone or something like that. And she does the research, and I trust her with being, in some ways, my agent. I assume at some point, for some of those queries, I'll be able to ask AI to do that for me.
And I want to trust it with the results. And I imagine there'll be a lot of folks in the world who'd be incredibly incentivized to try to influence those results, to get me to go to a different hotel or different restaurant or different charger. So I imagine these games will go on forever.
Jason Kwon (41:58.776) Yeah, so I think the thing that's interesting, and this goes back to what we were talking about with being satisfied and dissatisfied and people continuing to adapt. We're kind of answering the question in a different context, but I think people always figure out a new type of game to play. And it might not be, in this case, that it's a system of gaming that
is necessarily misaligned with what the user wants. And so it could be a situation where the marketers that succeed here are actually doing the best job of matching the products that the user is actually intending to look for.
Auren Hoffman (42:32.95) Sure. If it gets me the restaurant I want, I'm happy.
Jason Kwon (42:58.756) And what was gaming at one point now looks more like optimization. And so that would be a very interesting way for this to develop. But we'll see.
Auren Hoffman (43:16.374) One thing that still seems so hard, for whatever reason, in the non-AI world is good content recommendations. I show up on Amazon, and even though it has a history of like 30 years of what I bought, it still doesn't tell me good things to buy. I show up on Spotify, and it still doesn't give me good music to listen to. I show up on Netflix, and the recommendations are just terrible.
Why is that, in the new AI world? Maybe they haven't taken advantage of it, or maybe they don't want to put a million tokens in memory, but why is this so hard?
Jason Kwon (44:03.342) Yeah, it's hard for me to comment on people's dissatisfaction with Spotify and Netflix algorithms. But in our own work on personalization, because that happens inside of ChatGPT too, one of the things we've found is that what people want in terms of signals is not necessarily one-to-one with what they actually want.
Yeah, yeah. And it's a little bit of an art to figure out the right ways to have affordances inside the product that let the user give you reliable signals for what they actually did want. We have this thumbs up, thumbs down, but that's very lossy, I think. Because why did they thumbs up?
Auren Hoffman (44:32.216) Like what you might want to click on may not be what you actually want or something. Okay. Hmm.
Jason Kwon (45:01.412) Because the model gassed you up? Was it a good answer? Or were you just feeling generous that day? So that, I think, is one part of it. And then the second part of it is, even when the model gives you what you say you wanted, and this is especially true in consumer tech, what you say you wanted is not necessarily the same thing as what you actually wanted, because people preference-falsify sometimes. And I think those things are
Auren Hoffman (45:04.032) Yeah. Yeah.
Jason Kwon (45:30.532) you know, probably more like permanent features of trying to understand what people want personalized things, you know, and how to personalize things to them. And that's probably why it's an enduring challenge.
Auren Hoffman (45:43.096) Those are possible; it's just not in the DNA of those companies. Like, I think the ads on Meta are better, if you think of those as content-recommendation-type things. And I assume it's just more in the DNA there to use AI to help with these things than it is at Netflix today. And maybe that will change over time.
Jason Kwon (46:07.864) Yeah, perhaps.
Auren Hoffman (46:10.398) As we get into this global competition for AI, it kind of harks back to the forties and Operation Paperclip, bringing those thousand German rocket scientists to the US after World War II to keep them out of Soviet hands. Should we do something similar for these amazing AI engineers? If you're sitting in
some random place, you're sitting in Hungary or something, and you may have a lot of opportunities, should we get them to the US under any scenario?
Jason Kwon (46:53.592) Yeah, so the short answer is, I think, yeah, we should be figuring out a way to have as much AI talent come to the United States as possible, if we want to be a leader in this technology, because talent is one of the key ingredients in terms of continuing innovation.
Auren Hoffman (47:14.094) By the way, do you agree there's a relatively small number of just ultra-talented people in the AI field?
Jason Kwon (47:22.5) I do. Right now, that's certainly the condition. Something like that, yeah. I think that's the condition today. But I think as more talent comes in, as the industry matures and more people go into the field, that'll change too, again, market correction and all that. But I do think that we should be trying to find ways to collect more talent here. Assuming, of course, the objective is for the United States to
Auren Hoffman (47:24.844) Let's say under 10,000 people or some number like that, yeah. Or maybe more than that. Yeah, that might change.
Jason Kwon (47:51.672) continue to lead an innovation here. And I think the other part of this, though, too, is how do you bring the talent over here so that it stays here? But the talent settles here. Yeah. Right.
Auren Hoffman (48:07.704) They don't just come for a few years and go back to where they came from, or go to the highest bidder or something.
Jason Kwon (48:13.412) and becomes part of the fabric of society. And I think that's important too, because it wasn't just that Paperclip brought a bunch of scientists over; those scientists came over and they became American citizens. Right.
Auren Hoffman (48:25.026) Yeah, they raised their family here, their kids are here, they stayed here, this was the best place for them to go.
Jason Kwon (48:30.04) Yeah, yeah. So I think it's not just that the talent is very important and we should definitely work to bring the talent here. There's a values question too: how do you make sure that the two evolve together and complement each other? Because I think that's what also creates durable, you know, sort of talent accumulation.
Auren Hoffman (48:51.63) But one of the things the US has going for it is that it's good at assimilating people. It's got great schools. It's great for kids. It's a very safe place, right? There's a lot it has going for it, where if you show up to some other random place, you may not have that, even if they're willing to pay you three times as much. I don't know, I'm probably not willing to move to North Korea or something like that.
Jason Kwon (49:18.436) Yeah, yeah, with you there on that one.
Auren Hoffman (49:21.806) Where else should we be thinking? If we're really trying to make sure the US is by far the dominant country for AI, what else should we be thinking about from a US policy perspective?
Jason Kwon (49:41.006) Yeah, so I really love this question. People ask me this sometimes, and I always say the same thing, which is: we could have a research advantage, and we could have a technology innovation advantage. But these advantages are measured in terms of time. You might be a year ahead, two years ahead, 18 months ahead, whatever. And that time element
only matters so much on its own; you have to actually be broadly adopting the technology too. You could advance the science, but if your people don't use it, and other people are behind, but even though they're behind, their population uses it, embraces it, gets all the leverage out of it, what have you done with your lead, really? And so I think that from a policy perspective, and I think the current administration is really trying hard to do this, is how do you
come up with ways to increase the pace of adoption and increase the efficiency of adoption and increase the human capital around the adoption so that people understand how to use the technology well. And I think then you get the most out of not only the technology, but actually the research lead that you have. And yeah.
Auren Hoffman (50:59.374) It's not that hard to use, though. It's not like AI is hard to use. It's kind of like the phone: it's not like there was some government program to teach you how to use a smartphone. Everyone just figured it out pretty fast. And I assume with AI it's similar. There are so many benefits so quickly that once you start, and once you see other people using it, you just jump in.
I know a bunch of engineers who even six months ago weren't going to use AI to do their coding, and they've all switched over the last six months. A lot of marketers are similar, and a lot of artists I know are similar. So many people start by seeing other people using it, and once it's there, they start moving. So it's not like we need the government saying, hey, use this tool, it's amazing, right?
Jason Kwon (51:49.102) Well, outside of government, maybe not so much, right? Because in the private sector, they kind of have the market incentive to push them to adopt technology as soon as somebody else in their field does, right? At the governmental level, you still have, I think, things like old procurement processes that are a little bit slower, budgets that are written for a particular type of technology or tool that don't necessarily fit the new thing coming. Yeah.
Auren Hoffman (52:06.83) Yeah
Auren Hoffman (52:13.452) That was even true when the iPhone came out. It took probably more than five years before anyone in the government could buy it; they were still using the older phones and stuff like that. That just happened, right?
Jason Kwon (52:22.564) Right. Yeah. So I think streamlining all of that, and, you know, easier said than done, right? But I think figuring out how to shrink the gap between when the technology hits the market and when a particular, you know, government agency or constituency gets the technology in and diffuses it through the organization. And along with
the ability for people to make the most out of it in relation to their actual day-to-day workflows. Because it's one thing to know how to use ChatGPT as a coach to help you learn things or do some research online and whatnot. It's another thing to say, okay, I'm going to deeply integrate it into my day-to-day work and, for example, get rid of some of this paperwork that I do. And it also means there's probably a turnover of other legacy systems that you need
to make it even more high-leverage. And so an example of this is we have connectors inside of ChatGPT that plug into things like Gmail, Slack, Google Docs, all of that. But if you're an organization running on Lotus 1-2-3, for example, that's going to limit the leverage you get from ChatGPT. And so there's a transformational aspect to this. We'll see.
Auren Hoffman (53:42.402) Yeah, and the only one still using that is the government, right? Now, a couple of personal questions. What is something you personally wish AI could help you with, but it's just not there yet?
Jason Kwon (53:58.062) Yeah. So ambient learning is probably the thing that comes to mind for me. And it comes out of this mild frustration dynamic we were talking about before. So I ask it questions, I have it do things for me, but I wish there was a way for me to just work, and there's some kind of observability of my work by ChatGPT, and then it prompts me every so often.
Auren Hoffman (54:22.392) Yeah.
Jason Kwon (54:26.372) You seem to be doing this type of work, and this is how I think you do it, and based on observing you, this is what it seems like a good result is and what a bad result is. And then I'm able to give it feedback. And so it's a little bit like if I were to bring on a mentee and they're watching me, and then they come to me and say, okay, it looks like I'm learning these things from you, am I learning the right lessons? And I can kind of steer them. And then it gets to a point where it's able to do certain things, right?
Auren Hoffman (54:44.099) Yeah.
Auren Hoffman (54:54.678) Some of those things.
Jason Kwon (54:56.56) And then it just pops up one day and says, okay, I think, based on watching you work, this is one thing that you do, and I think I know how to do it. Now I can just do it for you, if you want me to, from now on. Right? And I think once we hit that, that's another step change in the productivity and empowerment that you get from the tool.
Auren Hoffman (55:02.89) It's so cool.
Auren Hoffman (55:14.444) Yeah, that's super cool. All right, two questions we ask all of our guests. What is a conspiracy theory that you believe?
Jason Kwon (55:21.474) Yeah. So, I'm cheating a little bit, because I think you asked me this question once a long time ago, but it wasn't in a group setting and I didn't get a chance to go, so I already have the answer in my head. It's more humorous than serious. So, I'm a basketball fan, and there were these playoffs from way back in the early 2000s between the Kings and the Lakers. And there was this game six, and there was some crazy statistic where, in like the third or fourth quarter, the free-throw disparity was something like three to one.
And the Lakers squeaked out a one-point victory or something like that. And then a few years later, it was revealed that the referee calling the game was gambling on games, or something like that. It makes you scratch your head a little bit. And so that's one I kind of think about sometimes.
Auren Hoffman (56:08.526) I think for sure there's somebody often tipping the scales toward a team like the Lakers. The NBA front office is tipping the scales to the Lakers or those types of teams. They just want the Lakers in the finals and not the Kings, because it's going to be better for every team; they're just going to get more people watching, et cetera.
Jason Kwon (56:38.126) Yeah, although that Kings team, I gotta say, was a lot of fun to watch. So, yeah, sometimes you wonder what might have happened. Yeah.
Auren Hoffman (56:50.208) Okay, I think that's an actual legitimate conspiracy. I like that conspiracy. All right, last question we ask all of our guests: what conventional wisdom or advice do you think is generally bad advice?
Jason Kwon (57:01.912) Yeah. So I think it's probably the advice that's something along the lines of: do what you love, or follow your passions. And I think it's because the assumption somewhere in this statement, at least the way it lands with me, is that intrinsic motivation comes from some kind of embedded part of your nature. It's just kind of who you are. And so you must love what you love, and therefore you need to go with it. Right?
But I think of it more like it should be a mental game of creating intrinsic motivation from deciding you just want to do something. And that, for me, is how I think of the essence of agency. And so what I tell people instead is: love agency and figure out how to have more of it in your life.
And I also think that's part of this other thing some people say, which is: just love to learn and grow. Because engaging in your life that way is just an extension of, if I want to learn more, that enables me to have more agency over my life. And that's something I'm consciously doing; it's not something that's just pursuing me. You know, self-determination. I just get to decide.
Auren Hoffman (58:15.758) What do you mean by agency?
Jason Kwon (58:23.682) Deciding, these are the things that I love; it's not something that just happened to me. And it can go all the way to reinventing your identity, to taking something that you don't like and actually figuring out how to like it so you can get it done. And so, I mean, other people might call it discipline. I don't know. But that's what I think about.
Auren Hoffman (58:29.537) Yeah.
Auren Hoffman (58:45.944) It is interesting, because sometimes people say, I was born to do this. I'm sure there are some people who are born to do something, like they're the prince of a country and they're eventually going to become the king. But most of us, I don't think we're born to do anything in particular. We get to choose where we want to go in life.
Jason Kwon (59:08.45) Yeah, I think that's right. And I think that's the beautiful part of life: you get to choose. And that's really what I'm getting at. You just get to decide who you are and what you love and what you're going to follow. And it doesn't have to be something where you feel like, this has been decided for me and therefore I must do it. I think that's a narrative we sometimes tell ourselves.
And that could be good, because that could be a source of motivation. But it's also good to realize that it is a narrative you tell yourself, and you can decide to just rewrite it one day if you want.
Auren Hoffman (59:45.07) All right, that was actually very beautiful. It's a great way to end. Thank you, Jason Kwon, for joining us on World of DaaS. I follow you at Jason Kwon on X; I definitely encourage our listeners to follow you there. This has been super interesting and a ton of fun, so I really appreciate it.
Jason Kwon (01:00:00.548) Thanks, Auren. Great to be here.