FULL TRANSCRIPT
Good morning. Welcome to the briefing on Anthropic.
So why don't we get started. My name is Eric Cantor. You might have noticed Slava Rubin is not here. I had to substitute in for him at the last minute; he had some last-minute travel come up. I will do my best Slava impersonation. I'm here with a partner of ours, Jan-Erik Asplund, co-founder of Sacra, the best private market research company.
At Vincent, we're also helping investors navigate private markets. And this name Anthropic that we're going to talk about today has a ton of interest, both from our audience and just generally in the world, as this LLM arms race seems to accelerate every day, including over the weekend, where we got new information that we're going to try to update you on on this call. So why don't we start with a quick overview of Anthropic? What does the business look like? What are some of the
pros and cons? What does revenue look like? Fundraising history. Then I'm going to drill down into some of the areas that investors raised as questions as we got ready for this briefing. And at the end, we'll open it up to Q&A from everybody who's currently on the call. Feel free to drop your questions into the Q&A button right beneath you on the Zoom screen at any time, and we'll try to get to as many as we can. So let's dig on in. Before we start, just a reminder: this is not
financial advice; you should consult with your own experts about your own situation before making any investment decisions. With that in mind, let's dig in. I'll have Jan-Erik start and just give us an overview of the company that we're going to be talking about today. Yeah. So Anthropic was founded by Dario Amodei, his sister, and a bunch of other former OpenAI employees. So this is kind of
core to the origin story. They were at OpenAI for a lot of the late-stage advancements, pre-ChatGPT, in building AI models. And they, as often happens, had their own ideas about how to do this in a different way. So they splintered off and started their own company, Anthropic. And the initial framing was that they were going to do AI safely, with the implication that OpenAI was
moving too fast and Anthropic would do it in a more human-aligned way, which is an interesting framing that I think is still relevant to the company in some ways, but worth looking into. They have a chatbot just like ChatGPT; it's called Claude. Then they have a family of models, similar to OpenAI, that developers can use via API, all named Claude and tiered in a sort of poetic form: Opus, Sonnet, Haiku.
They're at about a billion in run-rate revenue, or approaching that right now, and growing really fast from about $150 million at the end of 2023.
Revenue-wise, right? I mean, just before you touch on it: is the business based in the US or is it global? It's based in the US. The founders are Italian-American, I think, Dario at least. But no, they're not one of the French AI companies, of which there are a few. Got it. Cool. Revenue-wise. So this is an interesting kind of flip: whereas OpenAI is very heavily
concentrated in consumer revenue via subscriptions to ChatGPT, $20 a month or now $200 a month, Anthropic's revenue is very heavily tilted towards their API. So developers make an account with Anthropic, get an API key, put it into their own applications, and then get Claude's responses over the API, which they can use in their app. So that is...
from some leaked documents reported by CNBC, over 80% of revenue is via API, and the subscriptions to their chatbot are maybe 15% of revenue, something like that. Which, you know, is also something we'll talk about later: how their chat interface has sort of not caught on the way that ChatGPT and others have.
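To make that API-heavy revenue mix concrete, here is a minimal sketch of the developer integration pattern being described. The endpoint, headers, and request shape follow Anthropic's public Messages API; the model name and prompt are just illustrative, and the network call itself is omitted so the payload can be inspected without an API key.

```python
# Minimal sketch of an Anthropic Messages API request, the integration
# pattern behind the ~80% API share of revenue discussed above.
# Endpoint, header names, and body shape follow Anthropic's public docs;
# the actual HTTP call is left out so this can be inspected offline.
import json
import os

API_URL = "https://api.anthropic.com/v1/messages"

def build_request(prompt: str, model: str = "claude-3-5-sonnet-20241022"):
    """Build the headers and JSON body for a single Claude message call."""
    headers = {
        "x-api-key": os.environ.get("ANTHROPIC_API_KEY", "<your-key>"),
        "anthropic-version": "2023-06-01",  # required versioning header
        "content-type": "application/json",
    }
    body = {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, body

headers, body = build_request("Summarize this support ticket in one line.")
print(json.dumps(body, indent=2))
```

A developer would POST that body to `API_URL` with those headers; metered usage of exactly this kind is what the "over 80% of revenue" figure refers to.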
Funding-wise, it's not OpenAI, but they have raised a lot of money, roughly $15 billion at this point, including over the weekend, when we just found out about Google investing another billion. So the main backers here are Amazon, which has really dominated the cap table, and Google, along with a bunch of other big corporates.
Lightspeed is leading their newest round, which is at a $60 billion valuation.
Sweet. Yeah, so as of that round, Anthropic is actually at a 60x revenue multiple, the top gray circle on the left there, which definitely raises eyebrows. It's 60x, so much higher than, for example, OpenAI at 30, and obviously a lot of the other big players in AI, who are sort of in the 10 to 15 range in terms of revenue multiple. So something interesting to dig into as well.
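For reference, the multiples being quoted are just valuation divided by revenue run rate; a quick sketch using the approximate figures from the call (rough figures, not audited numbers):

```python
# Rough revenue-multiple math using the approximate figures from the call:
# a ~$60B valuation on roughly $1B of run-rate revenue.
def revenue_multiple(valuation_usd: float, run_rate_usd: float) -> float:
    """Valuation divided by annualized revenue run rate."""
    return valuation_usd / run_rate_usd

anthropic = revenue_multiple(60e9, 1e9)
print(f"Anthropic: {anthropic:.0f}x")  # vs. roughly 30x cited for OpenAI
```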
So this is the fastest growing of these companies, all of which are raising extraordinary amounts of capital in very short timeframes and moving super fast. So, I mean, how do they stack up to the competition? Where are they ahead or behind? You mentioned the chat usage being lower than OpenAI's, but what else is notable in terms of how they compete? And maybe it's a good time to bring in the DeepSeek
news that was coming in at the end of last week as well. Yeah, I think the DeepSeek model and the open-sourcing of their method is something we should definitely talk about. But just to look directly at Anthropic first: they are essentially a clear number two behind OpenAI in terms of being used in production. And in 2024, they really gained a lot of market share,
mainly driven by the release of their newest model, Claude 3.5 Sonnet, in October 2024 (the naming schemes in the AI space are terrible). But that latest model essentially outperformed all of OpenAI's models at the time by a decent margin on both, one, quote-unquote quantitative benchmarks
and, two, qualitative user experience, with a huge shift towards using Claude's models over OpenAI's for specific use cases. One is language-based tasks, especially in a business or enterprise environment. A lot of this is very subjective, but people have experiences with Claude
and use Claude more because they feel the output is just better: in terms of writing long-form stuff, it feels more like an actual human being wrote it, that kind of thing. The other big one is coding, where, as a few slides back showed, we've really seen a huge explosion of Claude's models being used in coding use cases. So the top AI development environment for software engineers is Cursor.
They were at about $65 million ARR at the end of last year, and Claude is really the default model in Cursor. Cursor is a huge Claude user; they can barely handle the inference requirements of serving Claude. And not only on benchmarks but also in qualitative developer adoption, it seems that
we can really just say that Claude has outmatched OpenAI's GPT-4 series of models on coding and language tasks. What is less clear is that the newest OpenAI models now are better in some respects, but Claude does still seem to have a pretty meaningful advantage with a lot of people on these use cases. I mean, I have to say, just as a subjective end user who's used all these tools: I have done some product specs, not really coding,
and I've done a lot of writing and distillation with multiple models, let's say, not all of them. I do have to agree that Claude's latest model performs the best. But on the other hand, I do get the sense that anytime I see a feature or a little advantage from one of these, the other ones are going to copy it and jump ahead of it. I mean, this arms race is sort of characterized by one of them getting the lead and then the other one jumping ahead, and so on. So is it reasonable to think that just because Claude is performing best
as of December 31st, 2024, it will maintain that dominance? Yeah, it's very hard to say so with certainty. But like you, I expected this to be a very leapfrogging dynamic. And yet OpenAI didn't release a model that could beat it in terms of the way it responds to the user, the way it performs
at actually executing on coding tasks in an IDE. Instead they released this reasoning model, which is a totally different paradigm of model, one that spends way more time thinking at the time of inference. And so it can be better for different kinds of tasks, but, you know, the latest Claude Sonnet model, I would say, still has an advantage. I think the question of whether it can
maintain it is tricky. It does seem to me to be based on some secret sauce at Anthropic; it's not about having more GPUs. So yeah, it's unclear, but there does seem to be something significantly different about the way that they train models.
So we've talked a lot about what I guess I'd call the pure-play LLM startups, right? OpenAI, Anthropic. Maybe just touch on these other groupings, like Big Tech: Google, Amazon, Microsoft all have models and are formidable competition here. And then also a little bit more on what we're seeing from this DeepSeek entrant, and, you know, there may be many more of those as well. So just the competitive landscape at a higher level.
Cool. Yeah, if you look at the value chain of AI, and we have a slide on this here, you basically have the low-level infrastructure companies, whose GPUs are what's being used to do the training of the models and then also to serve them to people via API and chat. Nvidia is the dominant one here, but you have a bunch of upstarts doing their own approaches. Then you have the clouds,
most notably Azure at Microsoft right now, but also Google Cloud and AWS, which also do the inference side and have their own stockpiles of GPUs. So they are responsible for serving the models to users and monetizing them through their own products. And then you have the model companies: OpenAI, Anthropic, and, you know, Google has its own model here, Gemini.
And then you have the open-source approach, which previous to this weekend was mainly thought of as being Meta's domain, but now there's a Chinese competitor that has made a lot of waves. So yeah, maybe we just talk about DeepSeek briefly to start. What essentially happened with DeepSeek is that they released a model that they had
trained for, the figure is like $5.5 million or something like that, that is at parity with GPT-4, which cost maybe $100 million to train, though I don't have the exact number off the top of my head. So there's a lot of uncertainty around some of the numbers. The $5.5 million is not an all-in estimate. It's also likely that
DeepSeek's models were trained off of outputs from frontier models like GPT-4. So there's some element of DeepSeek couldn't have existed without these foundational models existing before them. However, whichever way you slice it, there does seem to be something like a 45x efficiency improvement in the DeepSeek approach, which they've open-sourced. So it does raise some questions about,
I would say, margins on inference, and what margins are likely to look like if you have a new approach that is 45 times as efficient. We think Anthropic and OpenAI get something like 75-80% margins on inference, and it's likely that if there's an approach that can get close to parity in a much cheaper way, then those margins could come down.
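To see why a 45x efficiency gain pressures those margins, here is the back-of-the-envelope arithmetic. The $10-per-million-tokens price, the 80% margin, and the 45x figure are all assumptions taken from the discussion, not disclosed financials:

```python
# Back-of-envelope inference-margin math. Assumptions (from the call, not
# disclosed financials): incumbents earn ~80% gross margin on inference,
# and the DeepSeek approach is ~45x more compute-efficient.
def breakeven_price(incumbent_price: float, incumbent_margin: float,
                    efficiency_gain: float) -> float:
    """Price at which a more efficient rival earns the same gross margin."""
    incumbent_cost = incumbent_price * (1 - incumbent_margin)
    rival_cost = incumbent_cost / efficiency_gain
    return rival_cost / (1 - incumbent_margin)

# If an incumbent charges a hypothetical $10 per million tokens at 80% margin,
# a 45x-more-efficient rival can match that margin at ~$0.22 per million.
rival = breakeven_price(10.0, 0.80, 45.0)
print(f"Rival matches margins at ${rival:.3f} per million tokens")
```

The point of the sketch: the rival's margin-preserving price is the incumbent's price divided by the efficiency gain, which is exactly the kind of price pressure being described.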
DeepSeek didn't have access to the GPUs that OpenAI, Anthropic, Amazon, Google, and Azure have been using. Supposedly, of course; there are claims now that they do have those GPUs. There are ways to structure around the ban on exports of chips, but it's unclear exactly what the situation is. But that's where they've landed in that open-source category, and they've really
frightened a lot of the people at Meta, from all accounts. And then, just to finish up, you have the top layer of apps, which is any consumer or business application that uses LLMs in some way. You have a few really big players here already: Glean, at a hundred million ARR, doing search for the enterprise. You have Harvey, more of a vertical approach, doing this for legal, and they're at like 50. And then you have the existing SaaS
product landscape incorporating AI: your Intercoms, your Notions. You're seeing a lot of these product-layer or application-layer companies cropping up. I'll stop there. Glean and Harvey are both in our backlog of names that we want to dig into in these briefings, so we will get to those, probably not today, but at some point in the next couple of months. But let me just spend a little more time on the DeepSeek thing, because I think it's on people's minds. We saw Nvidia stock knocked down like
10 or 12% this morning. We're getting a number of questions, and I'm going to try to blend them into one question so we don't get super stuck on it. So one question was: how does the emergence of cheap and competent open-source models like DeepSeek affect closed-source models' ability to make profit? The other question, which is similar: can you shed some light on why or how DeepSeek trained their model with a fraction of the compute compared to others? I think there's a question of how true that is, but maybe talk about that.
Also, is DeepSeek running inference with less compute than Anthropic and OpenAI? Does Anthropic have its own data centers, chips, capex, and how will Anthropic do if we have a shift in understanding of how expensive it is to run LLMs? So I guess all of that sums up to: how true do you think all these assertions are about DeepSeek? And if they are true, even to some degree, how does that change the unit economics for all of these players? Does it change the way that
this business will be monetized? Because one of the observations that we talked about in the OpenAI briefing a few weeks ago is how much capital is just being guzzled in by these guys. And maybe that number can go down; maybe that helps, I don't know. Or maybe everyone is now way too overpriced and raised way too much capital. Or maybe this thing coming out of China isn't what it says it is. So what are your thoughts on how this might transform this whole conversation?
Yeah, it's hard, because we're on like page one of the book and talking about what page 300 is going to be like. But to speak on things that we do know, and maybe try to lay out the landscape as I see it: yes, Nvidia's stock price went down basically because of the implication that you don't need as many GPUs, potentially, to do training and inference on
very capable reasoning LLMs. So I can see that makes sense. I don't get too over-indexed on the stock price, just because stock prices can be stock prices, and these things were probably richly valued already, and maybe people were looking for an excuse for it to come down. That said, there is an argument to be made, because what DeepSeek basically did was they were limited to these inferior GPUs, and
that's because of a Biden-era export ban on GPUs. So setting aside whether maybe they do have a stockpile, as Musk and other people claim, what appears to have happened is that they engineered a way to do model training on these inferior GPUs, basically inspired by the limitation, right? And that's something we've seen time and time again in technology: those kinds of
constraints lead to more creative engineering approaches. So there does definitely appear to be an element of that. American AI startups had unlimited capital, so it made sense to just scale up and buy as many GPUs as possible, whereas DeepSeek didn't have that, so they went a different route and found an approach that seems to be much more efficient. So that's meaningful. I think the flip side is sort of,
and the Microsoft CEO tweeted about this last night, seemingly up late tweeting, but there's another argument, which is that if you find that suddenly you can do far more work using far fewer chips, or far less powerful chips, then in a way that's quite bullish for Nvidia and maybe for all AI companies, because you suddenly reduce the barrier to entry
into building AI, and suddenly you can do so much more with it. It's like the car: suddenly being able to build a car on an assembly line was a huge unlock for the car industry, although people at the time thought being able to do it so cheaply would destroy the auto industry, right? So I'm still stuck, because I'm still reading about all this stuff, and I'm kind of stuck in between: yes, this is incredibly cheap and efficient; on the other hand,
that could possibly be the most bullish signal for the space as a whole. I'm just going to pause there and see where you want to go. Yeah. Yeah, I mean, I think your point stands, which is that it's potentially transformative, but it's too early to say; let's see where the chips fall. So putting that aside for a second: if you're Anthropic, who's got all this momentum, fastest growing company of the leaders, what are your...
What are you waking up at three in the morning stressed about if you're the CEO? I mean, what are the downsides for this business?
Yeah, there's definitely a big downside inherent in things like DeepSeek. If you go back a year, one could have foreseen that, from somewhere in the world, there would be an open-source attempt to basically drive the cost of inference down as close to zero as possible. Meta tried to do this, but it turns out it came from a hedge fund in China. So it was always likely to
be attempted, slash, happen. Does it fundamentally change Anthropic's business? Yeah, potentially. Potentially it does, because if you need an API to do some kind of AI stuff in your app as a developer, it suddenly changes the calculus around choosing Anthropic. So open source was always going to do this, but now you can have a fairly sophisticated
reasoning feature in your app, on your backend or frontend, where you can suddenly pay far less money to get the same kind of output. In a way, it's more aggressively competitive with OpenAI's reasoning model, and there are a lot of similarities between DeepSeek and OpenAI on that front, but it definitely has an impact on Anthropic. And I think the main thing is: can they differentiate their
product enough to make it still valuable to use? And I think that points to how there are LLM companies, there are models, and then there are product companies, and OpenAI has been moving very aggressively in the direction of a product company. ChatGPT was the first product, but they're launching many other things: video generators, and they just released Operator, which is sort of an agent that takes actions on your computer for you,
kind of suggesting that the way these companies are going to have long-term defensibility is not just through a superior LLM, which is probably more easily commoditized, but more through products. So I would think that for Anthropic, who are professing to be focused on business and enterprise, their main concern is: how do we keep businesses using Claude? How do we make it a great developer experience? How do we make it such that
you want to use Claude's API even though DeepSeek might be significantly cheaper, basically dirt cheap compared to Claude? I would say that's probably the main thing. I think the other question is: do we repurpose any of what's in DeepSeek's open-source paper? I mean, it would seem obvious that you try to learn from what they did.
Another question coming in, and we've seen a couple of questions on this, so I'll just raise it: how will Anthropic survive the copyright infringement case, which is likely to be won by the music owners, labels, and music publishers? That lawsuit sets it up for millions of dollars in damages and would then require a music licensing model that appeases all of the master owners, labels, and publishers whose music Anthropic allegedly used to make its LLM. So this is probably a broader question for a lot of the companies in the space:
is the way they get their content a vulnerability, either because of lawsuits or because they'll just have to pay up so much that it might not make sense to use some of that content? Yeah, it's interesting. I'm actually not super up on this, but what I've heard is that the music publishers are working with Anthropic on this. I think the incentive is there to, you know, collaborate
on things that are not super essential to model dominance, which would be song lyrics and stuff being included in outputs. So yeah, it seems to me that the gravity is shifting towards peaceful resolution, given how important AI is. I think there's a willingness there for Anthropic, and OpenAI has had similar things, and Perplexity has had similar things, and they've been
sort of worked out. I would be surprised if that turned out to be the existential risk. I mean, one of the themes that came up in the OpenAI discussion again was that there's so much money going in, right? More money to just get the content, and then the training and the chips, and, you know, the question of whether these companies can make money. And that was also tied to the question of whether the enterprise use cases, which are the bulk of the revenue, are what I think we called experimentation or production:
are they really adding value? And there's this agent idea, which ChatGPT just released something for, and I'm sure Anthropic will do the same. It seems like there's a lot of value placed on that, but it hasn't really been proven yet. So maybe with that, we shift into this conversation about the price, right? Because you talked about $60 billion being the current price. So maybe let's just look at that a little more deeply, in the context of
multiples, comps, and where that could go in the future, as we start to look at forming an investment thesis on this name. Yeah, cool. So, I mean, they just announced the $60 billion valuation round, a $2 billion raise. So it's like triple the valuation from their last public round. I will say,
that comes out to a price per share of something like $93 per share. What I've seen on the secondary market right now is roughly $50, so a bit more than half, and that could be somewhat lagging, but it is up from their last public round.
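The gap between the round price and the secondary price implies a discounted valuation; a quick sketch using the round-figure numbers mentioned ($93 primary, roughly $50 secondary, both approximate):

```python
# Implied-valuation math for the primary vs. secondary share prices
# mentioned on the call (~$93 round price, ~$50 on secondaries).
ROUND_VALUATION = 60e9
ROUND_PRICE = 93.0
SECONDARY_PRICE = 50.0

implied_valuation = ROUND_VALUATION * SECONDARY_PRICE / ROUND_PRICE
discount = 1 - SECONDARY_PRICE / ROUND_PRICE
print(f"Implied secondary valuation: ${implied_valuation / 1e9:.0f}B")
print(f"Discount to the round: {discount:.0%}")
```

On these figures the secondary market is pricing the company at roughly $32 billion, about a 46% discount to the round.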
Price-wise, it's interesting. The valuation has effectively almost tripled. They did this financing late last year where Amazon ended up doing the entire round, so there's still a fair amount of demand, which I think is important. But obviously the valuation has gotten significantly ahead of the actual revenue, which
is like one billion or a little bit under. So 60 billion is like a 60x, at least.
60x at least. So is it expensive? It's trading at 60 times forward revenues, basically. Yeah, exactly. So, you know, double OpenAI's multiple. I think it's definitely overpriced from a traditional standpoint for a company. But there's also an argument that all of this stuff is still undervalued from an opportunity standpoint,
given what could be coming and what the ramp of adoption from enterprises is going to look like, and looks like now. So price-wise, yeah, it looks expensive. I think it's definitely arguable it's expensive, especially if you don't see the value in Claude's sort of subjective output quality. So yeah, just to recap: $60 billion is about $93 per share.
It's a bit below that on the secondary market. Before we get into the different scenarios of where this company could go in the next three to five years, just philosophically: do you think that there's one winner? We talked about this in the OpenAI chat too. Are nine of these companies going to be gone and one's going to win, or is there room for multiple winners in the space?
Yeah, I definitely think so. You know, what I'm seeing a lot of in my personal experience, and from how other people use LLMs at big organizations, is a lot of multiplexing going on, where, like we talked about, Claude has proven to be a really good model for coding and language tasks. But now you have these new models: you have o1 from OpenAI, you have R1 from DeepSeek, which spend a lot more time thinking.
It's more expensive, and it takes a lot longer to get a response, but they're better at what they say they are: reasoning. So you're seeing differentiation between different kinds of products emerging, in terms of what they're good for and how they work. You also have Perplexity, which is very good for real-time search functionality, and another company there is xAI.
You're seeing different use cases emerging in the space, which is how I see the market dynamics playing out: you'll basically have different winners in different spaces. You could say maybe there's an enterprise business use case and a consumer use case. Within those, there could be smaller ones, like, for example, real-time search: what's the value of having live information, whether it's from the web
or Twitter? I think that could potentially be a different segment entirely, or it kind of overlaps with the consumer segment. But that's what I see on the model layer. I think multiple winners are likely, because we haven't seen one company be able to be the best across all of them. And then I do think that the big cloud providers will have a strong business on the inference side, you know, because
companies like Anthropic and OpenAI can't serve hundreds of millions to billions of people with AI without these big cloud companies. Great. So we're going to wrap this up by getting into the scenario building. But before we do that, I just want to address a question that has come in a couple of times from the audience. I'm going to just put it to you, which is: is blockchain, or crypto, or anything in the crypto space
showing usefulness, value, or relevance in the AI stack at this point, or in the business model? Yeah, I haven't really seen anything super compelling to me. I haven't done too much work on it. I think there are a few emerging possible use cases in the future.
If you have an AI-generated video, right? When these become indistinguishable from real videos, how do you tell the difference between an AI video and a real video? I think there are some cryptographic solutions there, potentially. There are also some projects that are, you know, sort of turning people's GPUs or CPUs at home into
tokenized assets that can be decentralized and used to train new LLMs, and it's probably a boost for them to see that a Chinese company has been able to train a high-quality model using inferior hardware. So there are some interesting plays. And then obviously there's a whole long tail of meme coins I don't fully understand that are attached to AI agents,
where you can put funds into the agent's wallet and it can generate revenue, maybe, and share it with the people. So yeah, I don't know too much, but those are some of the things that I have seen. Yeah, it's definitely gotten a lot of talk, and the most compelling thing I've heard is that when these stacks of AI compute get very decentralized, you will need some way for them to talk to each other in a validated way. So there's some
crypto infrastructure that's interesting for that, TAO and blockchains like that. I agree with you, I haven't seen it in use, but it is interesting, and a number of people have asked about it, so I thought I'd raise it. So let's turn our attention to: where's this going? And it's really about the bull case and the bear case. So why don't you lay that out for us? First, if I'm buying in at a $60 billion valuation today,
how could this go right? How can I 10x or better? What needs to happen for this to be a great investment? Yeah, I think in some ways the 10x vision is easier to see with Anthropic than what the base case is. So I think the ultimate upside is that despite DeepSeek, despite open-source models maybe being able to get up to
parity or close to parity with frontier models, the bull case is that what Anthropic and OpenAI have, lumping them in together, is an incredible concentration of talent, the ability to attract the best talent by paying them millions a year, and that they have
so far released the best models and that they'll continue to do so. One of the interesting things that's come out about Anthropic, and this is sort of half rumor, is that the big AI labs are not releasing their best models anymore. They were in 2022, but what's emerged out of the increasing amount of money that they're spending on compute is that their best models are actually held internally and used to
distill into smaller models, which they can serve with some level of margin to the general public. So a GPT-4o is relatively cheap for OpenAI to actually serve and is decent at most queries, but it was trained off of a larger model internally, and the same seems to be true for the new Claude Sonnet model. The idea is that these big companies like Anthropic and OpenAI
are on the leading edge: they're doing the huge investment in training to produce the biggest, best models, and then they make them available to the general public in this distilled format, which is basically just a smaller clone that has some of the qualities of the larger model. And meanwhile, they're still making progress on the cutting edge of AI. So I think the bull case is that that essentially just continues.
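The distillation idea described here, training a smaller student model to match a larger teacher model's output distribution, can be sketched with the standard temperature-softened loss. This illustrates the generic technique only; the labs' actual pipelines are not public, and the logits below are made up:

```python
# Generic knowledge-distillation loss: a student model is trained to match
# the teacher's temperature-softened output distribution. Illustrates the
# technique described above; the labs' real pipelines are not public.
import math

def softmax(logits, T=1.0):
    """Convert logits to probabilities, softened by temperature T."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """Cross-entropy of student predictions against teacher soft targets."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -sum(p * math.log(q) for p, q in zip(p_teacher, p_student))

teacher = [4.0, 1.0, 0.5]  # hypothetical logits from a big internal model
student = [3.5, 1.2, 0.4]  # hypothetical logits from the smaller served model
print(f"soft-target loss: {distillation_loss(teacher, student):.4f}")
```

Training minimizes this loss over many examples, so the cheap-to-serve student inherits the teacher's behavior, which is the economic trick being described: train huge once, serve small at a margin.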
You know, there are a couple of clear vectors on which Anthropic is so far better than OpenAI, at least from what we've seen in public. And I think the bull case is that that continues and they build really the best LLM API and/or chat product for businesses, enterprises, coders. Coding in particular, software development, has been the biggest
use case for AI so far in terms of revenue. So there are a lot of expansion vectors from there: medicine, defense, legal, a lot of other verticals they can get into. There's also the hyperbolic case, which is that because they are building the best models, even if the gap is shrinking, there's a world where they
achieve general intelligence, right? And then things just take off and get insane. And I think that's part of the upside case: you would take a flyer on Anthropic because of the possibility that this happens and they are the prime beneficiary. For me, the base case is more that things continue as they have for the most part, and you see increasing enterprise adoption,
which is still just ramping up; you mentioned the experimental ARR element of it. The base case is that that basically continues. The bear case is that there is no more juice to squeeze, and we see open source proliferate and really take over most of the use cases for AI that exist.
If that happens, I think that's very bad for Anthropic, not because they can't necessarily do the same thing and make their own thing much cheaper, but because they've raised so much money at such a high valuation, around a different set of expectations.
Well put. I mean, in the base case, you're only talking about two and a half X off the current valuation, right? If we end up at $135 billion, though, with this company growing five X over the last year, it's very conceivable that even without any major breakthroughs, you exceed that base case. So let's talk about IPO.
Is there an IPO in here somewhere? What are the conditions? What's the revenue? People are asking: when will it IPO? What revenue or valuation would make sense for an IPO? What can you say about that? Yeah. I feel like I say this every time we talk about a company; it's the same story. These companies, Anthropic and OpenAI especially, have virtually unlimited demand
for their stock and access to private capital. Anthropic has this extremely tight relationship with Amazon that makes it such that they're not really capital constrained. So it is really hard to say when they'd go public. In the next two to three years, could it happen? I think it's possible. I think
what you see from a lot of these companies is that they want consistent revenue growth, a predictable business. And it might be a while before we see that. So yeah, big question mark. I think the other potential outcome is an Amazon acquisition, for sure. Because if you look at Amazon, they own the infrastructure that a huge majority of the web runs on in AWS, and they have
tens of millions of developers in the AWS ecosystem. And so they are actually in a really great position to integrate Anthropic into their whole product architecture and into all the products that have so much usage already. And that's just on the business side, too. I mean, we know that they're building an AI version of Alexa,
now slated to launch next year after a delay, but there are going to be 71 million households that have some version of Alexa, and Claude, which is the friendly AI model, is the perfect fit to be a household assistant in a lot of ways. If they can figure out all the technical quirks, that's a huge
unlock for Anthropic to get distribution. It's also another clear sign that there could be an Amazon-Anthropic acquisition or something like that. I think that's what I see as more likely than an actual IPO. It's a good overview. I mean, what you didn't mention that I'd probably add in there is that if AI has any fraction of the impact being priced in, there seem to be a lot of signals that
countries are going to see this through a national security lens, which could mean a number of things. It could mean that the companies that are already in here get grandfathered in by heavy regulations and have an unfair advantage. It could also mean that the government's willingness to allow these companies to do things like go public and merge is limited. And when you're investing in something that could be so aligned with national interests, and in companies this big,
you do have to keep that in mind. So let's finish with this thought for investors. I think it's safe to say that you shouldn't be buying this stock thinking you're going to get some 10x payout in six months. It's going to be a long road, like other private companies, even though they've already got a billion dollars of revenue. And you don't have any indication here that an IPO is going to happen in the next three years. So let's focus on share price and enterprise value. What do you think is a reasonable target for
what Anthropic could be worth three years from today, putting aside whether that's in the public markets or in a private market with some liquidity mechanism? Yeah. So I tried to estimate based on my base case, which, as you pointed out, is somewhat conservative, assuming a moderate growth rate and a 15x multiple.
Depending on what happens tomorrow or next week, all these assumptions could be thrown into question in either direction. I would say, and this is my personal view and obviously not financial advice, that at that point it could be something like $200 a share, up from about $93 today. And that's, again, trying to be intentionally grounded in what we've seen so far in terms of valuation dynamics
and how companies are priced. Directionally, that's about a $150 billion enterprise value. Yeah. I think that's a great over/under for us to focus on. Well, this has been great. That was a great closing point. Yeah. And Eric, thank you so much for all the knowledge and work that went into this. Everybody on the call, thank you for joining.
Have a great week. We'll see you on the next installment of Pre-IPO Briefings. Happy week. Thanks, Eric. Take care.