I’ve been thinking about a couple of things - how can I make publishing on Substack more of a regular habit, and do so in a way that’s as easy and effortless as posting on Twitter? And how can I publish at a relatively regular cadence in a way that also adds value for the people reading it?
The idea I’ve come up with is to publish a weekly piece with links to things I found interesting in the prior week. The thesis being that in today’s age of media overload, it’s harder than ever to sift through the noise, so as someone who’s terminally online maybe my curation can help surface interesting things for readers.
It’ll be a work in progress, so hopefully it’ll evolve as the weeks go by. Going back to the goal of making publishing more natural on here: I spend a lot of time reading (or listening to/watching) things that are relevant to investment ideas, as well as things that generally pique my curiosity. Hopefully there will be something for everyone, and I’m also hoping it will encourage people to send me more interesting content that I can include in future pieces.
Colossus Profile of Josh Kushner
This brilliant piece by Jeremy Stern is definitely worth a read. It’s an incredibly illuminating profile of one of the most unique and inspiring investors of our time, spanning from Josh’s grandparents’ story as Holocaust survivors, to his founding of Thrive and Oscar, to in-depth stories of the firm’s investments in companies like GitHub, Stripe and OpenAI. I always love learning more about investors I admire and can’t recommend this piece enough.
There are a lot of takeaways from the piece, so it’s difficult to highlight any one in particular, but what stands out to me about Josh and Thrive is the willingness to rethink from first principles what it means to be an investor and a fund. It is easy to think you have to fit into a box that the world already understands, and far more difficult to stay committed to building the exact kind of firm you think should exist. The piece is filled with evidence of what makes Thrive unique, and it’s definitely inspiring to learn about the success of a fund with such a distinct view of what it could be (an investor, an incubator, a company itself, a true “capital partner” to their founders in every sense of the word).
In 2010, moreover, the unknown Kushner brother’s unknown little firm was making a bunch of large but weird-sounding claims for itself, like that it was a stage-, geography-, and sector-agnostic venture firm that would concentrate all its investments in a very small number of companies; that it was not only an investment firm but also itself a company; that it incubated its own companies as well as invested in others; and that it didn’t just invest and incubate but functioned as a service provider, product creator, and embedded operational commando unit for founders.
Later on in the piece:
Cutler also introduced Kushner to Andy Golden, the legendary head of Princeton University’s endowment. Golden later recalled a happy hour for VCs in Cambridge in 2010, where he saw a six-foot-three, emo-looking kid in a black cardigan standing apart from the group, staring at the floor.
“I think that Andy saw more in me than I saw myself,” Kushner said. “He spent time talking to me about who I wanted to be and what I wanted to build.” One insight Golden vouchsafed to Kushner was that investment firms, as they scale, start to lose a sense of their identity and wind up focusing not on what they’re good at but on the size of their assets under management, in turn leading to a lower cost of capital, less ambitious people, more mediocrity, and lower returns.
In response, Kushner pitched him on his seemingly unwieldy idea for Thrive: an opportunistic vehicle agnostic to stage, sector, or geography, which viewed itself as an enabling technology for the world it wished to see, and which had the capacity to not just invest in companies but to build them. “It was seen as very controversial at the time,” Kushner told me. “I was not trying to be provocative. The idea for Thrive came purely from where we believed the world was moving. It was what felt right to us even though it had never been done before.”
To a writer and not a VC, hearing investors describe their firms as “controversial” and “provocative” can get tiresome, as if they doth protest an awareness of conventional respectability with too much poser-iconoclasm. When told this, Kushner replied, with his customary decorum: “At that moment in venture you were either an early stage firm or a later stage firm. You were either a software firm or a consumer firm. You were either a European firm or a US firm. The idea of having a fund that could build companies, invest in companies, invest in them early or late, and inside or outside the US, it was just deeply unconventional. I feel really grateful that Andy saw it and understood it.”
I found this profile to be amazing - in-depth, substantial, super well written and definitely inspiring for investors of all kinds. It makes a strong case that being a great investor and a great operator can be two sides of the same coin (something we discussed in our piece, Time ~= Money). I think what Colossus is doing is super cool and I’m looking forward to reading future pieces from them.
Dan Sundheim Cheeky Pint Interview
This interview was wide-ranging, covering topics from large-cap valuations and whether we’re in an AI bubble, to vibe trading and why it’s tricky to short bad companies, but the part I found most interesting was his discussion of public vs. private markets. He highlights an interesting point about the effect stock prices can have on employees (excerpt from the transcript below):
“So, you’ve got Ramp, Stripe, SpaceX, and, you know, maybe one day, of course, all these companies go public, but it would seem as if there’s like a lot of later stage private companies now. Well, like, why do you think that’s happening, and where do you think that’s going to go in a couple of years? Like, are public markets just going to kind of be the laggards, and all the new hot stuff will be private, or will it rebalance one day?
I mean, if I ran a private company like Stripe, I wouldn’t go public. I think the public markets, you know-
It’s kind of ironic because you’re a public markets investor.
Yeah. I think the public markets are kind of problematic at this point. Let’s just take Stripe, for example, and I won’t speak for John, but basically, Stripe grows earnings, cash flow at some amount, value compounds, and they do tender offers, and the tender offers are relatively in line with the value creation.
“Therefore, the people who are working at the company, and they’re creating that value, you get paid for that value because the stock price goes up in line with value creation. Now, what we see in public markets is you take your company public, and depending on what the retail crowd is doing at that day, the stock may trade at some insane value. Most people are high-fiving.
This is amazing. Our stock is trading to 2X where it should be. This is great.
We’re all rich. The problem with that is that you’ve now pulled forward a ton of value. So all the people working at the company now are being overpaid because they didn’t actually create this value.
The stock gave this value. Then the people who you’re hiring, and those people are probably more likely to just cash out because they just made too much money.
You’re robbing future employees to pay the current employees.
“Exactly. Then for future employees, you now have to give them stock options that are issued at a stock price you don’t really believe in. So the stock is so volatile that you’re actually not being paid as an employee based on value creation.
You’re being paid arbitrarily based upon multiples which have nothing to do with the true intrinsic value of the company. So, I think it’s like, obviously, it’s bad to be undervalued as a company, because then you’re issuing stock to employees at too low of a value, and then they don’t appreciate it usually. But it’s pretty bad to be overvalued too.
Because if the stock doesn’t go up, they will definitely come back to you and ask for more options. If the stock goes up way more than it should, they’re not going to come back to you and be like, Oh, you know what? Hey, I made too much money.
So you end up having this asymmetric, I think it’s really not a healthy dynamic to be a public company.”
This is interesting commentary for a number of reasons. D1 is mainly a public markets fund that is gearing up to launch a private equity arm. We have seen firms like Thrive, Coatue, General Catalyst and Lightspeed make similar moves across the public/private divide, which aligns with our thesis that, inevitably, the best investors will find value across stages, whether a company is public or private.
The public vs. private debate is one that could have its own post altogether. Dan raises a valid point about the variance of the stock price potentially making it difficult to retain employees, but the other side of that argument is valid as well. There’s no reason private-market valuations are inherently “better.” Another point to the contrary: look at how aggressive Meta has been with their hiring of AI talent. If they weren’t a trusted public company with the ability to offer liquid stock to potential hires, would their hiring spree have been as successful? You can join OpenAI, where the valuation is more “stable” but you can only sell into tender offers, or you can join Meta, where the stock has higher variance but is far more liquid. I’m not sure which is better, but this topic is super important for understanding the future of capital markets.
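To make the asymmetry Dan describes concrete, here’s a toy back-of-the-envelope sketch. The numbers are entirely made up by me (not from the interview): it just compares a new hire’s option grant when the stock is fairly valued at grant versus when it trades at 2x intrinsic value and later converges back to intrinsic value.

```python
# Toy illustration of the overvaluation asymmetry described in the interview.
# All numbers are hypothetical and chosen only to show the mechanics.

def grant_outcome(market_today, intrinsic_in_4y, grant_value):
    """Value of a new hire's option grant after 4 years.

    Options are struck at today's market price; the stock is assumed to
    converge to intrinsic value by the time the options vest.
    """
    strike = market_today
    shares = grant_value / market_today           # grant sized off today's market price
    payoff_per_share = max(intrinsic_in_4y - strike, 0.0)
    return shares * payoff_per_share

# Business compounds intrinsic value ~15%/yr: $100/share today -> ~$175/share in 4 years.
fair = grant_outcome(market_today=100, intrinsic_in_4y=175, grant_value=400_000)
hyped = grant_outcome(market_today=200, intrinsic_in_4y=175, grant_value=400_000)

print(f"Fairly valued at grant:  options worth ~${fair:,.0f}")   # ~$300,000
print(f"Granted at 2x intrinsic: options worth ~${hyped:,.0f}")  # $0 -- underwater
```

In the overvalued case the business still compounded value just fine, yet the new hire’s options end up underwater, which is exactly the dynamic Dan describes of employees coming back to ask for more options.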
I do think there’s a case to be made that healthy capital markets involve companies eventually going public, but at the same time, if you can access capital like SpaceX or Stripe can, there’s not a whole lot of incentive to do so. There’s probably also an argument that the more good companies go public, the better the average US investor does, as those companies now have a chance to be included in major indices. Anyway, it’s definitely a thought-provoking part of the interview and something I’m interested in following and potentially writing more about in the future.
Dwarkesh Karpathy Interview
When looking for information about frontier technology, I’m often reminded of the Picasso quote:
“When critics get together they talk about Form and Structure and Meaning. When artists get together they talk about where you can buy cheap turpentine.”
This interview is an awesome lens into how Andrej Karpathy thinks about AI and the current tech landscape, with discussions about agents, reinforcement learning, the path to AGI, how humans learn, the potential impacts of AI on GDP, self-driving cars and lots more. I find conversations like these super valuable, especially when so much of the conversation amongst investors is about whether or not AI is currently in a bubble.
In line with the bottleneck framework shared last week, the podcast kicks off with this question:
Dwarkesh Patel 00:00:58
What do you think will take a decade to accomplish? What are the bottlenecks?
Andrej Karpathy 00:01:02
Actually making it work. When you’re talking about an agent, or what the labs have in mind and maybe what I have in mind as well, you should think of it almost like an employee or an intern that you would hire to work with you. For example, you work with some employees here. When would you prefer to have an agent like Claude or Codex do that work?
Currently, of course they can’t. What would it take for them to be able to do that? Why don’t you do it today? The reason you don’t do it today is because they just don’t work. They don’t have enough intelligence, they’re not multimodal enough, they can’t do computer use and all this stuff.
They don’t do a lot of the things you’ve alluded to earlier. They don’t have continual learning. You can’t just tell them something and they’ll remember it. They’re cognitively lacking and it’s just not working. It will take about a decade to work through all of those issues.
The next interesting excerpt:
Dwarkesh Patel 01:06:45
Or they’ll propose horizon length. Maybe they can do tasks that take a minute, they can do those autonomously. Then they can autonomously do tasks that take a human an hour, a human a week. How do you think about the relevant y-axis here? How should we think about how AI is making progress?
Andrej Karpathy 01:07:05
I have two answers to that. Number one, I’m almost tempted to reject the question entirely because I see this as an extension of computing. Have we talked about how to chart progress in computing, or how do you chart progress in computing since the 1970s or whatever? What is the y-axis? The whole question is funny from that perspective a little bit.
When people talk about AI and the original AGI and how we spoke about it when OpenAI started, AGI was a system you could go to that can do any economically valuable task at human performance or better. That was the definition. I was pretty happy with that at the time. I’ve stuck to that definition forever, and then people have made up all kinds of other definitions. But I like that definition.
The first concession that people make all the time is they just take out all the physical stuff because we’re just talking about digital knowledge work. That’s a pretty major concession compared to the original definition, which was any task a human can do. I can lift things, etc. AI can’t do that, obviously, but we’ll take it. What fraction of the economy are we taking away by saying, “Oh, only knowledge work?” I don’t know the numbers. I feel about 10% to 20%, if I had to guess, is only knowledge work, someone could work from home and perform tasks, something like that. It’s still a really large market. What is the size of the economy, and what is 10% or 20%? We’re still talking about a few trillion dollars, even in the US, of market share or work. So it’s still a very massive bucket.
Going back to the definition, what I would be looking for is to what extent is that definition true? Are there jobs or lots of tasks? If we think of tasks as not jobs but tasks. It’s difficult because the problem is society will refactor based on the tasks that make up jobs, based on what’s automatable or not. Today, what jobs are replaceable by AI? A good example recently was Geoff Hinton’s prediction that radiologists would not be a job anymore, and this turned out to be very wrong in a bunch of ways. Radiologists are alive and well and growing, even though computer vision is really, really good at recognizing all the different things that they have to recognize in images. It’s just a messy, complicated job with a lot of surfaces and dealing with patients and all this stuff in the context of it.
I don’t know that by that definition AI has made a huge dent yet. Some of the jobs that I would be looking for have some features that make it very amenable to automation earlier than later. As an example, call center employees often come up, and I think rightly so. Call center employees have a number of simplifying properties with respect to what’s automatable today. Their jobs are pretty simple. It’s a sequence of tasks, and every task looks similar. You take a phone call with a person, it’s 10 minutes of interaction or whatever it is, probably a bit longer. In my experience, a lot longer. You complete some task in some scheme, and you change some database entries around or something like that. So you keep repeating something over and over again, and that’s your job.
You do want to bring in the task horizon—how long it takes to perform a task—and then you want to also remove context. You’re not dealing with different parts of services of companies or other customers. It’s just the database, you, and a person you’re serving. It’s more closed, it’s more understandable, it’s purely digital. So I would be looking for those things.
But even there, I’m not looking at full automation yet. I’m looking for an autonomy slider. I expect that we are not going to instantly replace people. We’re going to be swapping in AIs that do 80% of the volume. They delegate 20% of the volume to humans, and humans are supervising teams of five AIs doing the call center work that’s more rote. I would be looking for new interfaces or new companies that provide some layer that allows you to manage some of these AIs that are not yet perfect. Then I would expect that across the economy. A lot of jobs are a lot harder than a call center employee.
I found his perspective to be insightful and refreshing. It was helpful to zoom out a bit from the tiresome bubble conversations and get a ton of substance in one sit-down interview.
I’m also a huge fan of Dwarkesh in general and think he’s doing amazing work. I really enjoyed his piece on the AI buildout, Thoughts on the AI buildout. It touches on a lot of key aspects of the race to build the best AI systems and the current bottlenecks in that arms race. Of particular interest is the reality of the energy bottleneck:
“For the last two decades, datacenter construction basically co-opted the power infrastructure left over from US deindustrialization. For AI CapEx to continue growing on its current trajectory, everyone upstream in the supply chain (from people making copper wire to turbines to transformers and switchgear) will need to expand production capacity.
The key issue is that these companies have 10-30 year depreciation cycles for their factories (compare that to 3 years for chips). Given their usual low margins, they need steady profits for decades, and they’ve been burned by bubbles before.
If there’s a financial overhang not just for fabs, but also for other datacenter components, could hyperscalers simply pay higher margins to accelerate capacity expansion? Especially given that chips are an overwhelming 60+% of the cost of a data center.
We did some back-of-the-envelope math on gas turbine manufacturers which seems to indicate that hyperscalers could pay to have their capacity expanded for a relatively small share of total datacenter cost. As @tylercowen says, do not underrate the elasticity of supply.”
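To get a rough feel for why that back-of-the-envelope can work, here is a tiny sensitivity check with my own toy numbers (the piece doesn’t publish these figures): if chips are 60+% of datacenter cost and turbines are only a few percent, even paying turbine makers a hefty premium barely moves the total bill.

```python
# Toy sensitivity check with hypothetical cost shares (not from the piece):
# if turbines are a small slice of total datacenter cost, paying a big
# premium to accelerate turbine capacity barely changes total spend.

total_cost = 100.0      # normalize total datacenter cost to 100
turbine_share = 0.05    # assume turbines/power gear are ~5% of the total (my guess)
premium = 0.5           # hyperscaler pays a 50% premium on that slice

extra_cost = total_cost * turbine_share * premium
print(f"Extra spend: {extra_cost:.1f}% of total datacenter cost")  # 2.5%
```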
As this tweet covering an interview with an employee working on Meta’s infrastructure points out, energy is the biggest constraint on the planned data center buildout:
https://x.com/rihardjarc/status/1981721050086883834?s=46
An astute reply pointed out 4 key bottlenecks to watch:
1. Energy capacity (operational, 18-24 month lead time)
2. TSM 2nm supply (physical, multi-year constraint)
3. HBM premium pricing (financial, 3-5 year window per expert)
4. Storage infrastructure (SSD/HDD for inferencing)
That’s four separate bottlenecks. Even if you solve GPU supply, three others can choke deployment.
This is an area of interest that is definitely aligned with our overall framework of looking for bottlenecks and thresholds, and one I am actively researching.
Anthropic announced an agreement to use Google’s TPUs
Expanding our use of Google Cloud TPUs and Services \ Anthropic
Today, we are announcing that we plan to expand our use of Google Cloud technologies, including up to one million TPUs, dramatically increasing our compute resources as we continue to push the boundaries of AI research and product development. The expansion is worth tens of billions of dollars and is expected to bring well over a gigawatt of capacity online in 2026.
“Anthropic’s choice to significantly expand its usage of TPUs reflects the strong price-performance and efficiency its teams have seen with TPUs for several years,” said Thomas Kurian, CEO at Google Cloud. “We are continuing to innovate and drive further efficiencies and increased capacity of our TPUs, building on our already mature AI accelerator portfolio, including our seventh generation TPU, Ironwood.”
This seems relevant because Anthropic was previously reported to be spending lots of money on AWS’s Trainium, so it’s an interesting story to follow, both as it pertains to cloud revenue and to alternative AI chip solutions to Nvidia’s dominance.
Oracle financing data centers
Banks are preparing to launch a $38 billion debt offering as soon as Monday that will help fund data centers tied to Oracle Corp. in what would be the largest such deal for artificial intelligence infrastructure to come to market, according to people with knowledge of the matter.
JPMorgan Chase & Co. and Mitsubishi UFJ Financial Group are among banks leading the deal, which is split across two separate senior secured credit facilities, said the people, who asked not to be identified when discussing private matters. One $23.25 billion package will go toward financing a data center in Texas and another $14.75 billion facility will help fund a project in Wisconsin, the people said.
Vantage Data Centers is developing both data centers, which are set to be used by Oracle to power OpenAI, Bloomberg has reported. The projects are part of Oracle’s broader effort to invest $500 billion in AI infrastructure alongside OpenAI, known as Stargate.
I found this interesting because it’s a tangible development toward OpenAI’s super ambitious plan for Stargate, and also just an eye-popping number for the debt markets.
Origins of Efficiency
I started reading Origins of Efficiency by Brian Potter this week. I’m only a couple of chapters in, but it’s been amazing so far. The book opens with the story of how the development of penicillin was as much about manufacturing, being able to produce the medicine at scale, as it was about the scientific breakthrough when penicillin was first discovered.
About The Origins of Efficiency
Efficiency is the engine that powers human civilization. It’s the reason rates of famine have fallen precipitously, literacy has risen, and humans are living longer, healthier lives compared to preindustrial times. But where do improvements in production efficiency come from? In The Origins of Efficiency, Brian Potter argues that improving production efficiency - finding ways to produce goods and services in less time, with less labor, using fewer resources - is the force behind some of the biggest and most consequential changes in human history. The book is punctuated with examples of production efficiency in practice, including how high-yield manufacturing methods made penicillin the miracle drug that reduced battlefield infection deaths by 80 percent during World War II; the 100-year history of process improvements in incandescent light bulb production; and how automakers like Ford, Toyota, and Tesla developed innovative production methods that transformed not just the automotive industry but manufacturing as a whole. The Origins of Efficiency is a comprehensive companion for anyone seeking to understand how we arrived at this age of relative abundance - and how we can push efficiency improvements further into domains like housing, medicine, and education, where much work is left to be done.
Again, this is the first iteration of a weekly “things I found interesting” type of piece, so I imagine it will evolve as I do more of these. Feel free to give feedback or ideas about what you’d like to see more of. Ideally I’ll try to surface some niche stuff I find from being terminally online. I’m not sure about the total number of links to include: on one hand I want to share some of my thoughts about each selected piece, but on the other I do come across lots of good stuff throughout the week, so maybe it will make sense to highlight a few in particular and then include a collection of other great links at the end. I do hope this will inspire readers to send me interesting things you come across; my DMs are always open!


