1. Let’s start with the basics—what is AI, what does the ecosystem look like, and who are the key players?
At a high level, the term AI typically refers to a computer system performing tasks or solving problems that normally require human intelligence.
I think of the sector as breaking down into layers. First is infrastructure (buildings, electricity, and chips), where the leaders include companies such as NVIDIA and Broadcom on the semiconductor side and Celestica and Arista Networks on the networking side.
Then you have enablers. These include the foundational large language models, such as ChatGPT from OpenAI and Claude from Anthropic, along with the cloud providers (the so-called hyperscalers, like Google, Microsoft, and Amazon) that build their own models and host other companies' models in their cloud infrastructure.
Then you have the applications that are built on this tech, including coding copilots, image/video generation, and software from companies like Adobe that are building this technology into their products. Finally, you have the beneficiaries like healthcare providers, banks, and financial institutions that are using AI to enhance efficiency.
What’s interesting and different in all of this is that the hyperscalers have started to sprawl out with capabilities and products that span the ecosystem.
2. What has been the biggest evolution in the AI story over the past few years?
I think the biggest evolution since the launch of ChatGPT came last year, when OpenAI introduced what is now called a reasoning model: a model that not only answered questions but critiqued and refined its own responses. The original models simply simulated the most likely answer to a given question. They were easy to stump, and they made a lot of mistakes. The newer models are much more compute-intensive and take longer to think, but they produce much better results. That was a real step-function change, and overall usage of these systems has grown exponentially.
3. What emerging uses of AI do you foresee that aren’t widely anticipated yet?
The big unlock I see coming is agentic AI that can do work on your behalf. Imagine giving ChatGPT access to Excel, Word, Outlook, FactSet, Bloomberg, and the internet so it can complete projects for you. It's too early to say whether this will work in 2026, but if these models can start using other software programs to achieve goals, there is substantial growth potential in model use cases.
Preparing all data for AI is labor-intensive, but letting it interact with computers directly means it can learn, clean up information, and analyze much more efficiently. We haven’t seen this at scale yet, but I believe it will make AI far more useful for both enterprises and consumers. At home, it could be as simple as letting AI take control of your browser to book your vacation for you, not just plan it.
4. With that backdrop, are we in an AI stock bubble?
Most AI spending today shows positive, measurable return on invested capital across infrastructure and the cloud, and it meets strong demand. The infrastructure being built is producing attractive returns using reasonable assumptions, and AI is already impacting advertising, entertainment, and enterprise productivity in commercially viable ways.
As with any technology shift, there are always some areas we could call bubbles, whether big, small, or confined to specific spots. We saw things get inflated in 2021 and then correct in 2022, but overall, the current environment is not like the dot-com bubble. So, while there are pockets of hype and overexcitement, especially in private markets, where some of those providing financing may not fully understand the risks, I believe there are still tremendous investment opportunities in AI, provided the technology keeps advancing and delivering real value.
5. How do you define an AI “winner” in these early days, and who is leading?
To win in technology, you want to compete on something other than price. When you compete on price, your returns are limited; true winners build natural monopolies or near monopolies with a clear right to win and a moat.
In AI, I look at what each company brings to bear. Google has the lowest cost to serve because of custom hardware, a lot of data in important areas, and incredible distribution through its applications and hardware. Meta applies advertising technology where returns are among the highest worldwide. OpenAI has built enormous usage with ChatGPT; whether that lead holds as Gemini advances is something we're watching. Anthropic has done an incredible job on engineering problems: a lot of code generation runs directly on Claude, and Claude underpins other providers' tools, which may give it a natural edge.
But some companies that look like leaders today may end up as also-rans. Here, I'm talking about the so-called neocloud companies that rent GPU compute to the hyperscalers at inflated prices due to the current shortage. These companies may face economic challenges when supply catches up.
Meanwhile, the hyperscalers have a durable advantage: the ability to redeploy capital and keep growing.
[Exhibit] Technology’s edge: Benefiting from economies of scale
Trailing four-quarter free-cash-flow margins (January 31, 1990–May 31, 2025)