In my earlier piece, The Power of Second-Order Thinking, I explored the idea that great decisions often come not from reacting to the obvious, but from anticipating the cascading effects that follow it. First-order thinking answers, "What happens now?" Second-order thinking asks, "What happens next because of what happens now?"
That mindset has never been more important than in today’s AI market.
The first-order opportunities in AI are increasingly clear—and increasingly crowded. Foundational model development, AI infra, training chips, data center buildouts, and even nuclear power projects are being rapidly capitalized by large venture firms and private equity funds. They are capital-intensive bets—obvious, necessary, and likely to be winner-take-most plays. But they’re also well beyond the reach of early-stage investors.
So where should angels and seed-stage VCs look?
I believe the answer lies in second-order AI opportunities: markets and behaviors that emerge downstream of large-scale model deployment. These are not marginal plays—they are the next wave of category-defining companies. And they will be built not by clinging to the model layer, but by observing what changes when AI becomes ambient, cheap, and embedded in everything.
This perspective aligns with insights from Sam Lessin's recent WTF VC Q2 2025 presentation*, in which he discusses the evolving landscape of venture capital and the importance of identifying opportunities beyond the obvious, though I’ve been independently developing these ideas for some time.
Over the past few months, I’ve been thinking deeply about, and investing in, seven second-order effects that I believe will define the next decade of AI-native markets.
Behavioral Realignment: AI Doesn’t Just Automate—It Rewires Us
AI doesn’t merely optimize existing processes—it fundamentally changes human behavior wherever it lands. Nowhere is this more evident than in education.
With AI agents embedded in learning environments, the teacher’s role is poised for a complete shift. Rather than delivering uniform instruction, teachers become human guardrails—focusing on cognitive growth, emotional resilience, and ensuring the safety and alignment of personalized learning journeys. Meanwhile, students interact directly with AI agents tailored to their pace, curiosity, and learning style. Instead of passively digesting textbooks, they’ll engage in dynamic, iterative conversations with AI—learning through dialogue, exploration, and immediate feedback loops. Their learning data will, in turn, evolve the agents themselves, creating recursive, individualized learning ecosystems.
This isn’t theoretical. It’s why I invested in Pathfinder, a platform that enables AI-native, child-led learning. Pathfinder empowers students to explore knowledge at their own pace while giving educators tools to guide and support them on a deeper emotional and developmental level. It’s a reimagining of what “education” means when interaction, not instruction, becomes the default interface.
With such democratized education models emerging, we’re heading toward a future of greater educational freedom. Homeschooling, microschooling, and personalized learning environments will no longer be fringe—they’ll be mainstream alternatives empowered by AI. Behavior realignment isn’t about incremental changes; it’s a reordering of the entire learning experience.
Stack Collapses Create Platform Gaps: From Vertical Orgs to Horizontal Agent Layers
Traditional vertical organizational structures weren’t designed for efficiency—they were compensations for human limitations. Limited bandwidth, narrow domain expertise, and siloed communication made stacking functions by department the only viable path.
But agentic AI obliterates those constraints. As autonomous, domain-specific agents become ubiquitous, entire operational stacks begin to collapse into horizontal layers of functionality, shared across functions and across companies. You won’t need a dedicated finance team if a single payment agent can execute across business units. You won’t need siloed data entry staff when a universal interface agent can query, extract, and populate fields across the org. The same sales agent could handle multiple product lines, dynamically shifting its tone and offer—perhaps even servicing post-sale accounts mid-pitch.
This isn’t just theoretical. It’s one of the reasons I invested in Asteroid.ai. Their platform enables autonomous agents to plug directly into existing enterprise workflows—not by rebuilding entire systems, but by collapsing them. They make it possible to run sales, onboarding, support, and more through a single modular intelligence layer, cutting across vertical silos and unlocking new operational leverage.
Sam Lessin has spoken about collapsed stacks and bundled workflows; I believe platforms like Asteroid are the manifestation of that thesis in motion. But what often gets overlooked is the organizational rewiring that follows: when agents take over atomic tasks, the need for hierarchical management fades. What replaces it is a dynamic web of feedback loops, where coordination is driven by signal, not structure. It’s not just a workflow evolution; it’s a blueprint for how firms themselves will evolve.
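The "collapsed stack" idea can be sketched in a few lines of code: instead of each business unit owning a vertical team, a thin routing layer exposes one shared agent per capability to every unit. This is an illustrative toy, not Asteroid's actual architecture—all class and capability names are hypothetical.

```python
class PaymentAgent:
    """A single shared agent that executes payments for any business unit."""

    def execute(self, unit: str, amount: float) -> str:
        return f"paid {amount:.2f} on behalf of {unit}"


class HorizontalLayer:
    """Registry that routes a capability request to one shared agent,
    rather than to a department dedicated to that function."""

    def __init__(self):
        # One agent serves every unit; adding a capability means
        # registering one agent, not hiring one team per silo.
        self.agents = {"payments": PaymentAgent()}

    def request(self, capability: str, unit: str, **kwargs) -> str:
        return self.agents[capability].execute(unit, **kwargs)


layer = HorizontalLayer()
print(layer.request("payments", "sales", amount=99.0))    # paid 99.00 on behalf of sales
print(layer.request("payments", "support", amount=12.5))  # paid 12.50 on behalf of support
```

The point of the sketch is the shape, not the code: the finance "department" has become a registry entry that any function can call.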
Modularization of Talent: Agents Empower, But Edge Defines
AI is redefining what it means to be “talent.” We're entering an era where the most effective contributors are not departments or teams—but individuals amplified by powerful agent ecosystems. These node-like operators act as full-stack service providers, capable of deploying AI agents to execute tasks at scale.
This echoes Sam Lessin’s prediction of a future where agents act as atomic units of work—but what often goes unspoken is that the agent layer itself is becoming commoditized. What truly sets these individuals apart isn’t the agents they deploy—it’s the unreplicable edge they possess: proprietary datasets, privileged customer relationships, deep intuitive judgment, or the sheer speed to adapt in fast-moving environments.
These orchestrators won’t just be freelancers or founders. They’ll be embedded within companies or operate independently as high-leverage talent nodes—plug-and-play strategic assets who win not by doing everything themselves, but by knowing exactly what to deploy, when, and why. The talent layer shifts from execution to judgment, trust, and velocity—and that becomes the new moat.
New Bottlenecks & Moats: When AI Is Solved, Everything Else Becomes the Edge
As foundational models become increasingly commoditized, AI itself becomes the first-order solved utility. In this world, competitive differentiation shifts away from model performance and toward everything around the model.
This opens the field for a new generation of second-order moats:
Product Design & UX: As Alex Mackenzie notes in Cursor – Beyond the Hype, the real unlock isn’t in the raw intelligence—it’s in crafting workflows and interfaces that are natively AI-first yet deeply usable.
Customer Acquisition & Retention: In Revisiting Competitive Moats, Kyle Harrison lays out why distribution and stickiness matter more than ever when intelligence is ambient.
Product Velocity & Iteration Speed: Companies that learn and ship faster, often with internal agent leverage, will outmaneuver those optimizing for raw model performance alone.
Physical World Execution: Moats increasingly arise from assets AI can’t absorb—codified operations, proprietary logistics, capital-intensive infrastructure, regulatory barriers, and real-world resources.
My thesis is this: The farther away a moat is from the model, the more durable it becomes. As AI eats more of the world, many moats adjacent to models—like fine-tuning, prompt engineering, or wrapper layers—risk becoming obsolete. That’s why I believe the most resilient companies will be built on long-range defensibility, not proximity to the model. As investors, the ability to underwrite endurance will define success in the AI-native fund era.
Trust & Authenticity Become New Primitives
As LLM outputs flood every corner of the internet and enterprise workflows, the second-order consequence is clear: trust, explainability, and authenticity become premium goods. Intelligence is no longer rare—but confidence in that intelligence is.
This demand will go far beyond simple credential checks or digital watermarking. What’s emerging is an entirely new boundary layer between humans and AIs—a protocol for provenance, reliability, and source clarity. People will want to know: Was this advice generated by a model or a person? Can I trust this agent’s intent? Who vouches for the output’s integrity?
This boundary layer doesn’t really exist yet—but it will become essential. It’s not just a tooling opportunity; it’s a massive, untapped economic zone. As AI becomes pervasive, the cost of low-trust interactions increases, and the value of high-trust systems—whether rooted in credential verification, agent transparency, or emotional resonance—explodes.
The companies that define this trust stack will be the Stripe or Okta of the AI age, enabling entire ecosystems to function with clarity and safety in an era of algorithmic ambiguity.
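To make the boundary layer concrete, here is a minimal sketch of what a signed provenance record might look like: a hash of the content, a declared source ("human" or "model"), and a signature that lets anyone detect tampering. Everything here is an illustrative assumption—field names, the symmetric HMAC scheme (a real trust stack would use asymmetric keys and a shared standard), and the model identifier are all hypothetical.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # illustrative only; real systems would use asymmetric keys


def attest(content: str, source: str, model_id=None) -> dict:
    """Produce a signed provenance record for a piece of content."""
    record = {
        "content_hash": hashlib.sha256(content.encode()).hexdigest(),
        "source": source,      # e.g. "human" or "model"
        "model_id": model_id,  # which model produced it, if any
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify(content: str, record: dict) -> bool:
    """Check that the content matches the record and the signature is intact."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record["signature"])
        and claimed["content_hash"] == hashlib.sha256(content.encode()).hexdigest()
    )


advice = "Rebalance your portfolio quarterly."
rec = attest(advice, source="model", model_id="example-llm-v1")
print(verify(advice, rec))        # True: provenance intact
print(verify(advice + "!", rec))  # False: content was altered
```

Answering "was this written by a model, and has it been changed?" takes a few dozen lines; the open opportunity is the ecosystem of keys, standards, and vouching institutions around records like this.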
Human Premium Experiences: The Return of Real Presence
In a world where digital interactions become ambient, commoditized, and infinite, authentic human presence becomes rare—and therefore, valuable. This shift won’t be limited to senior care. It will reshape how all age groups seek connection, meaning, and trust.
I once emailed Andrew Parker, CEO of Papa Inc., in the wake of the latest LLM breakthroughs, energized by what this meant for his business. Not because AI could augment his workforce—but because it would amplify the value of what AI could never offer: genuine, in-person attention. But why stop at elder care?
Imagine a future where human time is a token—a form of service and currency that can be exchanged across life stages and needs. Helping a child with schoolwork, mentoring someone starting a career, walking with someone through grief—these acts become the new luxury in an AI-dominated landscape.
We may soon see marketplaces emerge to trade in presence itself: fractionalized mentorship, micro-consulting, IRL emotional labor, community-based hospitality. Think of a "Cameo for Time," or “Uber for Wisdom,” with humans as providers of scarcity-grade care, insight, and companionship. In a world where software handles everything scalable, the next frontier may be everything that isn’t.
Interface Innovation Becomes the Differentiator
As AI becomes faster, smarter, and more pervasive, a strange bottleneck emerges—not in the model, but in the interface. The current default—text-in, text-out—is increasingly ill-suited to the richness and velocity of modern AI capabilities. It’s like connecting fiber-optic intelligence through a dial-up port.
This creates enormous pressure—and opportunity—for UI reinvention. The next wave may be voice-in, voice-out interfaces that feel natural, ambient, and conversational. But beyond voice lies an even broader horizon: brain-computer interfaces (BCIs). I believe Neuralink will be one of the pioneers defining this future—shaping not just the hardware layer, but the software protocols that will enable seamless, bidirectional communication between humans and machines.
Within this new interface architecture, entire second-order economies will emerge:
Security and encryption for safeguarding thought-level data
On-device processing to ensure low-latency, private execution
Data ownership frameworks that protect cognitive sovereignty
New tools for interpreting human cognition, emotion, and intention—transforming how we understand ourselves as much as how we interface with machines.
If foundational models are rapidly approaching asymptotic performance, then experience becomes the new frontier. The tools that win won’t just reason better—they’ll feel better. They'll allow humans to work, create, and connect with machines in ways that are intuitive, embodied, and deeply personal.
Interface innovation has long been considered a surface-layer problem. But in the AI-native era, I see it as a core infrastructure opportunity—a leverage point for building new categories, platforms, and protocols.
Together, these seven second-order domains form a kind of strategic map for early-stage AI investing—not centered on owning the infrastructure, but on owning the implications. The model layer may soon feel like plumbing: essential, yet undifferentiated. What matters most now is what gets built on top, around, and in response.
If we follow these ripple effects far enough, we’ll find not just new products—but new behaviors, markets, and systems of value waiting to emerge.
AI may be the most powerful general-purpose technology of our time—but power alone doesn’t make companies. What separates enduring businesses from fleeting hype is a deep understanding of system dynamics, behavioral inflection points, and structural leverage. That’s what second-order thinking offers.
For founders, this means asking not just what AI enables, but what it changes. What becomes newly possible—or newly valuable—once intelligence is no longer scarce?
For investors, it means resisting the pull of the obvious, and instead developing conviction in the ripple effects. The best early-stage opportunities in AI won’t be found at the model layer. They’ll be found in the new needs, behaviors, and bottlenecks that emerge once models are taken for granted.
Second-order thinking won’t give you all the answers—but it will help you ask the right questions. And in a world where everyone is chasing the same frontier, asking different questions may be the only true advantage left.
* Note on the presentation’s availability: The DocSend link to Sam Lessin’s presentation may not remain active indefinitely. If the link becomes unavailable, you can find summaries and discussions of the presentation in articles such as “A Response to Sam Lessin’s 2025 Update.”
If you are a builder in these related fields, I would love to chat with you!
If you are an investor and would like to share your view, please let me know.
Written to think—to refine my thesis as part of “The Art of Investing,” my private investment notes—with the help of ChatGPT (GPT-4o).