The Three Moats of the AI Era: Why You Only Have 12 Months

Written by: Deep Thinking Circle

Have you noticed that everyone around you using AI is doing the same thing? Prompt, accept, publish. No judgment, no taste, just mechanically repeating the same actions like factory workers on an assembly line. Recently, I read an article by Silicon Valley entrepreneur Shann, who bluntly pointed out that 90% of people using AI are falling into this trap. They think mastering AI tools means mastering the future, not realizing that the real competition has just begun. More importantly, Shann believes we only have about 12 months to build a true moat; once this window closes, standing out will become much harder. This resonated deeply with me because I've gone through a similar awakening myself.

I remember about a year ago, when I first started using AI to build products and content, that feeling was addictive. The time from “I have an idea” to “it’s live” was almost zero. The projects I completed in three months exceeded what I had done in the past two years combined. But when I mustered the courage to review what I had published, I had to face a harsh truth: half of it was mediocre. Technically sound, fully functional, but completely forgettable. They looked like everything else because they were built the same way as everything else. Same prompts, same default settings, shallow understanding of “excellence.” I fell into the most common trap of the AI era: mistaking output volume for quality, equating rapid publishing with productivity, and thinking that doing more equals doing better. This realization made me pause and rethink: in an age where AI enables everyone to produce quickly, what is the true competitive advantage?

My new book, Going Global: Practical Strategies for Product Internationalization and Marketing, is about to be published. To thank all the readers who have supported Deep Thinking Circle, I’ve prepared a giveaway. You can get a free copy right away by filling out the form below. Due to limited supply from the publisher, I will select some respondents to receive the book. I can’t guarantee everyone will get one, so please understand.

The Flood of AI Slop and the Trust Crisis

"AI slop" has been named the word of 2025. Mentions of the term have skyrocketed from 461,000 to 2.4 million. But numbers alone can't fully capture how consumers actually feel. You've probably seen this kind of content: LinkedIn posts that look like they were generated with mid-tier marketing prompts, landing pages with uniform gradient backgrounds and "Revolutionize Your Workflow" headlines, blog articles covering every angle of a topic while saying nothing substantial. These are technically fine, but they lack the most important thing: a human touch.

Shann shared a particularly interesting study. Research from New York University and Emory University shows that AI-generated ads actually have a 19% higher click-through rate than human-made ads. Objectively, AI outputs are better by standard metrics. But once consumers learn these ads are AI-produced, their willingness to buy drops by 33%. This phenomenon is worth pondering: better quality output, yet people choose to reject it. Not because the content is bad, but because they can’t feel the human behind it. No one is making decisions here, no one cares enough to put their name on it. Consumers sense this absence, even if they can’t articulate exactly what’s wrong.

I’ve observed this phenomenon spreading across various fields. Statistics show that 80-90% of AI agent projects fail in production. Thousands of seemingly identical websites go live every day, with content that reads like a robot summarizing another robot’s output. The threshold for “functionality” has never been lower, which also means the threshold for “excellence” has never been more important. Functionality is now free; excellence still costs. That cost is measured in taste, attention, and the willingness to go beyond the first output. Consumer trust in AI-generated content has dropped by about 50%, not by accident, but as a natural reaction to this flood of content.

Three Moats: Capabilities AI Cannot Replace

Paul Graham once said: “In the age of AI, taste will become even more important. When anyone can produce anything, the real difference lies in what you choose to produce.” He’s right, but I believe taste alone isn’t enough. After a year of practice and observation, I’ve found that only three things can truly build a moat in the AI era: taste, distribution, and high agency.

Taste is knowing what’s good. It’s not an abstract concept but a judgment reflected in every decision. Distribution is getting good work in front of the people who care. In an age of information overload, being seen is itself a scarce ability. High agency is the willingness to figure things out proactively when no one tells you what to do. It’s a personality trait that determines whether you bypass obstacles or stop.

Why can’t AI replace these three? Because judgment comes from experience, trust from consistency, and internal drive from persistence when the path isn’t clear. Most people have a fundamental misconception about AI: it doesn’t level the playing field; it tilts the environment further in favor of those who understand how to use it. AI is like a mirror, reflecting how much you truly understand. Handed to someone without context, taste, or understanding of what they’re building, it produces generic outputs at scale. Given to someone who deeply understands their field and can evaluate outputs with trained eyes, it becomes the most powerful tool they’ve ever used. Same input, completely different results. The variable is always the human.

First Moat: Taste

Shann shared his awakening moment during the building process. When he looked back at his rapidly published works, he realized half were mediocre. So he did something most would skip: he stopped to learn. He spent hundreds of hours studying what truly makes something "good." He read how other builders think and researched creators who consistently produce outstanding work. He did this not to be different for its own sake, but because good work requires someone who cares enough to make real decisions rather than accept whatever AI gives first. He studied website design, typography, spacing, and visual hierarchy, analyzing what makes certain sites convert while thousands of similar ones fail. He read about storytelling and narrative tension, about what makes people keep scrolling instead of bouncing.

This reminded me of my own experience. When building AI-driven marketing materials, I initially tried every tool I could find: Gamma, Chronicle, Beautiful.ai, etc. The outputs all had the same “okay” flavor—technically complete, visually clean, but forgettable. So I stopped looking for tools to do the work for me and started doing it myself. I spent days carefully studying the materials, not just reading but thinking. What story do these data tell? What makes people care? What’s the narrative thread that connects everything? I studied principles of presentation design, how information designers handle data density, how top conference talks build tension and release, how visual hierarchy guides the eye without explicit instructions. I clarified my division of labor: let Claude Opus 4.6 write storylines and copy, have Gemini generate visuals, and guide both with specific references, constraints, and examples of the feeling each part should evoke.

Why does AI default to the generic? Leon Lin offers a brilliant explanation. He built a "taste skill" for Claude Code after recognizing a fundamental trait of large language models (LLMs): they are probabilistic machines. Without strict rules, they statistically default to the most common patterns in their training data. That's why every AI-generated website looks the same: Inter font, purple gradients, rounded corners in a grid. It's not that AI can't do better; it's that the most likely output is the average of everything it has seen. Leon's solution was a clear set of design rules within 400 tokens: specific fonts (Press Start 2P, VT323) instead of Inter or Roboto, specific colors (neon pink, electric blue, acid green) instead of default purple-blue tones, rules for actions, spatial composition, and backgrounds, plus a list of "what to avoid" to keep the AI from sliding back to generic defaults.
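The idea can be sketched in a few lines of code. The snippet below is a hypothetical reconstruction, not Leon's actual skill file: it renders an explicit rule set, including the crucial "avoid" list, into a compact constraint block you could prepend to a prompt, and roughly checks it against the ~400-token budget. Only the fonts and colors named above come from the article; everything else is illustrative.

```python
# Sketch of a "taste skill": explicit design rules the model must follow,
# plus an "avoid" list so it cannot fall back to its statistical defaults.
# Hypothetical reconstruction; only the fonts/colors are from the article.

STYLE_RULES = {
    "fonts": ["Press Start 2P", "VT323"],  # instead of Inter / Roboto
    "colors": ["neon pink", "electric blue", "acid green"],
    "layout": ["dense spatial composition", "hard edges, no card grids"],
    "avoid": ["Inter", "Roboto", "purple gradients",
              "rounded corners in a grid", "generic hero sections"],
}

def render_style_rules(rules: dict) -> str:
    """Render the rule set as a compact constraint block for a system prompt."""
    lines = ["Design constraints (non-negotiable):"]
    lines.append("- Fonts: " + ", ".join(rules["fonts"]))
    lines.append("- Colors: " + ", ".join(rules["colors"]))
    lines.append("- Layout: " + "; ".join(rules["layout"]))
    lines.append("- Never use: " + ", ".join(rules["avoid"]))
    return "\n".join(lines)

def within_token_budget(text: str, budget: int = 400) -> bool:
    """Rough budget check, assuming ~4 characters per token on average."""
    return len(text) / 4 <= budget

block = render_style_rules(STYLE_RULES)
print(block)
print("fits 400-token budget:", within_token_budget(block))
```

The point is not the specific fonts or colors but the structure: positive rules narrow the output distribution, and the "never use" list explicitly blocks the statistically likely defaults the model would otherwise regress toward.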

This "what to avoid" list is the real insight. Taste isn't just knowing what you want; it's knowing what to reject. It's having opinions about default settings and being willing to overturn them. Most people accept any output because they lack a strong sense of what "better" should look like, so they don't know what to push for. That's why taste can't be shortcut: you can't learn it from tutorials. You acquire it through exposure, by observing thousands of examples and building an internal model of what works and what doesn't. It comes from studying typography until you can tell why one font pairing feels refined and another generic, even if you can't fully articulate why, and from reading enough great writing to sense when a sentence carries its weight versus merely filling space.

I deeply realize that cultivating taste takes time and deliberate practice. Shann mentioned a new 80/20 rule: 80% of effort goes to AI, 20% to your taste. Let AI do what it’s good at—research, drafts, boilerplate code, structure, formatting, speed. That’s the 80%. Don’t resist it, don’t slow it down, don’t manually do what the machine can do in seconds. That’s wasting your most valuable resources: attention and judgment. But that last 20% is yours. That’s where you decide what to keep, what to delete. Rewriting the intro because AI gave you a safe choice, but safety doesn’t make people scroll. Replacing default components with truly fitting ones. Reviewing outputs and applying all your knowledge of what’s good in your field.

Most people invert this ratio. They spend 80% of their energy on prompts and tweaks, trying to get perfect output in one go, running the same prompt fifteen times with slight wording variations, hunting for the magic phrase that produces exactly what they want. Then they spend almost no time on curation and judgment. They optimize the wrong side of the equation. Quantity without quality is just movement. The internet is flooded with competent mediocrity—everything usable but nothing outstanding—because everyone stops at the same point.

Second Moat: Distribution

You can have the best product, the best content, the best design. But if no one sees it, it’s meaningless. This is a moat most builders, especially technical ones, severely underestimate. AI has lowered the barrier to creation, but it hasn’t touched the trust barrier. Creation is becoming commoditized; anyone can publish products, create content, run marketing campaigns. The obstacle to making is approaching zero. But what about trust? That remains as high as ever, or even higher, because the flood of AI content makes people more skeptical, not less. When everything can be AI-generated, trust in the human behind the work becomes a premium asset.

Shann pointed out a key difference: the gap between “vibe coded and published” and “someone actually uses and pays for it” is almost entirely about distribution. And the core of distribution is large-scale trust. Yes, you can generate fifty posts in an hour. You can automate outreach, repurpose content across platforms, schedule everything a month in advance. Some people post over a thousand AI-generated pieces daily across hundreds of accounts, yet their engagement approaches zero. Because quantity without quality is just noise—audiences can tell what’s mass-produced and what’s made for them.

The difference between good and bad content rarely lies in the information it contains. It’s whether the audience trusts the person who wrote it. Trust comes from consistency, recognizable voice, accumulated evidence that this person knows what they’re talking about—because they’ve shown their work over months or years. You can’t create this with prompts alone. Trust operates on a different clock. AI can compress days into minutes, but trust still takes months or years to build. No shortcuts, no hacks. You can’t code trust into a vibe.

I believe there’s an important distinction most overlook: passive audiences are commodities; followers are vanity metrics. Active communities are the real moat. Those who interact with your replies, share your work without being asked, come back daily because you’ve become part of their thinking process. You can’t create this with content calendars or scheduling tools. You earn it by doing something genuinely useful, speaking concretely rather than vaguely, honestly sharing what you know and don’t know, and showing up long enough for people to start paying attention. The true advantage of distribution in the AI era is using AI to handle logistics—formatting, repurposing, scheduling, analyzing—so you can focus all your energy on making what’s worth sharing even better.

Taste feeds back into distribution. If what you produce is truly good, people will start sharing it. They share because it makes them think, not because you ask them to. If your work is generic, no amount of posting frequency can save it. You’re just faster at putting more mediocrity in front of more people.

Third Moat: High Agency

This is a moat most underestimate, but perhaps the most crucial of the three. Taste can be cultivated, distribution can be built, but high agency is a personality trait that either drives everything else or blocks it. High agency is the willingness to figure things out without anyone handing you a tutorial. When faced with obstacles, finding ways around rather than stopping. Combining tools without instructions because you’re curious enough to try. When something doesn’t work, opening documentation and trying four different approaches before asking for help.

Replit’s CEO once said: “You don’t need coding experience. You need perseverance. You need to learn fast.” Coinbase’s CEO said something similar: their best employees often look unqualified on paper, but they are high-agency people who get things done without needing management oversight. Today, the most successful people aren’t the most qualified or technically skilled—they’re those who act without asking for permission. Non-developers can launch Chrome extensions, SaaS products, and full mobile apps over a weekend because they’re curious enough to open tools and tinker, rather than waiting for perfect courses or perfect timing.

AI is a multiplier, not a balancer. This might be the biggest misconception about these tools right now. People talk about AI democratizing access and leveling the playing field. Technically true, but practically misleading. Multipliers amplify whatever you bring to them. Curiosity plus AI equals ten times leverage: you move faster, learn faster, build faster, and course-correct more quickly. Passivity plus AI equals zero. Zero times ten is still zero.

In practice, high agency looks like this: instead of asking “How do I do this?” you ask “What if I try this?” and actually try it. Before posting a question, before searching for answers, you experiment. You fail, learn from failure, and try again with new insights. The willingness to engage with uncertainty rather than retreat is what separates those building real things from those just consuming content about building.

You see this in people who don’t just code with Claude but go to X, Reddit, communities, and source code—studying what top builders are actually doing. They reverse-engineer why some products feel better than AI’s default settings. They learn underlying frameworks instead of copying prompts. They ask Claude to critique their work, use AI to challenge their assumptions rather than just confirm them. High-agency people treat patience as a strategic asset. Others rush to release the first usable thing, creating opportunities for those willing to go deep. When the market is flooded with speed and superficiality, slow and thorough become competitive advantages.

The biggest misconception about AI now is that it’s a shortcut. It’s a speed multiplier. Applying it to poor judgment only accelerates you toward mistakes. It won’t save you from building the wrong thing; it will help you build the wrong thing faster. Among the three moats, high agency is perhaps the hardest to fake. AI can approximate most execution layers: code, design, copy, research. But what it can’t do is the drive to figure things out when everything’s unclear and no one tells you what’s next. That must come from you—it’s the foundation that makes the other two possible.

The Window Is Closing

Right now, most people using AI are lazy about it. I say this not to be harsh, but because it’s an observable fact. The default behavior: prompt, accept, publish. They hardly edit, rarely apply judgment, and almost never inject taste. The result is a growing ocean of competent, forgettable, indistinguishable outputs.

This won’t last forever. As AI improves, tools become more intuitive, and more people understand craft, the gap between lazy AI use and deliberate use will narrow. Today, just building these three moats puts you ahead of 95% of people using the same tools. The window will close, but for now, it’s still open.

I've observed a phenomenon: your audience is drowning in AI slop. Every scroll is a wall of generic outputs that look, sound, and feel the same. Cultivating taste to know what's worth making, earning trust over time to build real distribution, and maintaining enough high agency to keep figuring things out while others accept the defaults will make you stand out immediately. Not because you're faster, or have better tools, or discovered some secret prompt, but because you're doing what almost no one is willing to do: caring about what happens after AI finishes.

Shann’s timeframe is 12 months. I believe he’s right. After 12 months, having taste won’t be rare; it will be expected. Distribution will be harder to establish because everyone will try. Those who start now gain a compounding early advantage. This isn’t about artificially creating scarcity or urgency; it’s the reality of the technology adoption curve. Early adopters build infrastructure, accumulate expertise, earn trust. Latecomers will have to compete in a more crowded space.

My advice is simple: build all three moats. Taste knows what’s worth making, distribution makes it seen, and high agency keeps you moving when everything’s unclear. That’s how you create things people truly remember. Others will publish faster and wonder why no one cares. Tools are just tools; what truly matters is what you do with them and how much of yourself you invest in the process.
