
From Founder-Led Chaos to Repeatable GTM

2025-09-13

Most early-stage companies do not fail because nobody works hard enough. They fail because the team mistakes activity for learning.

A founder gets a warm intro. A big logo takes a meeting. A prospect asks for a feature. A developer signs up with a personal email. Someone from a target account reads three blog posts, joins a webinar, or forks a GitHub repo. Each of those moments feels like momentum, and sometimes it is. But in the early days, the real question is not, "Can we turn this into a deal?"

The better question is, "Can we turn this into a repeatable motion?"

That distinction has shaped most of my work as a founder and GTM operator. I have spent a lot of time in the messy middle where the product exists, the market is real, the customers are interested, but the company has not yet figured out who buys, why they buy, what creates urgency, and what can be repeated without heroic effort.

That was the core lesson from building AutoCloud. We started with a deck and a strong belief that large enterprises were going to struggle with cloud complexity across AWS, Azure, and Google Cloud. The pain was real. Teams were dealing with security risk, governance gaps, cloud sprawl, unclear ownership, and an increasingly fragmented infrastructure footprint. We were not wrong about the problem.

But being right about the problem is not the same as having a scalable GTM motion.

In the earliest days, founder-led selling was the only motion available. We sold through relationships, consulting history, credibility, and sheer persistence. We had enough domain expertise to earn meetings with serious companies, and we had enough conviction to get buyers excited about where the product could go. That founder-led energy helped us win early customers, raise capital, and build a real product.

It also created the first trap.

When you are selling to large enterprises, especially as a small startup, every successful meeting feels like validation. If a company like Wells Fargo, HSBC, BlackRock, Paramount, or Standard Industries wants to talk, you take the meeting. If they say the problem matters, you listen. If they say, "We love this, but could it also do this?" you want to say yes.

Sometimes saying yes is the right thing to do. Early customers should shape the product. But if every customer pulls the roadmap in a different direction, you are not building a company. You are building a collection of custom projects with a SaaS wrapper around them.

That was one of the hardest lessons from AutoCloud. We had real enterprise interest, real technical problems, and real contracts. But the strategic question was always whether the next deal made the motion clearer or more confusing.

A big logo can be dangerous if it teaches you the wrong lesson. It can make you believe that enterprise demand is repeatable when what you actually have is a bespoke executive relationship, a one-off use case, or a customer-specific roadmap. The logo looks like evidence. Sometimes it is. Sometimes it is noise.

The job in early GTM is to separate the two.

Building GTM from zero means designing a learning system

When I think about building a GTM motion from zero, I do not think the first step is hiring reps, buying tools, or writing a sales playbook. Those things matter later. The first step is building a learning system.

At AutoCloud, that meant using every customer conversation to answer a few basic questions: who feels this pain, why would they buy, what creates urgency, and what could be repeated across accounts.

In the beginning, those answers were not obvious. Cloud complexity can be felt by platform teams, security teams, DevOps teams, architecture teams, FinOps teams, and executives. Each group describes the pain differently. A CISO might care about risk and attack surface. A platform leader might care about standardization and developer velocity. A CIO might care about governance and operating model. A DevOps team might care about reducing manual toil.

That creates a positioning challenge. If you try to speak to everyone, you end up being specific to no one.

The breakthrough in early GTM is finding the wedge where the pain, buyer, workflow, and product advantage line up. At AutoCloud, some of the strongest wedges came from concrete problems like cloud asset visibility, infrastructure governance, self-service provisioning, and cloud risk analysis. Those were not abstract platform ideas. They were specific operational pains that created urgency.

One example was Blueprints. A customer needed a better way for developers to provision approved cloud infrastructure without waiting on DevOps for every request. The pain was not "cloud management" in the abstract. The pain was that manual provisioning was slow, inconsistent, and risky. DevOps was becoming a bottleneck, and developers needed a safer self-service path.

The product answer was to create an abstraction on top of Infrastructure as Code. Blueprints packaged approved Terraform scripts, policy enforcement, tagging, role-based access, analytics, logging, and a cleaner user interface so developers could request infrastructure in a controlled way.
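As a minimal sketch of how such an abstraction might be modeled (all names here are hypothetical, not AutoCloud's actual implementation), a blueprint bundles a vetted Terraform module with the guardrails that have to travel with it:

```python
from dataclasses import dataclass, field

@dataclass
class Blueprint:
    """An approved, self-service infrastructure template (hypothetical model)."""
    name: str
    terraform_module: str               # path or registry source of vetted IaC
    allowed_roles: set[str]             # who may request this blueprint
    required_tags: dict[str, str]       # tags stamped on every resource
    policies: list[str] = field(default_factory=list)  # policy checks to enforce

    def can_request(self, role: str) -> bool:
        # RBAC gate: developers only see blueprints their role allows
        return role in self.allowed_roles

    def render_request(self, requester: str, params: dict) -> dict:
        # A provisioning request carries the module, its parameters, and the
        # mandatory tags, so governance travels with every self-service request.
        return {
            "module": self.terraform_module,
            "params": params,
            "tags": {**self.required_tags, "requested_by": requester},
            "policy_checks": self.policies,
        }

web_service = Blueprint(
    name="standard-web-service",
    terraform_module="registry.internal/modules/web-service",
    allowed_roles={"developer", "platform"},
    required_tags={"cost_center": "eng", "managed_by": "blueprints"},
    policies=["no-public-s3", "encrypt-at-rest"],
)

assert web_service.can_request("developer")
request = web_service.render_request("alice", {"instance_count": 2})
```

The design point is that the developer never touches raw Terraform or tagging policy; the approved module, the policy checks, and the ownership tags are inseparable from the request itself.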

The important GTM lesson was not just that we built a product. It was that we found a repeatable pain pattern: enterprises wanted developer velocity, but they could not sacrifice governance and security to get it. That tension was a real wedge.

This is how zero-to-one GTM should work. You start with messy customer conversations. You listen for repeated pain. You translate that pain into a product wedge. Then you test whether that wedge can become a repeatable sales motion.

ICP is not a spreadsheet exercise

A lot of companies treat ICP like a demographic exercise. Industry, company size, employee count, cloud spend, funding stage, tech stack. Those attributes matter, but they are not enough.

A real ICP is behavioral.

The best-fit customers are not just the ones that look right on paper. They are the ones with the right combination of pain, urgency, authority, budget, implementation feasibility, and repeatability.

That is how I would define ICP in an early-stage environment. I would not just ask, "Who could buy this?" I would ask, "Who is experiencing the problem in a way that makes them likely to act, and who teaches us something we can reuse?"

That second part matters. Early customers should not only produce revenue. They should produce learning.

If I were joining an early GTM team today, I would use the warm lead pool, founder network, inbound interest, and existing customer conversations as a research lab. I would look for the highest-quality opportunities, not simply the largest ones. That means weighing each opportunity on pain, urgency, authority, budget, implementation feasibility, and repeatability.

The repeatability filter is the one many early teams underweight. A large customer with a strange use case can consume the entire company. A smaller customer with a clean, urgent, repeatable pain can teach you how to build the motion.

This does not mean ignoring enterprise logos. It means being honest about what each opportunity represents. Some deals are revenue. Some are learning. The best early deals are both.

Signals are not strategy until they become workflow

Once a company has some traction, the next problem is usually not a lack of data. It is too many disconnected signals.

A prospect attends a webinar. Someone downloads a guide. A developer signs up with a Gmail address. A target account visits the pricing page. A user forks a repo. A company hires three platform engineers. A team starts using a competing tool. A champion changes jobs. A customer starts inviting more users.

Individually, none of these signals is a strategy. Together, they can become one.

That has been a recurring theme in my GTM work after AutoCloud as well. In advisory work, and in conversations around GTM engineering roles, I have spent a lot of time thinking about how to connect enrichment, CRM data, product signals, intent data, and rep workflows into something useful.

The goal is not to buy more tools. The goal is to help the GTM team focus.

For example, a sales team might have demo signups, webinar attendees, personal-email leads, CRM records, LinkedIn data, website activity, and product usage all sitting in different systems. Reps can technically access all of it, but they do not have a clear answer to the only question that matters: "Who should I spend time on today, and why?"

That is where GTM engineering becomes powerful.

The work is part data, part systems, part messaging, and part sales judgment. You enrich the lead. You identify the company. You map the likely persona. You look for account fit. You connect the behavior to a likely pain. You route the account correctly. You write messaging that reflects the signal. Then you measure whether the signal actually converts.

A good signal-based motion does not just say, "This account is active." It says, "This account is likely dealing with this problem, this person is probably the right entry point, and this is the most relevant reason to reach out now."

That last phrase — "reason to reach out now" — is the difference between automation and noise.
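A sketch of what that workflow can look like in code (the signal names, weights, and plays here are invented for illustration, not a real system): score each account's signals, and surface only the accounts that come with a concrete reason to reach out.

```python
# Hypothetical signal weights: which behaviors suggest active evaluation
SIGNAL_WEIGHTS = {
    "pricing_page_visit": 3,
    "usage_spike": 3,
    "repo_fork": 2,
    "webinar_attended": 1,
    "guide_download": 1,
}

# Hypothetical mapping from a signal to the likely pain and entry point
PLAYS = {
    "pricing_page_visit": ("an active purchase evaluation", "economic buyer"),
    "usage_spike": ("expanding internal use", "current champion"),
    "repo_fork": ("a hands-on technical evaluation", "engineer who forked"),
}

def prioritize(accounts: list[dict], threshold: int = 3) -> list[dict]:
    """Turn scattered signals into 'who to work today, and why'."""
    actions = []
    for account in accounts:
        score = sum(SIGNAL_WEIGHTS.get(s, 0) for s in account["signals"])
        # Pick the strongest signal that maps to a concrete play
        reason = next(
            (s for s in sorted(account["signals"],
                               key=lambda s: SIGNAL_WEIGHTS.get(s, 0),
                               reverse=True)
             if s in PLAYS),
            None,
        )
        if score >= threshold and reason:
            pain, persona = PLAYS[reason]
            actions.append({
                "account": account["name"],
                "score": score,
                "entry_point": persona,
                "reason_to_reach_out": f"{reason} suggests {pain}",
            })
    # Highest-scoring accounts first: this is the rep's daily list
    return sorted(actions, key=lambda a: a["score"], reverse=True)

accounts = [
    {"name": "Acme", "signals": ["pricing_page_visit", "webinar_attended"]},
    {"name": "Globex", "signals": ["guide_download"]},
    {"name": "Initech", "signals": ["usage_spike", "repo_fork"]},
]
today = prioritize(accounts)
```

Note what the threshold and the play mapping do together: Globex never surfaces, because one guide download is activity without a reason, while Initech surfaces first with a named entry point and a specific reason to reach out now.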

I have seen this across multiple contexts. In startup growth motions, a book of accounts is often too large for reps to work evenly. You need prioritization. In developer tools, signups often come from individual users before a buyer is obvious. You need enrichment and account mapping. In enterprise sales, a warm intro is only useful if it connects to pain, budget, and timing. You need qualification.

The best GTM systems turn scattered evidence into focused action.

A data point is not a deal

One of the biggest mistakes in sales is treating a signal as more meaningful than it is.

A signup is not a deal. A meeting is not a deal. A warm intro is not a deal. A usage spike is not a deal. A positive technical conversation is not a deal. Even a big logo saying, "This is interesting," is not a deal.

A signal becomes a deal only when you can connect it to a business pain, a buyer, a timeline, and a reason to act.

That is why my approach to warm leads and early opportunities is pretty disciplined. I want to understand what triggered the conversation. Did something change internally? Is there a new initiative? Did an existing process break? Is there a compliance deadline? Is the team replacing another tool? Is there executive pressure? Did a customer issue force the problem into focus?

Then I want to understand the workflow behind the signal. If a developer signs up for a tool, what were they trying to build? Was it a side project, an evaluation, or part of an active company initiative? If someone from a target account downloads content about a technical problem, is that problem already funded, or are they just learning? If an executive takes a meeting, are they curious, or do they own an urgent business outcome?

The commercial work starts when you can move from signal to context.

From there, I like to qualify across five dimensions: pain, impact, authority, timing, and fit.

That last dimension, fit, is especially important for early-stage companies. Not every deal is worth winning if it drags the product, team, and roadmap away from the market you are trying to build.
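Those five dimensions (pain, impact, authority, timing, and fit) can be sketched as a simple checklist. The verdict logic below is illustrative, not a formal methodology:

```python
QUALIFICATION_DIMENSIONS = ("pain", "impact", "authority", "timing", "fit")

def qualify(opportunity: dict) -> dict:
    """Check which dimensions have evidence and return a verdict.

    `opportunity` maps each dimension to the evidence gathered so far;
    a missing key or None means no evidence yet. Logic is illustrative.
    """
    evidence = {d: opportunity.get(d) for d in QUALIFICATION_DIMENSIONS}
    missing = [d for d, e in evidence.items() if not e]
    # Fit acts as a veto: a deal that drags the roadmap off-market is a
    # distraction no matter how strong the other four dimensions look.
    if not evidence["fit"]:
        verdict = "pass"
    elif missing:
        verdict = "investigate: " + ", ".join(missing)
    else:
        verdict = "pursue"
    return {"evidence": evidence, "missing": missing, "verdict": verdict}

strong = qualify({
    "pain": "manual provisioning bottleneck",
    "impact": "developer velocity stalled",
    "authority": "platform VP owns the initiative",
    "timing": "cloud migration this quarter",
    "fit": "core governance use case",
})
```

The structural choice worth noting is that fit is a veto rather than one more score to average in, which matches the filtering role described above: a shiny opportunity with no fit is a pass, not a weaker pursue.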

The best early GTM people are not just closers. They are filters. They know how to turn strong signals into pipeline, but they also know when a shiny opportunity is actually a distraction.

Developer growth is still GTM, but the motion is different

Developer-focused products make all of this more interesting because the first user and the economic buyer are often not the same person.

A developer might discover the product through GitHub, documentation, a tutorial, a Reddit thread, Hacker News, Stack Overflow, a framework integration, or a very specific Google search. They may not want to talk to sales. They may not even know yet whether their company will buy the tool. They just want to solve a problem.

That means developer growth has to start with usefulness, not persuasion.

At AutoCloud, we saw this with CloudGraph, our open-source project. The idea was to give engineers a way to query and understand cloud infrastructure more easily. It gave the community something useful before we asked for anything in return. It also created a different kind of signal. Stars, forks, issues, inbound questions, and community engagement became clues about which problems resonated.
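Community engagement like that can be read straight off repository metadata. A small sketch (the thresholds and signal labels are invented; the field names match the shape of the GitHub repository API object):

```python
def repo_signals(repo_stats: dict) -> list[str]:
    """Translate raw repository metadata into GTM-relevant signals.

    `repo_stats` is shaped like the GitHub REST API repository object
    (e.g. GET /repos/{owner}/{repo}); the thresholds are illustrative.
    """
    signals = []
    if repo_stats.get("forks_count", 0) > 10:
        # Forks imply hands-on adaptation, not just passive interest
        signals.append("developers adapting the code")
    if repo_stats.get("stargazers_count", 0) > 100:
        signals.append("problem resonates broadly")
    if repo_stats.get("open_issues_count", 0) > 5:
        # Issues are direct statements of pain worth reading for patterns
        signals.append("active engagement worth triaging for pain")
    return signals

stats = {"stargazers_count": 850, "forks_count": 120, "open_issues_count": 30}
signals = repo_signals(stats)
```

None of these numbers is a deal on its own; they are clues about which problems resonate, to be connected to accounts and pains like any other signal.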

The broader lesson is that developer GTM works best when you meet developers in the workflows they already have. You do not force them into your funnel. You show up where they already search, build, debug, compare, and learn.

For a developer product, I would think about growth as a system with several connected pieces: documentation, tutorials, starter repos, open-source projects, framework integrations, problem-specific content, and community.

The content strategy should not be generic thought leadership. It should be problem-specific and implementation-oriented. For example, in an AI developer product, useful content might be about adding memory to an agent framework, comparing agent memory to RAG, building persistent context for customer support agents, or handling personalization across sessions. The point is to align content with a real developer job to be done.

This is also where GTM and product blur together. A tutorial can be a growth asset, but it can also reveal whether the product is easy to adopt. A starter repo can drive signups, but it can also expose gaps in onboarding. A framework integration can create distribution, but it can also clarify positioning.

Developer growth is not separate from GTM. It is GTM adapted to a buyer journey where trust is earned through usefulness before a sales conversation ever happens.

The through-line: turn ambiguity into repeatability

The common thread across these experiences is the same: early GTM is the work of turning ambiguity into repeatability.

At AutoCloud, that meant moving from founder-led selling and broad cloud pain to sharper wedges around governance, security, provisioning, and risk. In GTM engineering work, it means turning scattered signals into focused workflows. In sales, it means turning warm leads into qualified opportunities by connecting them to pain, impact, authority, timing, and fit. In developer growth, it means turning individual usefulness into adoption, then adoption into a commercial motion.

That is the kind of work I like most.

I like being close enough to the customer to hear the real pain, close enough to the product to understand what is possible, and close enough to revenue to know whether the market is actually responding. I like the moment when a messy set of conversations starts to become a pattern. I like figuring out which early signals matter and which ones are noise. I like building the system that helps a team stop guessing.

Because in the early days, the job is not simply to generate activity.

The job is to learn faster than the market can confuse you.

And once you learn what repeats, you can build the motion around it.