Coleman Management Advisors

On May 5, 2026, AMD reported Q1 2026 revenue of $10.3 billion, a 38% year-over-year jump that beat consensus estimates by more than $450 million and sent the stock up roughly 18% the next session. The headline number, however, is not the real story. The story is the Data Center segment, which alone produced $5.8 billion in revenue, up 57% year-over-year, with CEO Lisa Su telling analysts she has “strong and increasing confidence” in reaching tens of billions in data center AI revenue next year. For the CEO of a $5M to $100M company, this is a signal — not about chips, but about timing.

AMD’s Q1 2026 numbers, in plain English

AMD’s Q1 was not a modest beat. Non-GAAP diluted earnings per share landed at $1.37, non-GAAP gross margin expanded to 55%, and operating income reached $2.5 billion. These are the kinds of numbers that reset analyst models for the rest of the year and pull forward enterprise capex commitments that would otherwise have stretched into 2027.

The composition of revenue is what matters most. The Data Center segment grew 57% year-over-year on the strength of EPYC CPUs and Instinct GPUs sold into hyperscale and enterprise AI build-outs. Client and gaming revenue grew too, but it was the Data Center number that pulled the company’s entire valuation higher in a single trading day. For Q2 2026, Lisa Su guided to roughly $11.2 billion in revenue, implying another 46% year-over-year jump.

For a mid-market CEO, the temptation is to skim these numbers and move on — semiconductor results feel like Wall Street noise. That would be a mistake. AMD’s Q1 is the cleanest real-time signal available about how fast enterprise AI infrastructure spending is actually expanding, and the curve is steeper than most operating plans assume.

Why Lisa Su’s “tens of billions” call matters more than the revenue beat

On the earnings call, Lisa Su told analysts she has strong and increasing confidence that AMD will reach tens of billions of dollars in data center AI revenue next year. That single sentence is more important than any number in the Q1 release. Forward guidance from chipmakers is the leading indicator for enterprise capex because chip orders are placed six to twelve months before the underlying compute capacity actually shows up in customer data centers.

What Su is effectively saying is that her customers — the hyperscalers, sovereign clouds, and large enterprises — have already committed to spending at a level that would more than double AMD’s AI segment from where it sits today. Nvidia’s own outlook implies the same trajectory, only larger. Taken together, the two companies’ order books suggest that production capacity for enterprise AI applications will be roughly two to three times higher by mid-2027 than it is today.

For a $5M to $100M revenue company, the implication is not “go buy GPUs.” It is that the cost, availability, and capability of production-grade AI tooling will continue to improve faster than most planning cycles assume. A decision made today based on what AI can do in May 2026 will look obsolete by Q1 2027. That accelerating curve has to be priced into hiring, product, and operating decisions right now.

The strategic signal for $5M to $100M companies

The most consequential lesson from AMD’s quarter is not about technology — it is about compounding deployment speed. Companies that locked in their AI tooling decisions in 2024 are already on their third generation of internal applications, while companies that have been waiting for the dust to settle are starting from scratch with vendors who have eighteen months of customer learnings baked into their products. The gap between the two cohorts is widening every quarter, and AMD’s Q1 confirms the underlying compute curve will not slow.

This is the part most boards still get wrong. They treat AI investment as a single yes-or-no decision tied to a single budget line. In practice, the companies pulling away from their competitors are treating AI deployment as a continuous operating capability — staffed, measured, and iterated on the same cadence as sales or finance. They are not betting on a model or a platform; they are betting on the rate at which their organization can absorb new compute capacity. That absorption rate, not the raw spend, is what determines who actually benefits from the AMD-and-Nvidia capex wave.

For Coleman Management Advisors’ clients, this shows up as one clear question: does your operating model have a named owner, a budget envelope, and a 90-day deployment cycle for AI tooling? If the answer is no, the next twelve months of compute price declines will accrue to your competitors, not to you. Our AI Automation Suite is built specifically around this 90-day deployment cadence because that is the operating rhythm the AMD and Nvidia capex curve is actually rewarding.

Where mid-market operators are getting AI infrastructure wrong

The most common AI strategy mistake at the $5M to $100M level is over-indexing on the infrastructure layer when the leverage actually lives at the workflow layer. Operators read about AMD or Nvidia, assume the next move is some kind of capital build-out, and either freeze or chase a vendor commitment that does not match their actual operating problem. The Data Center segment growing 57% does not mean every mid-market company needs its own private AI stack. It means the underlying capacity to run AI cheaply, on demand, is exploding — and the right response is to figure out which workflows in your business should consume that capacity first.

The workflow lens reframes the decision entirely. Instead of asking whether to buy AI infrastructure, operators should be asking which three operational chokepoints — quoting, scheduling, customer support, accounts receivable, hiring screening, vendor evaluation — would deliver compounding margin if they were partially automated. That short list is knowable in a week and almost always sits inside operations and finance rather than IT. AMD’s Q1 simply confirms that the cost to act on that list keeps falling.

The second mistake is treating the build-or-buy decision as binary. In practice, mid-market companies should be buying the foundation model and building the workflow integration, because that is where defensible operating advantage lives. Companies that try to build at the model layer waste capital; companies that try to buy at the workflow layer lose the proprietary process that gives them margin in the first place.

The decision framework: build, buy, or wait

If AMD’s Q1 2026 results force a CEO to make one decision before the next board meeting, it should be to establish a named AI operating owner: a single executive responsible for a 90-day rolling list of workflow automation experiments, with a budget envelope and a clear measure of success tied to margin or revenue per FTE. Without that ownership, every additional dollar AMD and Nvidia spend on capacity translates into competitive advantage for someone else.

Waiting is almost never the right answer at this point in the curve. It made sense in 2023, when model capabilities were unstable and vendor lock-in was a real risk. In 2026, the foundation model layer has commoditized to the point where switching costs are low, and the AMD result confirms compute pricing will continue to fall through 2027. Waiting no longer buys caution; it buys opportunity cost, compounding at roughly the same rate AMD’s data center segment is growing.
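To make that compounding claim concrete, here is a back-of-envelope sketch. Mapping AMD’s roughly 57% data center growth rate onto a capability-per-dollar curve is an illustrative assumption on our part, not a figure from AMD’s release:

```python
# Back-of-envelope: the gap a company opens by waiting, assuming the
# AI capability-per-dollar curve compounds at roughly AMD's data center
# growth rate (~57% year over year). Illustrative assumption, not AMD data.

annual_rate = 0.57        # assumed YoY improvement in capability per dollar
quarters_waited = 4       # a company that defers the decision for one year

# Convert the annual rate to a quarterly one, then compound it over the wait.
quarterly_rate = (1 + annual_rate) ** 0.25 - 1
gap = (1 + quarterly_rate) ** quarters_waited

print(f"relative gap after {quarters_waited} quarters: {gap:.2f}x")
# prints: relative gap after 4 quarters: 1.57x
```

The exact rate matters less than the shape: at any double-digit compounding rate, a year of deferral leaves a gap that a one-time catch-up budget cannot close, because the early movers are compounding too.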

The companies Coleman Management Advisors works with that have made the most progress in the last twelve months share three traits: a single accountable AI owner, a workflow-first deployment model, and a quarterly review cadence that treats AI tooling like a P&L line rather than a project. If you want to map where your business sits on that curve and what the next ninety days should look like, contact our team and we will run the diagnostic.
