What Samsung's numbers actually show

Samsung's semiconductor division reported operating profit of approximately 9.8 trillion won (roughly £5.6 billion) for the first quarter of 2026, a near 49-fold increase on the same period a year earlier, according to the Guardian. Group operating profit reached a record 14.4 trillion won, with the chip business accounting for more than two-thirds of the total.

The numbers reflect a structural shift, not a cyclical bounce. The boom in AI datacentre construction has driven Samsung and its peers to reallocate production capacity towards advanced memory products, particularly high-bandwidth memory (HBM), which Nvidia uses in its AI accelerators. That reallocation has come at the expense of conventional DRAM and NAND output, tightening supply across the entire memory market.

Samsung is not alone. SK Hynix and Micron, the other two major memory manufacturers, have followed the same playbook, redirecting fabrication lines towards HBM to serve hyperscaler customers building out AI infrastructure at pace.

Why the chip shortage is set to persist

Samsung itself has warned that the supply shortage will not ease soon. The company expects conditions to worsen into 2027 as datacentre buildout continues to outstrip chipmakers' ability to add capacity, as reported by the Guardian.

Industry body SEMI has forecast that global semiconductor capital expenditure will exceed $200 billion in 2026. That figure sounds enormous, but it is not translating into proportional supply growth in memory. SEMI's projections indicate that memory supply growth is expected to lag demand by 15 to 20 percentage points, sustaining price increases across both DRAM and NAND product lines.
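A gap between supply growth and demand growth compounds year on year, which is why a 15-to-20-point shortfall sustains rather than merely raises prices. The sketch below illustrates the mechanism; the specific growth rates are illustrative assumptions, not SEMI figures.

```python
# Illustration of a persistent supply-demand growth gap compounding.
# The 30% demand and 12% supply growth rates are assumptions chosen to
# fall in the 15-20 percentage point gap cited above, not SEMI data.

def demand_to_supply_ratio(demand_growth: float, supply_growth: float,
                           years: int) -> float:
    """Return demand as a multiple of supply after `years` of compounding."""
    demand = (1 + demand_growth) ** years
    supply = (1 + supply_growth) ** years
    return demand / supply

for years in (1, 2, 3):
    ratio = demand_to_supply_ratio(0.30, 0.12, years)
    print(f"Year {years}: demand is {ratio:.2f}x supply")
```

Even a single year of this gap leaves demand roughly 16 per cent ahead of supply under these assumptions, and the multiple widens each year the gap persists.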

Several factors explain the gap. Building a new fabrication facility takes two to three years from ground-breaking to volume production. Existing fabs are being retooled for HBM, which uses more silicon per unit and involves complex packaging processes that limit throughput. Meanwhile, Nvidia's accelerator roadmap, spanning Blackwell and its successors, continues to demand ever-larger quantities of HBM from all three suppliers.

The result is a market where even aggressive capital spending cannot close the supply-demand gap within the next 18 months.

The knock-on cost for mid-market IT buyers

For hyperscalers such as Microsoft, Google, and Amazon, the shortage is manageable. They negotiate long-term supply agreements directly with Samsung, SK Hynix, and Micron, often pre-paying or co-investing in capacity. Mid-market firms have no such bargaining position.

Rising memory chip prices feed directly into the cost of servers, storage arrays, and networking equipment. Any business planning a conventional IT refresh cycle, let alone an investment in on-premises AI infrastructure, faces higher component costs and longer lead times. A server quoted at a given price in late 2025 could carry a materially higher bill of materials by the second half of 2026, with delivery timelines stretching further.

The effect is not limited to organisations buying AI hardware. Because Samsung and its competitors have shifted conventional DRAM lines to HBM production, even standard memory modules used in everyday enterprise servers are becoming scarcer and more expensive. Firms procuring laptops, workstations, or cloud instances in volume will feel the pressure.

For UK businesses specifically, currency adds another variable. Sterling's movements against the South Korean won and the US dollar, in which most semiconductor contracts are denominated, can amplify or dampen the price impact. Finance directors budgeting in pounds need to account for both the underlying commodity price trend and foreign exchange exposure.
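The two effects multiply rather than add: a dollar-denominated price rise combined with a weaker pound produces a larger sterling increase than either alone. A minimal sketch, using entirely hypothetical figures for the component price, inflation rate, and exchange-rate move:

```python
# Hypothetical illustration: sterling cost of a dollar-priced component
# after a price rise and an FX move. All numbers below are assumptions.

def gbp_cost(usd_price: float, price_inflation: float,
             gbp_usd_rate: float, rate_change: float) -> float:
    """Sterling cost after a dollar price rise and a GBP/USD rate move.

    gbp_usd_rate is quoted as dollars per pound, so a negative
    rate_change (sterling weakening) increases the sterling cost.
    """
    new_usd_price = usd_price * (1 + price_inflation)
    new_rate = gbp_usd_rate * (1 + rate_change)
    return new_usd_price / new_rate

baseline = gbp_cost(100, 0.00, 1.25, 0.00)    # $100 module at $1.25/GBP
adverse = gbp_cost(100, 0.18, 1.25, -0.05)    # price +18%, sterling -5%
print(f"Sterling cost rises {adverse / baseline - 1:.1%}")  # ~24.2%
```

Under these assumed numbers, an 18 per cent dollar price rise becomes roughly a 24 per cent sterling increase once a 5 per cent currency move is layered on top, which is the exposure a finance director budgeting in pounds has to model.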

What operators can do now

The shortage is structural, and no single mid-market buyer can influence it. But there are practical steps that reduce exposure.

Lock in procurement early

Organisations with planned hardware refreshes in late 2026 or 2027 should engage suppliers now. Waiting until the point of need risks longer lead times and higher spot pricing. Framework agreements with IT resellers that include price-hold clauses, even partial ones, offer some insulation.

Reassess build-versus-buy for AI workloads

Firms considering on-premises AI infrastructure should weigh the cost of purchasing GPU servers, which contain significant quantities of HBM, against consuming AI capacity through cloud providers. Hyperscalers have already secured supply; their pricing may rise, but availability is more predictable than in the open hardware market.

Extend existing hardware lifecycles

Where feasible, deferring a full refresh by 12 months and investing in incremental upgrades, such as memory expansion or storage tiering, can bridge the gap until supply conditions improve. This is not cost-free, but it avoids buying at the peak of a shortage.

Build chip-cost assumptions into capital plans

Finance teams should model IT capital expenditure scenarios that include memory price increases of 15 to 20 per cent year on year, in line with the supply-demand gap that SEMI has identified. Treating component costs as static in a three-year plan is no longer realistic.
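The compounding matters: a 15-to-20 per cent annual rise held for three years is far more than three times the first-year increase. A minimal sketch of that scenario modelling, with an assumed baseline memory spend chosen purely for illustration:

```python
# Sketch of the capex scenario modelling described above: compounding an
# assumed 15-20% annual memory price rise over a three-year plan.
# The baseline spend figure is an illustrative assumption.

def memory_cost(baseline: float, annual_rise: float, years: int) -> float:
    """Memory spend after `years` of compounding annual price rises."""
    return baseline * (1 + annual_rise) ** years

baseline_memory_spend = 100_000  # assumed annual memory spend, in pounds

for rise in (0.15, 0.20):
    year3 = memory_cost(baseline_memory_spend, rise, 3)
    print(f"{rise:.0%} p.a. -> year-3 memory spend: £{year3:,.0f}")
```

At the top of the assumed range, the memory line is roughly 73 per cent above baseline by year three, which is the kind of drift a static three-year component-cost assumption silently ignores.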

Samsung's record quarter is a symptom, not the story. The story is a multi-year reordering of semiconductor supply chains around AI demand, and every business that buys hardware is exposed to the consequences.