Founded in 1993 by Jensen Huang, Chris Malachowsky, and Curtis Priem, Nvidia began as a graphics chip company targeting the PC gaming market. The founding premise was that specialised parallel processing hardware would outperform general-purpose CPUs for visual computing tasks. That architectural bet proved correct, and the GPU became the standard for high-performance graphics.

The inflection point that reshaped the company's trajectory came in the early 2010s, when researchers discovered that Nvidia's CUDA programming platform, launched in 2006, allowed GPUs to accelerate machine learning workloads far beyond what CPUs could manage. The company had not originally designed CUDA for AI, but it moved quickly to position its hardware and software stack as the infrastructure layer for deep learning. That repositioning proved consequential: as demand for AI training and inference scaled, Nvidia's data centre business grew to rival and then surpass its gaming revenues.
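
That shift was possible because CUDA exposes the GPU's parallelism through an ordinary C-like programming model. The sketch below is purely illustrative rather than Nvidia's own code: a minimal CUDA kernel, with hypothetical names, that adds two arrays by assigning one element to each GPU thread. This data-parallel pattern is what lets matrix-heavy machine learning workloads map so well onto the hardware.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative only: each thread handles one element, so the whole array
// is processed in parallel rather than in a sequential CPU loop.
__global__ void vector_add(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;            // one million elements
    const size_t bytes = n * sizeof(float);

    // Allocate and initialise host data.
    float* h_a = (float*)malloc(bytes);
    float* h_b = (float*)malloc(bytes);
    float* h_out = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Allocate device memory and copy the inputs across.
    float *d_a, *d_b, *d_out;
    cudaMalloc(&d_a, bytes); cudaMalloc(&d_b, bytes); cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover every element.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(d_a, d_b, d_out, n);

    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);
    printf("out[0] = %f\n", h_out[0]);  // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_out);
    free(h_a); free(h_b); free(h_out);
    return 0;
}
```

The same launch pattern, thousands of lightweight threads each doing a small piece of independent arithmetic, is what deep learning frameworks generate under the hood when they dispatch matrix multiplications to the GPU, which is why the CUDA layer rather than any single chip generation became the durable asset.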

Today Nvidia occupies a structurally significant position in the AI supply chain. Its GPU architectures underpin the majority of large-scale model training globally, and its software ecosystem creates meaningful switching costs for research and enterprise customers alike. The company is listed on Nasdaq and has, at points in recent years, ranked among the largest companies by market capitalisation in the world.

For operators and founders, Nvidia is worth watching for a specific reason: it demonstrates how a hardware company can build the durable platform economics typically associated with software. The CUDA ecosystem, not the chip alone, is what sustains margin and customer retention. The lesson is that proprietary developer tooling can lock in a market as effectively as any application layer, and it is directly relevant to anyone building infrastructure businesses in AI or deep tech.