AI infrastructure has been sold as a chip story, but that is only half true. Reuters reported in March that Solidigm, the U.S.-based storage arm of SK Hynix, warned AI’s growing hunger for data could tighten supplies of storage drives over the next few years. That matters because AI systems do not just need fast processors and memory. They also need huge amounts of storage to hold, move, and retrieve data efficiently.
The reason this matters now is scale. Reuters said Nvidia expects AI systems rolling out later this year to need about 35% more storage capacity than earlier models. That is a big jump in a part of the stack most people ignore because it is not as flashy as GPUs. But storage is where the data actually lives, and if supply gets tight there, the whole infrastructure chain gets more expensive.

What Solidigm is actually warning about
This is not a vague future risk. Reuters reported that a Solidigm executive said AI demand for data could create tight supplies for storage drives, especially as training and inference systems keep expanding. The same report linked this to broader concerns already visible in memory markets, where SK Group’s chairman has also warned that shortages of high-bandwidth memory could last until 2030.
That is the uncomfortable part people avoid. AI demand is not creating one bottleneck. It is creating a stack of bottlenecks: GPUs, HBM, power, cooling, and now potentially storage. Pretending storage is a side issue is lazy. As large AI deployments demand ever more data capacity, SSDs and related storage components stop being background hardware and become a genuine capacity constraint.
Why storage matters more than it sounds
Storage matters because AI systems are moving from model training hype toward real deployment. That means companies need infrastructure that can continuously feed large datasets into models and support retrieval, inference, and enterprise workloads at scale. Reuters noted that Jensen Huang emphasized the importance of moving data faster between storage and chips, which shows this is not just a back-office cost issue. It is a performance issue too.
There is also a supply-chain angle. Reuters has already reported that AI demand is straining the semiconductor ecosystem more broadly, including advanced packaging, testing, and high-end components. That does not prove every storage product is in shortage today, but it does show the industry is under rising infrastructure pressure across multiple layers at once.
The simple breakdown
| Storage pressure point | Verified detail | Why it matters |
|---|---|---|
| AI systems later in 2026 | Nvidia expects about 35% more storage capacity | New AI hardware needs more data capacity, not just more compute. |
| Solidigm warning | AI data demand could cause tight storage-chip supplies | Storage may become the next real bottleneck. |
| Broader memory pressure | SK Group chairman warned HBM shortages may last until 2030 | AI is already stretching adjacent chip markets. |
| Industry response | Solidigm plans higher-density drives and more output | Suppliers are trying to expand, but may still struggle to keep up. |
Why this could get expensive fast
Storage bottlenecks do not get the same headlines as GPU shortages, but they can still raise costs sharply. Reuters reported in January that memory-chip shortages were already squeezing supply across sectors and lifting prices, partly because manufacturers were diverting more capacity toward AI-linked products. If AI demand keeps redirecting manufacturing resources, storage-related components could become more expensive and harder to secure.
That matters for hyperscalers and enterprise buyers because the AI buildout is already capital-intensive. Reuters separately reported that Microsoft, Amazon, Alphabet, and Meta are expected to spend about $635 billion on AI infrastructure in 2026. When budgets are already that large, another infrastructure bottleneck is not a minor annoyance. It is a margin problem.
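The compounding effect is easy to underestimate. A toy back-of-envelope in Python makes it concrete; the 35% capacity figure comes from the Reuters reporting above, but the baseline capacity, unit price, and the 20% price increase are purely hypothetical assumptions for illustration:

```python
# Toy model: how a capacity jump and a price rise compound into spend growth.
# Only the 35% capacity figure is from reporting; all other numbers are
# hypothetical placeholders.

def storage_spend(capacity_pb: float, price_per_pb: float) -> float:
    """Total storage spend: capacity times unit price (arbitrary units)."""
    return capacity_pb * price_per_pb

# Assumed baseline deployment: 100 PB at a normalized unit price of 1.0.
baseline = storage_spend(capacity_pb=100.0, price_per_pb=1.0)

# Next generation: ~35% more capacity (reported), and assume supply
# tightness lifts unit prices ~20% (hypothetical).
next_gen = storage_spend(capacity_pb=100.0 * 1.35, price_per_pb=1.0 * 1.20)

increase = next_gen / baseline - 1.0
print(f"Storage spend grows {increase:.0%}")  # 1.35 * 1.20 - 1 = 62%
```

The point of the sketch is that two moderate pressures multiply rather than add: a 35% capacity jump plus a 20% price rise is a 62% spend increase, not 55%.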
What companies should be watching
A few things matter more than the hype:
- AI demand is no longer stressing only compute chips; storage demand is rising too.
- Larger AI systems need more capacity to store and move data, not just more model horsepower.
- Suppliers are expanding output, but industry warnings suggest supply may remain tight anyway.
- Rising costs in memory and infrastructure can spill into broader AI deployment economics.
Conclusion
AI’s next problem might be storage because the industry is learning the obvious lesson too slowly: compute is useless without data infrastructure that can keep up. Reuters’ March reporting on Solidigm’s warning suggests storage-chip supplies could tighten as AI systems demand more capacity and faster data movement. The blunt takeaway is simple. AI is not just a GPU race anymore. It is becoming a full-stack supply problem, and storage is one of the next weak spots to watch.
FAQs
Why is storage becoming a bigger AI issue?
Because newer AI systems need much more data capacity, and Reuters reported Nvidia expects some upcoming systems to require about 35% more storage than earlier ones.
Who warned about tight storage supplies?
A Solidigm executive told Reuters that AI’s demand for data could cause tight supplies for storage drives over the coming years.
Is this only about SSDs?
The Reuters reporting focused on storage drives and the broader pressure from AI data needs, which points at the storage layer as a whole rather than any single product category.
Why does this matter for AI costs?
Because storage bottlenecks can raise infrastructure costs and complicate deployments at a time when Big Tech is already spending hundreds of billions on AI systems.