
I have used six-month candlesticks for this chart to highlight the significance of the breakout from almost 20 years of consolidation. Note also the steeply rising moving averages. Everything tells me that SK hynix (lower case intentional) is having its day in the sun and is an exciting share to own.
Everybody, often including me, is looking for this AI infrastructure boom to end, but there is nothing in either charts or fundamentals to suggest that is happening. The facts remain strongly positive.
SK hynix is a Korean company which, like SanDisk, is in the right place at the right time. Here is how the company describes what is happening:
As the AI era accelerates, SK hynix is evolving from a memory supplier into a “Full Stack AI Memory Creator.” This represents our commitment to go beyond meeting customer demands — to become a co-designer, co-architect, and ecosystem partner that drives breakthroughs in AI computing together with our customers. By integrating advanced DRAM and NAND technologies, SK hynix will continue to build the core value of the AI era through its full-stack AI memory innovations.
Business is booming.
SK hynix reported record-breaking FY2025 results, with Q4 revenue reaching KRW 32.83 trillion (up 66% YoY) and operating profit hitting KRW 19.2 trillion, driven by massive AI demand for HBM3E and HBM4, according to the SK hynix Q4 2025 earnings summary. Full-year 2025 operating profit exceeded KRW 47 trillion, supported by premium DRAM and NAND growth.
Everything points to a company on fire. The story for SK hynix is much like the one powering SanDisk shares higher, but SK hynix is bigger and very much an industry leader.
SK hynix is thriving thanks to its dominant 62% share of the High Bandwidth Memory (HBM) market, which is crucial for AI chips, particularly as the primary supplier to NVIDIA. Explosive AI-driven demand, high-value contracts, and supply shortages have led to record profits and long-term supply agreements, positioning it as a key beneficiary of AI infrastructure expansion.
Key Drivers of Success:
- HBM Market Leadership: SK hynix currently leads the HBM market, a specialized, high-profit DRAM essential for AI accelerators, with significant orders from top-tier tech firms like NVIDIA.
- NVIDIA Partnership: The company has strategically tailored its HBM products to match NVIDIA’s GPUs, making it an indispensable part of the AI hardware supply chain.
- Capacity and Shortages: Demand for HBM is vastly outstripping supply, leading to a shortage that enables higher pricing and better profitability. The 2026 HBM production capacity is already fully booked.
- Long-Term Agreements: SK hynix is pivoting from volatile, short-term contracts to 3- to 5-year long-term agreements (LTAs) with major clients like Google and Microsoft, ensuring stable future revenue.
- Rapid Innovation: The company is advancing with next-generation technology, developing and shipping HBM4 and sixth-generation (1c nm) DRAM to maintain its competitive edge.
- Strong Conventional Memory Market: Along with HBM, the demand for traditional server DRAM and NAND flash has increased, providing a broader base for its record profitability.
So why is memory technology so important to the AI boom?
High Bandwidth Memory (HBM) is critical to the AI boom because it solves the “memory wall” bottleneck, providing the extreme data transfer speeds required to feed powerful GPUs and AI accelerators. By stacking DRAM chips vertically (3D structure) near the processor, HBM increases bandwidth, reduces latency, and enhances energy efficiency compared to traditional memory, making it essential for training large models and fast inference.
Why HBM is Essential for AI:
- Solving the Memory Bottleneck: AI algorithms, particularly large language models (LLMs), require immense, fast data access. Traditional memory (like DDR) cannot keep up with AI processors, forcing them to wait for data, which creates a massive performance bottleneck.
- 3D Stacking and High Throughput: HBM vertically stacks DRAM dies—connected via through-silicon vias (TSVs)—directly on or next to the processor. This structure enables a massive data highway, moving data faster and more efficiently than side-by-side memory configurations.
- Lower Power Consumption: HBM offers a lower power profile than traditional memory, allowing for improved energy efficiency within AI servers and data centers, which reduces thermal management issues.
- Unmatched Performance for AI Workloads: It provides the data-transfer throughput needed for accelerated computing in AI inference and training, preventing AI chips from sitting idle waiting for data.
- Compact Footprint: Because of its vertical design, HBM takes up less physical space on the circuit board, making it easier to integrate into high-density GPU accelerators.
The explosive demand for AI has made HBM, particularly generations like HBM3 and HBM3E, a central constraint in AI hardware production.
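The scale of the bandwidth gap described above is easy to see with a back-of-envelope calculation: memory bandwidth is roughly interface width (in bits) times per-pin data rate, divided by eight to convert to bytes. The sketch below uses publicly quoted ballpark figures (a 1,024-bit HBM3E stack at up to 9.6 Gb/s per pin versus a 64-bit DDR5 channel at 6,400 MT/s); these are illustrative assumptions, not vendor-confirmed specifications.

```python
# Rough memory-bandwidth comparison: HBM3E stack vs. a single DDR5 channel.
# Figures are public ballpark specs used for illustration only.

def bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = width (bits) x per-pin rate (Gb/s) / 8."""
    return bus_width_bits * pin_rate_gbps / 8

# One HBM3E stack: 1,024-bit interface, up to ~9.6 Gb/s per pin.
hbm3e = bandwidth_gb_s(1024, 9.6)

# One DDR5 channel: 64-bit interface at 6,400 MT/s (~6.4 Gb/s per pin).
ddr5 = bandwidth_gb_s(64, 6.4)

print(f"HBM3E stack:  ~{hbm3e:.0f} GB/s")   # ~1229 GB/s
print(f"DDR5 channel: ~{ddr5:.1f} GB/s")    # ~51.2 GB/s
print(f"Ratio:        ~{hbm3e / ddr5:.0f}x")
```

On these assumed figures, a single HBM3E stack delivers roughly 24 times the bandwidth of a DDR5 channel, and a GPU package carries multiple stacks, which is why HBM, not conventional memory, is the binding constraint in AI hardware.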
Investors are already looking for this boom to end, but it seems it may have only just begun.
Share Recommendations
SK hynix 000660
Strategy – Hardware Beats Software
AI is eating software's lunch, but it can only do that if its voracious appetite for data is fed by ever faster hardware. The result is that a long boom in software companies' shares has come to an end, while hardware stocks are booming.
Many portfolios are misaligned with current market conditions, so there will be massive rotation in the stock market, affecting individual shares and ETFs.