Rising against the trend! Major breakthrough news for storage chips! Institutions: a buying opportunity


Google’s paper on a new algorithm has severely impacted storage chip stocks!

On Friday, against the backdrop of a collective plunge in major U.S. stock indices, U.S. storage chip stocks rose against the trend. During the session, SanDisk briefly gained over 5% and Micron Technology over 3%. By the close, SanDisk was up 2.10%, Micron Technology 0.50%, Seagate Technology 0.34%, and Western Digital 0.73%. The previous day, these stocks had suffered a broad sell-off: at Thursday's close, SanDisk had plummeted over 11%, Seagate Technology had fallen over 8%, Western Digital over 7%, and Micron Technology nearly 7%.

Analysts indicated that the significant drop in storage chip stocks on Thursday might have been due to a market misinterpretation. The ultra-efficient AI memory compression algorithm TurboQuant mentioned in Google’s paper only applies to the key-value cache during the inference stage and does not affect the high bandwidth memory (HBM) occupied by model weights, nor is it related to AI training tasks.

Other analysts claimed that advanced compression technology merely reduces bottlenecks and does not destroy the demand for DRAM/flash memory. Investors may have taken profits from Google’s news, but the consumption of memory remains very robust. The short-term pullback in memory stocks is an “entry opportunity,” not a turning point for stock prices.

Storage Chip Stocks Hit by Google’s New Algorithm

The AI market's "ghost story" is back: Google has publicly released research on a new algorithm that can significantly reduce memory usage, sending storage chip stocks into a sharp recent decline.

On Thursday, SanDisk dropped over 11%, Micron Technology fell nearly 7%, SK Hynix dropped over 6%, Samsung Electronics fell nearly 5%, and Kioxia dropped nearly 6%. Estimates indicate that the market value of major global memory giants evaporated by more than $90 billion in a single day on Thursday. On Friday, in the U.S. stock market, storage chip stocks rose against the trend, with SanDisk up over 2% and Micron Technology up 0.50%.

In the past few months, storage chip companies performed strongly due to a surge in investment in AI infrastructure leading to supply shortages, which caused chip prices to skyrocket and profits to grow. As of this Wednesday, the stock prices of SK Hynix and Samsung Electronics have risen over 50% this year, while Kioxia’s stock price has more than doubled.

The trigger for the decline was the paper "TurboQuant," which Google Research plans to present officially at the International Conference on Learning Representations (ICLR 2026). The Google team claims that, through two innovative techniques, PolarQuant (Polar Coordinate Quantization) and QJL (Quantization JL Transform), it compressed the KV cache to 3-bit precision with "zero loss," reducing memory usage at least six-fold. The algorithm reportedly delivered up to an 8-fold performance improvement on the H100 GPU accelerator compared with an unquantized key-value cache.
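The memory savings claimed for the KV cache can be checked with back-of-envelope arithmetic. The sketch below is illustrative only: the model shape and the pure bits-per-value accounting are assumptions, not figures from Google's paper, and real quantization schemes also store metadata such as per-block scales, which trims the effective ratio.

```python
def kv_cache_bytes(num_layers, num_heads, head_dim, seq_len, bits_per_value):
    # Keys and values each hold num_layers * num_heads * head_dim
    # numbers per token, hence the factor of 2.
    num_values = 2 * num_layers * num_heads * head_dim * seq_len
    return num_values * bits_per_value / 8

# Hypothetical model shape, chosen for round numbers (not from the paper):
fp16 = kv_cache_bytes(32, 32, 128, 4096, bits_per_value=16)
q3 = kv_cache_bytes(32, 32, 128, 4096, bits_per_value=3)
print(f"fp16 KV cache:  {fp16 / 2**30:.1f} GiB")   # 2.0 GiB
print(f"3-bit KV cache: {q3 / 2**30:.1f} GiB")     # 0.4 GiB
print(f"ratio: {fp16 / q3:.1f}x")                  # 5.3x, before metadata overhead
```

Against an fp32 baseline the same arithmetic gives 32/3 ≈ 10.7x, which is one way a "six-fold or more" headline figure can arise depending on the reference precision.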

Google promoted this research on the X platform this week, although the research was initially published last year. Investors may worry that this will reduce the demand for memory from hyperscale data center operators, thereby lowering the prices of components also used in smartphones and consumer electronics.

Institutions: Market May Be Misreading

Morgan Stanley stated in a recent report that the market may be misreading the situation. The technology only applies to the key-value cache during the inference stage and does not affect the high bandwidth memory (HBM) occupied by model weights, nor is it related to AI training tasks. Analysts emphasized that the so-called “6-fold compression” does not represent a reduction in total storage demand, but rather an increase in throughput per GPU through efficiency improvements.
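Morgan Stanley's throughput point can be made concrete with a toy memory budget. All numbers below are hypothetical round figures for illustration, not disclosed specifications: with model weights pinned in HBM and unaffected by KV-cache quantization, shrinking the per-sequence cache lets more sequences run concurrently on the same GPU.

```python
HBM_GIB = 80.0              # total accelerator memory (hypothetical H100-class figure)
WEIGHTS_GIB = 40.0          # model weights; KV-cache quantization does not touch these
KV_PER_SEQ_FP16_GIB = 2.0   # fp16 KV cache per concurrent sequence (illustrative)

def max_concurrent_sequences(kv_bits: int) -> int:
    """Sequences whose KV caches fit in the HBM left over after the weights."""
    kv_per_seq = KV_PER_SEQ_FP16_GIB * kv_bits / 16
    return int((HBM_GIB - WEIGHTS_GIB) // kv_per_seq)

print(max_concurrent_sequences(16))  # 20 sequences at fp16
print(max_concurrent_sequences(3))   # 106 sequences at 3-bit: same HBM, ~5x throughput
```

The total HBM installed does not shrink; the same chip simply serves more queries, which is the "throughput per GPU" reading the analysts describe.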

Morgan Stanley analyst Shawn Kim pointed out that Google’s research should have a more positive impact on the industry, as it affects a key bottleneck. The technology improves the efficiency of the so-called key-value cache used for inference (i.e., running AI models). He wrote, “If models can run with significantly reduced memory demands without losing performance, then the service cost per query will drop significantly, making AI deployment more profitable.” Kim noted that TurboQuant is favorable for hyperscale enterprises considering return on investment opportunities. In the long run, this could also benefit memory manufacturers, as “lower costs per token can lead to higher product adoption demand.”

Morgan Stanley invoked the “Jevons Paradox” from economics to explain the long-term effects: while technological efficiency improvements lower unit costs, they often lead to overall demand expansion due to reduced usage thresholds.

KC Rajkumar, an analyst at Lynx Equity Strategies, pointed out that some media reports may have exaggerated the findings. Current inference models already make wide use of 4-bit quantized data, and Google's claimed "8-fold performance improvement" is measured against outdated 32-bit models. "However, due to extreme supply tightness, this will hardly reduce the demand for memory and flash storage over the next 3-5 years," Rajkumar wrote, adding that advanced compression technology merely eases bottlenecks and does not destroy demand for DRAM/flash memory.

Wells Fargo analyst Andrew Rocha noted that the existence of compression algorithms has never fundamentally changed the overall scale of hardware procurement. By significantly lowering the service cost per query, such technologies can allow models that could only run on expensive cloud clusters to migrate to local environments, effectively lowering the threshold for AI scaling deployment.

The four hyperscale companies, led by Amazon and Google, plan to spend about $650 billion this year on building data centers and purchasing Nvidia’s AI accelerators and related storage chips. SK Group Chairman Chey Tae-won recently stated that the tight supply of storage chips will persist until 2030.

From a supply chain perspective, server DRAM demand is expected to grow by 39% in 2026, and HBM demand is projected to increase by 58% annually. The optimization effects of TurboQuant may be drowned out by industry growth waves.

Mizuho technology expert Jordan Klein believes that the current pullback in memory stocks is more like an “entry opportunity” rather than a turning point for stock prices. Klein wrote in a report that after experiencing a strong rise in 2025 and early 2026, bulls in memory stocks began to waver. Although the memory industry is known for its severe cyclical fluctuations, he emphasized that the recent sell-off aligns with a familiar pattern.

Mizuho stated that such sell-offs occur every few months and are not a signal of a market peak or a reason for selling. In fact, buying on dips can be profitable.

(Source: Securities China)
