Spotlight Interview With SK Hynix
What do you think will be the next big breakthrough in memory technology for AI applications?
"As AI applications, particularly those based on large language models (LLMs), continue to advance, we see that HBM remains vital because of its high bandwidth. However, we still face challenges in managing vast amount of data and its movement during AI inference. That’s where PIM comes in as a breakthrough solution. By integrating processing directly within memory, we can reduce data movement and improve both latency and energy efficiency.
We believe PIM is set to significantly boost AI performance and efficiency, helping us overcome the limitations of current memory technologies."
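To make the data-movement point concrete, here is a minimal back-of-envelope sketch (ours, not SK Hynix's) of why single-token LLM decoding tends to be memory-bandwidth-bound rather than compute-bound. The model size and hardware figures are illustrative assumptions, not vendor specifications.

```python
# Back-of-envelope: why batch-1 LLM decoding is memory-bound.
# Assumed illustrative figures: a 70B-parameter model in fp16, and a
# GPU with ~1 PFLOP/s fp16 compute and ~3 TB/s of HBM bandwidth.

params = 70e9                   # model parameters (assumed)
bytes_per_param = 2             # fp16 weights
flops_per_token = 2 * params    # one multiply-accumulate per weight

weight_bytes = params * bytes_per_param                 # bytes read per token
arithmetic_intensity = flops_per_token / weight_bytes   # FLOPs per byte moved

peak_flops = 1e15   # assumed GPU fp16 throughput, FLOP/s
bandwidth = 3e12    # assumed HBM bandwidth, bytes/s
balance_point = peak_flops / bandwidth  # FLOPs/byte needed to saturate compute

print(f"arithmetic intensity: {arithmetic_intensity:.1f} FLOPs/byte")
print(f"GPU balance point:    {balance_point:.0f} FLOPs/byte")
# ~1 FLOP/byte vs. a balance point in the hundreds: at batch size 1,
# the GPU mostly waits on memory. Moving compute into the memory, as
# PIM does, attacks exactly this data-movement bottleneck.
```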
How do you see the balance between compute and memory evolving for AI workloads?
"As GPU infrastructure has advanced, we’ve seen computing progress more rapidly than memory. But with the recent surge in AI workloads, the demands on memory have grown significantly and become more diverse, tailored to specific tasks.
Because of this, we’re starting to see a shift towards more integrated and efficient architectures that better balance compute and memory. Just like HBM has become more prominent, we expect PIM to naturally expand as well, offering us exceptional performance and energy efficiency, especially in AI inference."
What do you think the future of PIM will be like?
"As AI applications continue to expand not only in datacenter but also on-device, we’re likely to see PIM technology evolve towards integration with LPDDR memory.
This will be especially beneficial for on-device AI, where smaller batch sizes, limited area, and energy efficiency are crucial.
And even in datacenter, we believe the next generation of PIM’s power efficiency could provide significant advantages, making it a valuable technology across a broad range of AI applications."
Want to hear more from SK Hynix? Register here for your ticket to the AI Hardware & Edge AI Summit!