Discover how a new chip-level vulnerability endangers AI privacy, exposing sensitive training data and creating new security risks.
A recent study by researchers at North Carolina State University has revealed a new point of exposure in artificial intelligence (AI) systems, this time not in the software but in the hardware itself. The vulnerability, dubbed GATEBLEED, allows attackers to extract information about the training data used by an AI model, even without direct access to its memory. The finding raises fresh concerns about data privacy and security in environments that increasingly depend on specialized accelerators.
What’s striking about this threat is that it takes advantage of a chip design feature known as power gating, a technique that saves energy by shutting down parts of the processor when they are not in use. But much like footprints left in the sand, this feature leaves measurable traces in execution times, and those traces can be analyzed to deduce whether certain data was present when the model was trained.
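To make that footprint concrete, here is a minimal, purely illustrative Python sketch. The `ToyAccelerator` class and its simulated wake-up delay are invented for this example and are not the researchers’ code; the point is simply that a unit that has been power-gated (“asleep”) takes measurably longer to answer its first call than subsequent ones:

```python
import time
import statistics

def time_call(fn, *args, repeats=50):
    """Median wall-clock latency (ns) of calling fn(*args) repeatedly."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter_ns()
        fn(*args)
        samples.append(time.perf_counter_ns() - start)
    return statistics.median(samples)

# Toy stand-in for an on-chip accelerator: the first call after an idle
# period pays a simulated wake-up cost, later calls do not.
class ToyAccelerator:
    def __init__(self, wake_ns=200_000):
        self.asleep = True          # pretend the unit starts power-gated
        self.wake_ns = wake_ns

    def infer(self, x):
        if self.asleep:                      # power-gated unit must wake up first
            time.sleep(self.wake_ns / 1e9)   # simulated wake-up latency
            self.asleep = False
        return x * 2                         # dummy computation

acc = ToyAccelerator()
cold = time_call(acc.infer, 1, repeats=1)   # first call: includes the wake-up cost
warm = time_call(acc.infer, 1)              # steady state: the unit is already awake
print(f"observed cold-start penalty: {cold - warm} ns")
```

The key point is that nothing in this measurement requires special privileges: an ordinary program only needs access to a high-resolution timer.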
How this exploit threatens AI privacy and exposes critical vulnerabilities in modern chip technology
The technique described in the study, which will be officially presented at the IEEE/ACM MICRO 2025 conference, exploits the fluctuations in power consumption that power gating produces in the chip’s AI accelerators. Although the mechanism is designed for efficiency, the changes it introduces in processing times act as a covert channel. Through this channel, malicious software running without administrator privileges can deduce with great precision which data was used to train a model.
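As a rough intuition for how such a timing signal can be turned into a yes/no answer about training data, consider the toy decision rule below. The `membership_guess` helper and the calibration numbers are hypothetical; the actual attack builds a far more careful statistical model of the GATEBLEED channel:

```python
import statistics

def membership_guess(latency_ns, member_lat, nonmember_lat):
    """Classify a candidate record as 'seen in training' if its inference
    latency is closer to latencies observed for known training members
    than to those observed for known non-members."""
    d_member = abs(latency_ns - statistics.median(member_lat))
    d_nonmember = abs(latency_ns - statistics.median(nonmember_lat))
    return d_member < d_nonmember

# Hypothetical calibration latencies (ns), gathered beforehand by timing
# inference on records whose membership status the attacker already knows.
member_lat = [410_000, 405_000, 398_000]
nonmember_lat = [620_000, 633_000, 615_000]

print(membership_guess(402_000, member_lat, nonmember_lat))  # True  -> likely a member
print(membership_guess(640_000, member_lat, nonmember_lat))  # False -> likely not
```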
The implications are profound: the channel could be used, for example, to check whether a company has trained on data obtained without consent or otherwise legally protected, exposing it to litigation and reputational damage. It could also reveal private information about users whose data was included in the training sets.
The attack can even identify which sub-model, or “expert,” within an AI architecture made a given decision, a relevant detail in complex systems such as Mixture of Experts models, which distribute tasks across multiple specialized modules. In the wrong hands, this information could be used to develop more sophisticated attacks or to manipulate the system’s behavior.
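To illustrate that last point, here is a hypothetical sketch of how an observer might map a measured latency to one of several experts, assuming they have previously profiled each expert’s typical response time (all names and numbers are made up for the example):

```python
import statistics

# Hypothetical per-expert latency profiles (ns), calibrated in advance by
# triggering each expert with inputs the observer controls.
expert_profiles = {
    "expert_0": [210_000, 205_000, 214_000],
    "expert_1": [480_000, 472_000, 491_000],
    "expert_2": [350_000, 360_000, 344_000],
}

def guess_expert(latency_ns, profiles):
    """Pick the expert whose median latency is closest to the observation."""
    return min(profiles,
               key=lambda name: abs(latency_ns - statistics.median(profiles[name])))

print(guess_expert(355_000, expert_profiles))  # -> "expert_2"
```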

Limitations of current defenses
Unlike other common threats in the digital realm, such as code flaws or misconfigurations, GATEBLEED cannot be effectively addressed with traditional software solutions. This is because the source of the problem is in the physical design of the chip. Experts warn that today’s software-centric defenses simply aren’t enough.
Modifying the chip design to block these types of attacks would involve redesigning key components, which is not only costly but can also affect performance, power consumption, and delivery times for new products. Other possible solutions, such as microcode updates or operating system-level tweaks, might help, but they don’t completely eliminate the risk.
It’s as if the problem is in the foundation of an already built house: you can reinforce the walls or put alarms on the doors, but if the crack is at the base, the risk persists.
Legal and ethical implications
The discovery of GATEBLEED not only poses technical challenges but also raises legal and ethical questions. If an attacker can prove that a model was trained on unauthorized data, it could open a Pandora’s box of lawsuits and regulation. That possibility would be a game-changer for many tech companies, which would need to be able to clearly demonstrate the provenance and proper use of their training data.
Additionally, these types of vulnerabilities could fuel users’ concerns about how their personal data is used in AI systems. While many models claim to anonymize and protect information, GATEBLEED suggests that it might be possible to reconstruct some of that information from subtle clues in the hardware’s behavior.
The challenge of protecting the entire AI value chain
In recent years, the security of AI systems has focused primarily on software, from protecting models against tampering to preventing the use of falsified data. But this new finding forces us to look further, towards comprehensive protection that includes hardware, infrastructure, and training methods.
Cybersecurity specialists are already warning about the need for a cross-cutting approach that combines technical audits, collaboration between chip manufacturers, cloud service providers, and model developers, and clear rules on liability when failures occur.
Put simply, AI is not only the brain that processes information but also the body that supports it. If that body has weak spots, however small they may seem, they can become entry routes for threats that compromise the entire system.