LLMs prone to data poisoning and prompt injection risks, UK authority warns
The UK’s National Cyber Security Centre (NCSC) is warning organisations to be wary of the imminent cyber risks associated with the integration of Large Language Models (LLMs) — such as ChatGPT — into their business, products, or services. In a set of blog posts, the NCSC emphasised that the global tech community doesn’t yet fully grasp LLMs’ capabilities, weaknesses, and (most importantly) vulnerabilities. “You could say our understanding of LLMs is still ‘in beta’,’’ the authority said. One of the most extensively reported security weaknesses of existing LLMs is their susceptibility to malicious “prompt injection” attacks. These occur when a…
This story continues at The Next Web
from LatestTechyTalks
Comments
Post a Comment