British authorities highlight cybersecurity risks associated with AI chatbots.

British officials are sounding a cautionary note regarding the integration of AI-driven chatbots into businesses, warning that recent research has highlighted their vulnerability to manipulation for harmful purposes.

In forthcoming blog posts, the National Cyber Security Centre (NCSC) of the United Kingdom has expressed concerns about the security implications associated with algorithms that can generate human-like interactions, often referred to as large language models (LLMs).

These AI-powered tools are gaining early adoption as chatbots, with potential applications extending beyond internet searches to encompass customer service and sales calls. The NCSC points out that this expansion could introduce risks, particularly when LLMs are integrated into various aspects of an organization’s business processes. Researchers have discovered vulnerabilities in chatbots, demonstrating how they can be tricked into executing rogue commands or circumventing their built-in safeguards.

For example, a bank’s AI-driven chatbot might inadvertently facilitate an unauthorized transaction if a hacker crafts their query cleverly.

The NCSC emphasizes the need for caution when implementing services that rely on LLMs, likening the approach to the careful handling of beta software releases. They suggest organizations should refrain from allowing these tools to conduct transactions on behalf of customers and should exercise a degree of skepticism and oversight.

Around the world, authorities are grappling with the proliferation of LLMs, including models like OpenAI’s ChatGPT, which are being integrated into diverse services, from sales to customer support. The security implications of AI are a growing concern, with U.S. and Canadian authorities acknowledging the adoption of this technology by hackers.