The rapid rise of AI is a phenomenon to behold. But behind its allure are well-kept secrets some of which are slowly coming to light. At Black Hat USA 2024, Zenity, a company that designs security controls for enterprise chatbots, claimed that all defaults in Microsoft Copilot Studio, the drag-and-drop tool for building bespoke AI assistants, are insecure.
Copilot has proven its worth as a very handy enterprise tool with clever, low-code capabilities. Copilot Studio extends Microsoft’s AI technology to customer relationship management (CRM) and enterprise resource planning (ERP) where non-technical administrators can take advantage of ready-made models to create custom conversational bots for internal use.
Fine-tuned with internal business data, these bots are extremely useful for targeting specific enterprise use cases. But they come with serious holes in the security controls.
Zenity CTO, Michael Bargury, in an interview with The Register, said that building a safe Copilot Studio bot is tricky. The platform’s loose controls make the chatbots vulnerable by design.
If successful, an adversary can potentially take command of a chatbot and provide false outputs for its enterprise users, while harvesting credentials from the backend.
Zenity claimed that they have found over 3000 Copilot bots online that were meant to strictly be used inside organizations . Due to Copilot Studio’s weak security defaults, these bots are discoverable on the Internet, and accessible to the public.
In the presentation, Zenity showed how using malicious code injections, these chatbots can be made to expose sensitive company information.
Stephen Foskett and Tom Hollingsworth go over this in last week’s Rundown where they talk about the dangers of relying on chatbots without putting safety controls in place. Check it out to learn why businesses should be wary, and what needs to be done before deploying a copilot in the environment.