10 Ways To Prevent Shadow AI Disaster

July 11, 2024 | Article

As contributing writer Mary K. Pratt shares in her CIO article, just as with the shadow IT of yore, there’s no one-and-done solution that can prevent the use of unsanctioned AI technologies or the possible consequences of their use.

However, CIOs can adopt various strategies to help eliminate the use of unsanctioned AI, prevent disasters, and limit the blast radius if something does go awry. Here, IT leaders share 10 ways CIOs can do so.

Unsanctioned AI in the workplace is putting company data, systems, and business relationships at risk.

Here are 10 ways to pivot employees’ AI curiosity toward acceptable use — and organizational value.

  1. SET AN ACCEPTABLE USE POLICY FOR AI

    A big first step is working with other executives to create an acceptable use policy that outlines when, where, and how AI can be used and reiterating the organization’s overall prohibitions against using tech that has not been approved by IT, says David Kuo, executive director of privacy compliance at Wells Fargo and a member of the Emerging Trends Working Group at the nonprofit governance association ISACA. Sounds obvious, but most organizations don’t yet have one.
  2. BUILD AWARENESS ABOUT THE RISKS AND CONSEQUENCES

    Kuo acknowledges the limits of Step 1: “You can set an acceptable use policy but people are going to break the rules.” So warn them about what can happen.

    “There has to be more awareness across the organization about the risks of AI, and CIOs need to be more proactive about explaining the risks and spreading awareness about them across the organization,” says Sreekanth Menon, global leader for AI/ML services at Genpact, a global professional services and solutions firm. Outline the risks associated with AI in general as well as the heightened risks that come with the unsanctioned use of the technology.

    Kuo adds: “It can’t be one-time training, and it can’t just say ‘Don’t do this.’ You have to educate your workforce. Tell them the problems that you might have with [shadow AI] and the consequences of their bad behavior.”

  3. MANAGE EXPECTATIONS

    Although AI adoption is rapidly rising, research shows that confidence in harnessing the power of intelligent technologies has gone down among executives, says Fawad Bajwa, global AI practice leader at Russell Reynolds Associates, a leadership advisory firm. Bajwa believes the decline is due in part to a mismatch between expectations for AI and what it actually can deliver.

    He advises CIOs to educate on where, when, how, and to what extent AI can deliver value.

    “Having that alignment across the organization on what you want to achieve will allow you to calibrate the confidence,” he says. That in turn could keep workers from chasing AI solutions on their own in the hopes of finding a panacea to all their problems.

  4. REVIEW AND BEEF UP ACCESS CONTROL

    One of the biggest risks around AI is data leakage, says Krishna Prasad, chief strategy officer and CIO at UST, a digital transformation solutions company.

    Sure, that risk exists with planned AI deployments, but in those cases CIOs can work with business, data and security colleagues to mitigate risks. But they don’t have the same risk review and mitigation opportunities when workers deploy AI without their involvement, thereby upping the chances that sensitive data could be exposed.

    To help head off such scenarios, Prasad advises tech, data, and security teams to review their data access policies and controls as well as their overall data loss prevention program and data monitoring capabilities to ensure they’re robust enough to prevent leakage with unsanctioned AI deployments.
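    As an illustrative sketch only (not any particular DLP product — real programs use classifiers, data fingerprinting, and exact-match dictionaries), an outbound-prompt filter for such a program might flag text matching simple sensitive-data patterns before it reaches an external AI service:

```python
import re

# Hypothetical patterns for illustration; a production DLP program
# would use far richer detection than regular expressions.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
}

def scan_outbound_text(text: str) -> list[str]:
    """Return the names of the sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def allow_prompt(text: str) -> bool:
    """Allow the prompt only if no sensitive pattern matches."""
    return not scan_outbound_text(text)
```

    A harmless prompt passes (`allow_prompt("Summarize this memo")` is true), while one containing something shaped like a Social Security number or card number is flagged for review instead of being sent out.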

  5. BLOCK ACCESS TO AI TOOLS

    Another step that can help, Kuo says: blacklist AI tools, such as OpenAI’s ChatGPT, and use firewall rules to prevent employees from accessing them from company systems.
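    In practice this enforcement usually lives in a secure web gateway or DNS filter, but the core check is simple. A minimal sketch, with a hypothetical deny list, of the hostname match such a filter performs:

```python
from urllib.parse import urlparse

# Hypothetical deny list for illustration; real deployments manage
# this centrally in a web gateway or DNS filter, not application code.
BLOCKED_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com"}

def is_blocked(url: str) -> bool:
    """True if the URL's host is on, or a subdomain of, the deny list."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)
```

    The subdomain check matters: blocking only exact hostnames leaves obvious gaps, since most AI services expose multiple hosts under one domain.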

  6. ENLIST ALLIES IN THE EFFORT

    CIOs shouldn’t be the only ones working to prevent shadow AI, Kuo says. They should enlist their C-suite colleagues — who all have a stake in protecting the organization against any negative consequences — and get them on board with educating their staffers on the risks of using AI tools that go against official IT procurement and AI use policies.

    “Better protection takes a village,” Kuo adds.

  7. CREATE AN IT AI ROADMAP THAT DRIVES ORGANIZATIONAL PRIORITIES, STRATEGIES

    Employees typically bring in technologies that they think can help them do their jobs better, not because they’re trying to hurt their employers. So CIOs can reduce the demand for unsanctioned AI by delivering the AI capabilities that best help workers achieve the priorities set for their roles.

    Bajwa says CIOs should see this as an opportunity to lead their organizations into future successes by devising AI roadmaps that not only align to business priorities but actually shape strategies. “This is a business redefining moment,” Bajwa says.

  8. DON’T BE THE DEPARTMENT OF ‘NO’

    Executive advisers say CIOs (and their C-suite colleagues) can’t drag their feet on AI adoption, because doing so hurts the organization’s competitiveness and ups the chances of shadow AI. Yet that’s happening to some degree in many places, according to Genpact and HFS Research. Their May 2024 report revealed that 45% of organizations have adopted a “wait and watch” stance on genAI and 23% are “deniers” who are skeptical of genAI.

  9. EMPOWER WORKERS TO USE AI AS THEY WANT

    ISACA’s March survey found that 80% of respondents believe many jobs will be modified because of AI. If that’s the case, give workers the tools to use AI to make the modifications that will improve their jobs, says Beatriz Sanz Sáiz, global data and AI leader at EY Consulting.

    She advises CIOs to give workers throughout their organizations (not just in IT) the tools and training to create or co-create with IT some of their own intelligent assistants. She also advises CIOs to build a flexible technology stack so they can quickly support and enable such efforts as well as pivot to new large language models (LLMs) and other intelligent components as worker demands arise — thereby making employees more likely to turn to IT (rather than external sources) to build solutions.
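    One common way to build the flexibility Sanz Sáiz describes is to put an abstraction layer between assistants and any one model vendor. A minimal sketch, using stand-in backends rather than any real vendor API:

```python
from typing import Protocol

class LLMBackend(Protocol):
    """Minimal interface every model-provider adapter must implement."""
    def complete(self, prompt: str) -> str: ...

class EchoBackend:
    """Stand-in backend for illustration; a real adapter would call a vendor API."""
    def __init__(self, name: str) -> None:
        self.name = name

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] {prompt}"

class AssistantRouter:
    """Routes requests to the currently configured backend, so swapping
    LLM providers is a configuration change rather than a rewrite."""
    def __init__(self, backend: LLMBackend) -> None:
        self.backend = backend

    def ask(self, prompt: str) -> str:
        return self.backend.complete(prompt)

    def swap(self, backend: LLMBackend) -> None:
        self.backend = backend
```

    Because assistants depend only on the `LLMBackend` interface, IT can pivot to a new model as worker demands arise without touching the assistants themselves.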

  10. BE OPEN TO NEW, INNOVATIVE USES

    AI isn’t new, but the quickly escalating rate of adoption is showing more of its problems and potentials. CIOs who want to help their organizations harness the potentials (without all the problems) should be open-minded about new ways of using AI so employees don’t feel they need to go it alone.

    Bajwa offers an example around AI hallucinations: Yes, hallucinations have gotten a nearly universal bad rap, but Bajwa points out that hallucinations could be useful in creative spaces such as marketing.

    “Hallucinations can come up with ideas that none of us have thought about before,” he says.

Thanks to Mary K. Pratt, contributing writer at CIO, for this article and information. The full article can be read here.