Shadow AI – Rising from the Penumbra

Over the last couple of years, AI has swept through our organizations. While the benefits of using AI to augment your organization's ability to achieve its business goals are abundant, so are the risks of unmanaged AI use. To complicate matters further, AI doesn't always arrive through the front door; sometimes it sneaks its way in. Enter shadow AI.

Much has been written about shadow AI in recent years. This article focuses on using policy, procedures, and guidelines to de-risk AI in organizations and to manage two common cases: where AI use has sprung up organically among users, and where AI is the engine running certain aspects of commercial off-the-shelf (COTS) solutions.

Policy, Procedures, and Guidelines

Our most potent tool for managing AI is the trusty policy, backed by procedures and guidelines. Even if your organization is not typically managed by policy, writing out what a policy framework would look like is a helpful exercise. The key things to define and document when governing AI in our organizations are as follows:

  • AI Owner: Ultimately, who in your organization is responsible for the operation and results of the AI implementation? 
  • AI Model: Which model are we running, the specific LLM or algorithm at play? How was it trained? Is there bias in the model?
  • Risk: What is the risk to our organization from the results of the AI model? Will we recommend a product to a customer who isn't interested? Is there potential reputational risk if bias isn't considered? Could we end up in the news, or fined, because of our use of AI?
  • Security: Do the results of the AI stay local? Are our interactions with the AI, and its results, being shared and used to train the model for other parties?
  • Use: Is this AI tool embedded in an existing solution? What business processes is it being used in?

There are many other factors we could consider, and you'll want to be specific about what you need to document. When writing a policy, focus on the high-level beliefs of your organization. When you support that policy with procedures, include the rules for how the policy is put into practice; that's where the documentation items above become critical. What exactly do we document? How do we make it available across the business? And how do we ensure that all our AI use follows our documentation procedures? Putting the rigor in place to do these things will save your organization considerable pain later.
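To make this concrete, here is a minimal sketch of what one entry in an AI inventory might look like if captured as structured data. The field names mirror the list above, but the schema itself (AISystemRecord and its fields) is a hypothetical illustration, not a standard.

from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    # Illustrative tiers; map these to your organization's risk language.
    LOW = "low"        # e.g., a poor product recommendation
    MEDIUM = "medium"  # e.g., reputational exposure from unexamined bias
    HIGH = "high"      # e.g., regulatory fines or negative press


@dataclass
class AISystemRecord:
    # One entry in the organization's AI inventory (hypothetical schema).
    name: str                   # what the tool is called internally
    owner: str                  # AI Owner: accountable for operation and results
    model: str                  # the specific LLM or algorithm at play
    training_notes: str         # how it was trained; known bias concerns
    risk: RiskLevel             # assessed impact if the results go wrong
    data_stays_local: bool      # Security: do prompts and results leave our control?
    trains_vendor_model: bool   # are our interactions training someone else's model?
    business_processes: list[str] = field(default_factory=list)  # Use: where it's embedded


# Example entry for a vendor tool with an embedded model
record = AISystemRecord(
    name="Proposal Assistant",
    owner="Director, Sales Operations",
    model="Vendor-hosted LLM (unspecified)",
    training_notes="Training data undisclosed; bias review pending",
    risk=RiskLevel.MEDIUM,
    data_stays_local=False,
    trains_vendor_model=True,
    business_processes=["proposal drafting", "RFP responses"],
)

Capturing each AI system this way makes the procedure auditable: if a tool isn't in the inventory, it hasn't been reviewed.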

Shadow AI – The Call Is Coming From Inside the House 

One of the impacts of shadow AI observed in organizations comes from well-meaning employees implementing their own AI solutions without divulging that use to the organization. Depending on the use case, this can mean using something like ChatGPT to help write a proposal or analyze data. While an LLM can do a fantastic job at tasks like these, the risks to the organization are many: Is there confidential information in the prompt or the response? Providing clear procedures and guidelines for the people in our organization is key to preventing issues from hidden AI use.

Some organizations choose to simply turn off access entirely. This will not prevent the use of AI, however; users are a crafty lot, and the shadows can still creep up on us. People bring phones to work, along with many other ways to reach resources outside the corporate firewall. Instead of prohibiting the activity and forcing it underground, having clear policies, procedures, and guidelines in place to help and enable people will do far more to de-risk AI in an organization.

Shadow AI – It Was Here All Along 

Another form of shadow AI is an undocumented (or under-documented) feature in commercial off-the-shelf (COTS) software. Various AI governance frameworks include an AI review in the requisition process, and you may want to consider something similar in your organization's policy. Left unexamined, some software solutions could be using your data to train a model without your knowledge; depending on the nature of your business, the risk could range from a slight annoyance to massive fines. A review of commercial software can also uncover cases where a tool is advertised as AI-powered but is merely a clever algorithm that does no actual learning or training at all, highlighting a vendor that claims its solution does more than it really does.
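If you add an AI review to your requisition process, the questions can be as simple as a checklist that must be answered before a purchase proceeds. The sketch below is a hypothetical questionnaire drawn from the concerns above; the names (VENDOR_AI_REVIEW, review_complete) are illustrative, not taken from any particular framework.

# Hypothetical AI review checklist for the requisition process.
# The questions mirror the concerns above; the structure is illustrative only.
VENDOR_AI_REVIEW = [
    "Does the product contain AI or machine-learning features?",
    "Is our data used to train the vendor's models, and can we opt out?",
    "Where are our prompts, inputs, and outputs stored and processed?",
    "Can the vendor substantiate its 'AI-powered' claims (model, training)?",
    "What contractual terms cover data use, retention, and liability?",
]


def review_complete(answers: dict[str, str]) -> bool:
    # A purchase review is complete only when every question has a
    # non-empty answer on file.
    return all(answers.get(question, "").strip() for question in VENDOR_AI_REVIEW)

Even a lightweight gate like this surfaces the embedded-AI cases before the contract is signed, rather than after your data has already left the building.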

Conclusion 

Ultimately, a policy framework will give your organization clarity on how AI is governed. That clarity will help de-risk AI use at your organization and help in cases where AI has snuck into the operations of your business. Your call to action is simple: "What do we need to do to help our employees de-risk the use of AI at our organization?" Put some rigor and formalization around the use of AI tools in your business, and help everyone achieve their goals in the safest, most efficient way possible.

Mark Horseman

Mark is an IT professional with nearly 20 years of experience and acts as the data evangelist for DATAVERSITY. Mark moved into Data Quality, Master Data Management, and Data Governance early in his career and has been working extensively in Data Management since 2005.