Data Is Risky Business: Designing Effective Governance for Data and AI

The CIA has some interesting things to say about data governance in their “Simple Sabotage Field Manual.” So much so that I think this historic document from the era of the Office of Strategic Services, the precursor to the CIA, should be mandatory reading for every data governance practitioner. 

Seriously. 

The Backstory 

The Office of Strategic Services (OSS) wrote the “Simple Sabotage Field Manual” in 1944 as a guide for its agents operating behind the lines in Nazi-occupied countries. It was intended to help these operatives train “citizen saboteurs” to sabotage and disrupt the enemy, but in ways that had plausible deniability and seemed to be just part of how things were done. But done badly. 

The document was declassified in 2008 and really should have been included in every data governance how-to book and training course since then. Because it describes several practices for sabotaging organisations that we need to be mindful of when developing our data and AI governance frameworks. And it describes behaviours that we need to be conscious of that may require oversight and controls to mitigate their impact in the operation of our governance models.

It’s not that data governance leaders or data stewards are intentionally setting out to sabotage the organisation. But sometimes it might look like that to concerned stakeholders. The OSS understood that governance structures can be weaponised against governance outcomes. The most resilient data and AI governance frameworks are therefore not those with the most process, but those designed with an explicit awareness of how legitimate-looking proceduralism can hollow out accountability.

The Field Manual 

While the whole field manual is worth a read from a historical perspective, the section on “General Interference with Organisations and Production” is where we will focus our attention for this article. While the text is an artefact of its time, many of the elements in this section will be immediately familiar, and others have clear descendants in the modern data environment. 

Under the sub-heading of “Organizations and Conferences,” the field manual makes several key recommendations which we need to consider in the design of data and AI governance frameworks.

Insist on doing everything through “channels.” Never permit shortcuts to be taken to expedite decisions.

From a saboteur’s perspective, this is a clever and prudent approach to gumming up the works. From a data governance perspective, it’s an example of “form over function” and actually serves to undermine the resilience and effectiveness of governance in practice.  

It’s important to consider our “in case of emergency” procedures when designing governance processes and escalation paths. Staff acting as data stewards need to have the capacity to make effective decisions drawing on contextual, local, and relational knowledge. When we design our frameworks, we need to ensure that the structures we are putting in place can constitute and support this situated agency by providing data stewards with the mandate, information, and “safe space” to exercise judgement and take appropriate actions. 

What the OSS recognised was that by directing everything through “channels” they could turn the procedural scaffolding for decision-making against itself, causing delay and diluting accountability. In modern working environments, people may wish to delay changes or actions, or they may simply have learned to navigate the system in a way that ensures accountability for a decision, if it eventually gets made, doesn’t land back on their desk.

Neither option is helpful for the cause of good governance of data. 

When possible, refer all matters to committees for “further study and consideration.” Attempt to make committees as large as possible.

A modern equivalent is the proliferation of parallel governance frameworks for data, AI, privacy, and ethics in organisations. Siloed approaches to these overlapping mandates can create unclear decision rights, require consensus across a large number of stakeholders, and leave unclear escalation paths for issues. 

This creates the appearance of governance and oversight without effective substance, leading to action without accountability and the delay or indefinite deferral of consequential decisions. 

It may be conceptually tempting, and often politically expedient, to treat a novel requirement or technology as something which requires a novel governance forum. But, in practice, this serves to dilute focus.  

When my company Castlebridge works with clients designing data governance frameworks, one of our core design principles is NAPM (not another pointless meeting). This creates a focus on what is needed to make meetings and forums effective, and ensures clarity on the justification for any forums that are added.

Demand written orders. Misunderstand orders. Ask endless questions or engage in long correspondence about such orders. 

From a data and AI governance perspective, this tactic from the OSS field manual is something organisations need to consider in the design of their data governance frameworks and policies. If policies are written at too high a level of abstraction, or use terminology or concepts that are not clearly explained, then staff can find themselves having to seek clarification before taking action.  

This can consume data governance teams’ time answering questions, create decision-making delays that are attributed to the “drag” of governance, and push people towards unsanctioned or undocumented workarounds, or towards simply not acting until they receive an answer. Whether through workarounds undermining the objectives of the governance framework or through passive-aggressive delays being blamed on it, governance is perceived as a failure.

Smart data and AI governance teams realise this and invest time and effort “walking the talk” of data definition and documentation to make sure that terms are explained, processes are understood, and frequently asked questions are documented and codified. This needs to be a planned activity in the development of your governance framework. The goal should be to reduce the “attack surface” for intentional or coincidental sabotage of your governance framework by people who are “just asking questions.” 

Multiply procedures and clearances involved in issuing instructions. See that three people have to approve everything where one would do. 

This recommendation from the OSS manual encapsulates structural sabotage through the diffusion of accountability and responsibility. When an approval chain is long, with multiple parties holding overlapping but poorly defined authority, the practical outcome is that nobody owns a decision or its outcome.  

While regulatory mandates such as the Accountability Principle in GDPR can be seen as an attempt to counter this diffusion by mandating accountability, poorly implemented governance structures for data and AI can reproduce this symptom. If we add the “let’s kick that to a committee” tactic or the “I’m waiting for clarification” tactic, the potential to compound delay and dent the credibility of governance multiplies. 

Again, this tactic targets weaknesses in governance design that are introduced when the governance system is over-engineered from a structuralist perspective and overly constrains interpretivist situated agency of decision makers. A well-defined governance structure will ensure clarity on decision rights and responsibilities and clarity on the relative authority of decision makers. It will also identify categories of decision that absolutely require multiple sign offs as opposed to those which can be delegated to a committee of one. 
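One way to make that clarity concrete is to record decision rights as explicit, inspectable data rather than tribal knowledge. The sketch below is purely illustrative (the categories, roles, and structure are hypothetical examples, not taken from any standard or from Castlebridge’s methodology): it distinguishes decisions delegated to a single accountable owner from the genuinely high-risk categories that mandate additional sign-offs.

```python
# Illustrative sketch only: decision rights captured as data, so "who must
# approve what" is explicit. All category names and roles are hypothetical.

from dataclasses import dataclass, field

@dataclass
class DecisionRight:
    category: str                  # e.g. "routine data quality fix"
    decider: str                   # single accountable owner ("committee of one")
    required_approvers: list[str] = field(default_factory=list)  # extra sign-offs, if any

    def approvals_needed(self) -> list[str]:
        """Everyone who must sign off: the accountable owner plus any mandated approvers."""
        return [self.decider] + self.required_approvers

# Only genuinely high-risk categories get multiple sign-offs; routine
# decisions are delegated to one accountable steward.
DECISION_RIGHTS = [
    DecisionRight("routine data quality fix", decider="data steward"),
    DecisionRight("cross-border data transfer", decider="DPO",
                  required_approvers=["legal", "security"]),
]

for right in DECISION_RIGHTS:
    print(f"{right.category}: {', '.join(right.approvals_needed())}")
```

A register like this also makes over-engineering visible: if every category in the list carries three approvers, the framework has reproduced exactly the diffusion of accountability the OSS manual recommends.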

The Meta-Pattern 

The OSS / CIA understood that governance STRUCTURE can be weaponised against governance OUTCOMES. When the structure is not adaptive or where it has been defined too rigidly it is easier for those in the “resistance” who might object to the application of oversight to their functions to gum up the works by simply following the procedure and asking a few questions. 

A resilient data and AI governance framework has the requisite level of structure while recognising how proceduralism can hollow out accountability if it robs decision makers of agency. It’s not simply a question of whether the framework exists on paper, but whether the human actors in the system have the capacity and incentive to make it function as intended.

Conclusion 

Data and AI Governance systems are, at the end of the day, systems. Like all systems they need to be tested to see how they might operate under stress. And like all systems there are historic precedents for modes of failure that have been experienced or exploited by people acting against the system. 

When designing the structure for data or AI governance in your organisation, it’s worth asking how a malicious actor, or someone resisting the new regime, might fight back against the system by playing within its rules to an excessively literal extent.

Go take a look at the “Simple Sabotage Field Manual” and read up on how the OSS advised its operatives to disrupt organisations. Consider what safeguards you might put in place to detect or mitigate intentional or accidental acts of sabotage.

When was the last time you tried a PenTest on your Data Governance system? 

Daragh O Brien

Daragh O Brien is a data management consultant and educator based in Ireland. He’s the founder and managing director of Castlebridge. He also lectures on data protection and data governance at UCD Sutherland School of Law, the Smurfit Graduate School of Business, and at the Law Society of Ireland. He is a Fellow of the Irish Computer Society, a Fellow of Information Privacy with the IAPP, and has previously served on the boards of two international professional bodies. He also is a volunteer contributor to the Leaders’ Data Group (www.dataleaders.org) and a member of the Strategic Advisory Council to the School of Business in NUI Maynooth. He is the co-author of Ethical Data & Information Management: Concepts, Tools, and Methods, published in 2018 by Kogan Page, as well as contributing to works such as the DAMA DMBOK and other books on various data management topics.