Weighing the Risks and Rewards of AI

During CDM Media’s September 2023 Houston CDO and CIO/CISO Summit, I joined a group of business and IT leaders from various industries to share perspectives and best practices. I also participated in an executive dinner and roundtable focused on artificial intelligence (AI). The general sense is that while most organizations have not yet fully embraced or understood AI, we are at an inflection point, and soon everyone will move up the adoption curve.

Thanks to the diverse range of expertise in the room, we heard many points of view that others likely haven’t considered. Since most attendees had IT or analytics backgrounds, we were arguably better informed than our business peers. Yet it was both concerning and reassuring to observe that even this group still grappled with the same fundamental questions, issues, and challenges concerning AI. There is tremendous variety in how businesses weigh who, what, when, and how their organizations should adopt AI. Everyone is looking for a common, practical framework that lets non-technical business leaders make fully informed decisions about the risks and rewards of AI.

AI Risks and Rewards

Some believe an organization should not be quick to adopt generative AI, for many reasons. AI can inadvertently violate privacy, amplify bias, and lead to incorrect conclusions or information. Given its novelty, not all risks are yet known and measurable, and there is no clear regulation to govern it. Two characteristics make AI risk unique: the difficulty of explaining models and the hallucination problem (where an AI output seems believable to humans even when it is false). Both raise the question of AI trustworthiness.

Others believe they should fully embrace AI because the potential benefits will be significant and disruptive to traditional business models. AI will create order-of-magnitude internal process efficiencies. It will unlock new insights to increase market reach and improve product quality exponentially. Even though there are still significant risks and compliance issues to consider, the ease of access and public fascination with AI create a fear-of-missing-out (FOMO) effect for those who don’t adopt it.

Whether, when, and how an organization should allow or adopt AI are not one-size-fits-all questions. The answer depends highly on industry, organizational maturity, risk appetite, and applicable regulations. Regardless of a company’s current position on AI, everyone agrees that eventually every company will adopt AI, and that the most prudent thing to do is to invest seriously in data governance, security, and privacy capabilities before AI proliferates.

Senior Management and Board Questions

The consensus among business leaders is that AI is undoubtedly beneficial to every function within a company in every industry. The use cases and degree of that benefit may vary, but since virtually all organizations are at the ground level of AI adoption, there are only upsides to capitalizing on AI. And while the risks of bias, privacy, and trustworthiness are immense, every organization deals with the same challenges. It’s only a matter of time before AI capabilities become commoditized.

What top-of-mind questions are executive teams and their boards asking – or should they be asking – as they try to assess the strategic impact of AI on their organizations? Here are the common themes we gathered from the roundtable group.

1. Can AI bring transformational benefits to my organization? How will it change my industry and competitive landscape?

Because AI needs a significant amount of training data as fuel, it will benefit the companies that can build the scale and capability to manage big data, i.e., companies like FAANG (Facebook/Meta, Amazon, Apple, Netflix, and Google/Alphabet). Even for large enterprises, this is a tall order. For small and medium companies, it seems even less likely that they can reap enormous benefits from AI. Furthermore, AI may push the competitive landscape toward a monopolistic or oligopolistic structure. In every industry segment, it will favor the few large firms that can dominate by consolidating data and analytical capabilities.

Hence, the more relevant question for the board and executives is this: In a future where AI is pervasive, what business model and operating framework will enable us to capitalize on it? What can companies do to stay competitive? One area worth considering is privacy-preserving data sharing and analytics (PPDSA), a national strategy proposed by the White House Office of Science and Technology Policy in March 2023.

PPDSA includes techniques such as differential privacy, homomorphic encryption, synthetic data, secure multiparty computation, and federated learning. These techniques allow companies to explore, use, and share data securely and privately without giving it away in a raw, readable, and reusable form. Organizations can form partnerships and data marketplaces to enrich their training data, enabling them to produce AI models with far greater stability, accuracy, and confidentiality. PPDSA increases the speed and scale at which organizations can discover and access training data for AI. This is especially important for small and medium enterprises competing against the big-data incumbents.
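To make one of these techniques concrete, here is a minimal sketch of differential privacy using the Laplace mechanism, written in Python. The dataset, query, and epsilon value are illustrative assumptions made up for this article, not a reference implementation of the PPDSA strategy or of any particular library.

```python
# A minimal sketch of one PPDSA technique: differential privacy via the
# Laplace mechanism. All data and parameters here are illustrative.
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Add noise calibrated to sensitivity/epsilon so the shared answer
    limits what can be learned about any single individual's record."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Hypothetical scenario: share an aggregate count with a partner
# without exposing any individual record.
customer_ages = np.array([34, 45, 29, 61, 50, 38])   # raw data never leaves the organization
true_count = int(np.sum(customer_ages > 40))          # a counting query has sensitivity 1

epsilon = 1.0  # smaller epsilon = stronger privacy, noisier answer
shared_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=epsilon)
print(f"True count: {true_count}, privately shared count: {shared_count:.1f}")
```

The underlying trade-off, giving up a little accuracy in exchange for formal confidentiality guarantees, is the common thread across PPDSA techniques; approaches such as homomorphic encryption and secure multiparty computation make a similar trade between computational cost and privacy.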

2. Does my organization have what it takes to implement AI and drive proper adoption?

According to an IBM survey, a majority (~75%) of chief executive officers believe their organization is ready, but only 29% of senior and middle managers think the same. The reality is that most organizations are behind and have much room to improve their people’s AI skills. Many companies have taken smaller steps and aimed for quick wins, such as creating a data literacy program or a center of excellence.

Even if an organization does not yet have senior executive buy-in to invest in AI, employees are hungry for education and skills development in AI. Investment in data and AI training will increase your employees’ satisfaction and help you retain your top talent. Those who do not invest in AI education risk losing talented team members and keeping unmotivated ones.

3. Even if we see AI as a critical and urgent risk factor, there are other, more pressing issues to address, such as cybersecurity, diversity and inclusion, macroeconomic instability, or geopolitical risks. How should I prioritize and synergize my efforts to address AI alongside other strategic risks?

Indeed, issues such as cybersecurity should be top of mind for boards and executives. But they are not mutually exclusive. As mentioned above, privacy-preserving data sharing and analytics (PPDSA) technologies can help strengthen your cybersecurity posture while enhancing your ability to win with AI. PPDSA techniques sit at the intersection of risk, data science, and analytical skills. Therefore, they are a great way to promote cross-collaboration and knowledge transfer between your risk and data teams.

Investing in PPDSA education and solutions will help your cybersecurity, risk, governance, and compliance teams improve their analytical acumen. Conversely, your data science and business intelligence teams will gain a greater appreciation for risk and compliance frameworks as they work on PPDSA initiatives. It can also catalyze conversations at the board and executive governance level to address cybersecurity and AI within a unified framework.

AI is also a great way to stimulate and educate an organization on bias and fairness. Many of the fundamental root causes of machine learning bias are analogous to those of human bias: invalid assumptions, lack of data, untested hypotheses, and flawed interpretations. Business leaders can use the topic of AI bias as a springboard for scientific and rational discussions about management and human bias. At a time when issues such as diversity and inclusion can be polarizing within companies and industries, this can be an effective tool.
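As a hypothetical illustration of how a data team could turn such a discussion into something measurable, the short Python sketch below computes a demographic parity gap, the difference in approval rates between two groups. The decisions, group labels, and tolerance threshold are assumptions invented for this example, not a complete fairness audit.

```python
# Illustrative check for one simple fairness signal: demographic parity.
# The decisions and the 0.2 tolerance below are made-up assumptions.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group and the gap between them
rates = decisions.groupby("group")["approved"].mean()
gap = abs(rates["A"] - rates["B"])

print(rates)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.2:  # tolerance should be set by governance policy, not hard-coded in practice
    print("Gap exceeds tolerance: review data coverage, assumptions, and interpretation.")
```

A gap like this does not prove bias on its own, but it gives executives and data teams a shared, quantitative starting point for the kind of discussion described above.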

Conclusion

In conclusion, AI must be considered with the utmost care at the highest level of corporate governance. Whether your executive team is in the pro or the con camp, you have nothing to lose and much to gain from elevating the topic and tackling it head-on. The debates around AI will push organizations to improve their cybersecurity and privacy controls. They will also drive progress on talent development and employee engagement. The only losing position is to bury our heads and not face the issue of AI with boldness.



David Hendrawirawan

David Hendrawirawan currently teaches AI governance, cybersecurity, and data privacy at the University of Texas at Austin. He also serves on the board of directors of HTRI and owns a computer programming school for children. He is an active mentor, speaker, and author in professional communities such as Founder Institute, DAMA, and IAPP. Previously, at Deloitte and Grant Thornton, David helped clients improve business outcomes using data and analytics, mitigate cyber and privacy risks, and ensure regulatory compliance. At Dell, he played strategic roles in IT enterprise architecture, finance transformation, customer data strategy, and governance.
