ChatGPT and the Commoditization of Expertise

There’s been a lot of chat about ChatGPT, the new AI chatbot developed by a company called OpenAI. Indeed, people seem to be talking about little else, as far as I can see. So let me put in my two cents’ worth here, for public consumption and consideration.

First, for the one or two people reading this who may not know what ChatGPT is, let me explain. In fact, let me let ChatGPT introduce itself! Opening a web browser, I go to chat.openai.com, click in the text box at the bottom of the screen and type:

What is ChatGPT?

Which generates the response:

ChatGPT is an AI-powered language model developed by OpenAI. It uses deep learning techniques to generate human-like text based on the input it receives. It has been trained on a large corpus of text data and can respond to a wide range of prompts, from answering questions to generating creative writing.

ChatGPT is essentially a natural language processing (NLP) system. It’s designed to parse, understand, interpret, and create natural language text for any application that needs this capability. If you ask ChatGPT “What can ChatGPT be used for?” it will give you several potential applications, including the following (a brief sketch of how a developer might call it programmatically appears after the list):

  • The ability to create chatbots, or “virtual assistants,” that can interact with customers in a human-like manner and help answer questions or direct them to useful information (like FAQs).
  • The ability to generate text based on user prompts, including news articles, stories and poetry.
  • The ability to analyze text and generate a “sentiment analysis,” categorizing the sentiment of a given text as positive, negative or neutral.
  • The ability to translate text from one language to another.
  • The ability to generate answers to questions posed by users.

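To make the first of those use cases a bit more concrete, here is a minimal sketch (in Python) of how a developer might send a single prompt to the model behind ChatGPT. This is purely illustrative, not a recommendation: it assumes OpenAI’s public chat-completions HTTP endpoint, a model name (“gpt-3.5-turbo”) that may well change over time, and an API key supplied via an environment variable.

    import os
    import requests

    API_URL = "https://api.openai.com/v1/chat/completions"
    API_KEY = os.environ["OPENAI_API_KEY"]  # assumes your key is set in this environment variable

    def ask_chatgpt(prompt: str) -> str:
        """Send one user prompt to the chat-completions endpoint and return the reply text."""
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={
                "model": "gpt-3.5-turbo",  # illustrative model name; may differ by the time you read this
                "messages": [{"role": "user", "content": prompt}],
            },
            timeout=30,
        )
        response.raise_for_status()
        # The reply text is nested inside the first "choice" of the JSON response.
        return response.json()["choices"][0]["message"]["content"]

    print(ask_chatgpt("What is ChatGPT?"))

A few lines of code, in other words, are all it takes to put ChatGPT-style text generation behind a chatbot, a web page, or a publishing pipeline; which is exactly why the next point matters.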
It’s the last of these points (generating answers to questions) that appears to be causing the most controversy. ChatGPT can indeed generate answers to questions, but those answers are often problematic. ChatGPT gets its information from a “knowledge base”: a large corpus of textual material culled from the Internet prior to 2022. This material is not necessarily fact-based and is likely to be biased and self-serving, and ChatGPT does no actual fact-checking. Even worse, ChatGPT has a disturbing tendency to simply make up facts when it needs them, and may even invent fraudulent attributions to support these made-up “facts”. This is referred to as “hallucination”.

Some websites, including CNET and Bankrate, have discovered this limitation the hard way. After publishing dozens of articles generated by ChatGPT, they have had to correct or remove articles in which factual errors were found. Here’s a personal example: when I ask ChatGPT for a list of books written by Larry Burns, it returns a list of titles, none of which were written by me (or, as far as I can tell, by anyone else named Larry Burns), and none of my own titles are on the list. ChatGPT will also tell you that I’ve published articles in the Wall Street Journal, the New York Times, and Forbes, none of which is true.

Aside from not being able to trust the veracity of ChatGPT’s answers, there is also the question of how valuable those answers might be. Since the answers are derived from an Internet-sourced knowledge base, they are likely to reflect nothing more than the conventional wisdom on any given subject, and then only in the most general terms. It’s been observed that the more specific a ChatGPT answer is, the more likely it is to be wrong. And, of course, ChatGPT’s answers are likely to be out of date, since it references a knowledge base last updated at least a year ago.

But all this ignores a larger and more fundamental issue: AI bots have no knowledge of the real world, and no real-life experience. They cannot tell a truth from a falsehood, or right from wrong. They are incapable of logic or reason. All they “know” is what their users tell them (which may not be true), and what they can glean from a textual analysis of their knowledge base. So, an AI bot like ChatGPT is not likely to give you a truthful, cogently-reasoned, well-researched answer to a question. Instead, it will give you generated text suggesting what an answer to that question might look like.

Another way of putting this is that ChatGPT’s textual output should be regarded as nothing more than a template for an answer, rather than the answer itself. It’s still up to the (human) user to fill in the blanks with well-researched arguments, references, citations, and factual data.

Which brings us to the main point of this article: There’s a great deal of discussion taking place about whether (or when) AI bots might replace human workers in a number of different jobs and industries. It’s been suggested, for example, that AI might someday replace software developers, web designers, authors, editors, proofreaders, customer service people, loan officers, and even lawyers and doctors. Will it? Probably not, and almost certainly not without some degree of human involvement and assistance. There are a number of issues to be resolved with AI, including:

  • Liability. AI bots make entertaining toys and are even useful in situations where they can be used without undue risk. But brace yourself for the lawsuits when a customer service bot gives a customer a wrong answer, or a bot-created news article exposes a publisher to a libel suit, or a bot-generated legal contract exposes someone to a financial penalty, or a bot-generated computer program causes a major service interruption, or bot-generated financial transactions impact stock valuations (this has already happened, in October of 1987).
  • Appropriateness. As AI bots become more mainstream, there is increasing risk that technology will be asked to make decisions that are inappropriate for machines to make. Years ago, a State government embarked on a multi-million-dollar project to use AI to determine which individuals and families qualified for welfare benefits. The project was a dismal failure. In this same State, a computer algorithm that determined sentencing for prisoners was found to have released several dangerous offenders years early. Do we really want machines deciding whose kids go to bed hungry at night? Or diagnosing medical conditions and recommending treatment? Or determining sentencing for prisoners? Or making investment decisions? The answer is: probably not without some degree of human oversight.
  • Manipulation. As people are forced to interact more and more with AI-based chatbots (e.g., in customer service scenarios, applications, and Internet searches), they will become more vulnerable to being manipulated or deceived by the controllers of this technology. Louis Rosenberg describes this as “the deliberate manipulation of individuals through the targeted deployment of agenda-driven conversational AI systems that persuade users through convincing interactive dialog.”[i] Rosenberg goes on to warn that “we will soon find ourselves in personalized conversations with AI-driven spokespeople that are (a) indistinguishable from authentic humans, (b) inspire more trust than real people, and (c) could be deployed by corporations or state actors to pursue a specific conversational agenda, whether it’s to convince people to buy a particular product or believe a particular piece of misinformation.” We have already noted that AI does not understand the real world; it cannot distinguish truth from lies or right from wrong. The danger is that AI can appear capable of such things. Even Blake Lemoine, a former Google software engineer, publicly argued that one of Google’s AI chatbots was sentient.
  • Disinformation. The capability of AI-based bots to create, publish, and spread disinformation is alarming a great number of people. Dr. Jeffrey Funk, for one, has warned that, when supplied with questions loaded with disinformation, ChatGPT can produce convincing, grammatically correct content within seconds, content that is much harder to identify as misinformation. And bots could spread such disinformation across the Internet in a matter of seconds.
  • Productivity. The whole point of AI, presumably, is to increase human productivity, enabling more work (and higher-valued work) to be done by the same number of people. But, as Dr. Funk has noted, automation has not significantly increased worker productivity: productivity grew by an average of 2.7% a year from 1948 to 1986, but by only 2% a year from 1987 to 2022. At the same time, people now spend an average of 6 hours a day on time-wasting activities such as surfing the web, playing games, watching videos, and blathering on social media.[ii]
  • Insight. AI bots might be able to recombine and regurgitate existing data in various creative ways (they can produce articles, web pages, and even poetry), but they are not capable of intuitive insight; that is, the creation of new knowledge from existing knowledge. For example, in my new book Data Model Storytelling, I explore the process of data modeling by drawing upon concepts from a number of different (and seemingly unrelated) disciplines, including Agile, Human-Centered Design, Cognitive Behavioral Therapy, Business Process Reengineering, Stakeholder Economics, storytelling (and other creative arts) and Native American Shamanism. This is something that only the human brain can do, and it provides a value that AI can’t match.
  • Intellectual Property. Another issue involves AI’s tendency to misuse intellectual property. Although ChatGPT will tell you (just ask it!) that it respects intellectual property and cites its sources, several people have commented on its tendency to regurgitate material from published sources without attribution or acknowledgement. In my own case, I’ve noticed that ChatGPT returns material from my book Building the Agile Database (e.g., the definitions of “Logical-Physical Divide” and “Virtual Data Layer,” terms that I coined and use in the book) without attribution or citation. Getty Images is currently suing at least one AI vendor for allowing its product to use copyrighted images without permission.

But the larger question, the real elephant in the room, isn’t so much whether or why or how AI may replace human workers, but why certain vested interests want it to. Over the years, we have seen an increasing push to replace experienced and highly paid human knowledge workers with lower-paid “resources” such as contractors, temps, and interns. Now there is a push to replace those same knowledge workers with AI bots, which don’t need to be paid, don’t take vacation or sick leave, and don’t have car trouble. This is what I’ve come to refer to as “the commoditization of expertise”: the use of automation not to increase worker productivity, but to increase corporate profits and shareholder returns by turning knowledge and expertise into disposable commodities.

I’m not against automation per se; in fact, one of the tenets of my approach to Agile Data (as explained in my book Building the Agile Database) is: “Automate as much of the database development process as possible.”[iii] But the point of automation should never be to replace human thinking; it should be to free human thinking for higher-value activities.
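As a purely hypothetical illustration of that tenet, here is the kind of small script that might automate one routine piece of database development: generating CREATE TABLE DDL from a declarative model definition, so the modeler’s time can go to the design decisions that actually require judgment. The model format, table name, and column names below are invented for illustration only.

    # Hypothetical sketch: generate CREATE TABLE statements from a simple declarative
    # model, automating routine DDL typing. All names and the model format are invented.
    MODEL = {
        "Customer": {
            "customer_id": "INTEGER NOT NULL PRIMARY KEY",
            "customer_name": "VARCHAR(100) NOT NULL",
            "created_date": "DATE NOT NULL",
        },
    }

    def generate_ddl(model: dict) -> str:
        """Turn the declarative model into CREATE TABLE statements."""
        statements = []
        for table, columns in model.items():
            column_lines = ",\n    ".join(
                f"{name} {definition}" for name, definition in columns.items()
            )
            statements.append(f"CREATE TABLE {table} (\n    {column_lines}\n);")
        return "\n\n".join(statements)

    print(generate_ddl(MODEL))

The automation here does the typing; deciding what the model should contain in the first place remains a human responsibility.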

Commoditization in any form leads only to mediocrity, and to the replacement of higher-valued products and services with lower-valued ones. Go to any grocery store, and you’ll increasingly see higher-value name-brand products pulled from the shelves and replaced with lower-value generic “store brands”. This is what we can expect from AI as it becomes a more prevalent part of more and more technologies: generic thought, generic ideas, and generic expertise; the sort of C-minus work turned in by mediocre students who use ChatGPT to write their assignments so they won’t have to be bothered with thinking about or understanding anything.

To the extent that ChatGPT and similar AI bots contribute to fueling our intellectual “race to the bottom” in the pursuit of higher corporate profits, they (and their creators) will be doing an immense disservice to humankind.


[i] Rosenberg, Louis. “The Profound Danger of Conversational AI”. VentureBeat blog post, February 4, 2023. https://venturebeat-com.cdn.ampproject.org/c/s/venturebeat.com/ai/the-profound-danger-of-conversational-ai/amp/.

[ii] Funk, Jeffrey, and Gary Smith. “Large Language Models Can Entertain But Are They Useful?”. Mind Matters News (online blog), January 16, 2023. https://mindmatters.ai/2023/01/large-language-models-can-entertain-but-are-they-useful/.

[iii] Burns, Larry. Building the Agile Database (New Jersey: Technics Publications, 2011), pp. 70-76.

Larry Burns

Larry Burns has worked in IT for more than 40 years as a data architect, database developer, DBA, data modeler, application developer, consultant, and teacher. He holds a B.S. in Mathematics from the University of Washington, and a Master’s degree in Software Engineering from Seattle University. He most recently worked for a global Fortune 200 company as a Data and BI Architect and Data Engineer (i.e., data modeler). He contributed material on Database Development and Database Operations Management to the first edition of DAMA International’s Data Management Body of Knowledge (DAMA-DMBOK) and is a former instructor and advisor in the certificate program for Data Resource Management at the University of Washington in Seattle. He has written numerous articles for TDAN.com and DMReview.com and is the author of Building the Agile Database (Technics Publications LLC, 2011), Growing Business Intelligence (Technics Publications LLC, 2016), and Data Model Storytelling (Technics Publications LLC, 2021).
