Do We Expect Too Much or Too Little of AI?

Generative artificial intelligence (AI) is garnering a lot of attention these days. Although impressive, it isn’t perfect, and as a result we run the risk of underestimating the potential impact of this swiftly improving technology.

CBC recently published “We Asked an AI Questions about New Brunswick. Some of the Answers May Surprise You,” an article about artificial intelligence. It’s a cute read that introduces people to ChatGPT and what you can do with it.

For the article, CBC asked the AI a series of questions, and the answers it gave were pretty good… but not perfect. I particularly liked the confusion between Kiefer Sutherland and Donald Sutherland (his father). To be fair, many people would make the same mistake, and likely don’t even know the two are related, or even Canadian. Be that as it may, the AI was close.

Do We Expect Too Much from AI?

My gut feeling is that the AI got about 90% of it right. Certainly, the answers it provided would give someone a very good start on an article or an essay. With just a bit of work, someone could easily clean up the text and put it into their own voice. In my mind, the CBC article did a great job of presenting the potential of this technology in a manner that is easily digestible by the public. Kudos to them.

But as I said, the AI didn’t produce a perfect answer. It was much better than what I would expect from my daughter, who is currently in Grade 7, and roughly what I’d expect from someone in Grade 10 or 11. This tells me that ChatGPT has effectively added one more challenge for teachers: ensuring that the essays their students hand in are actually written by them and not by an AI. Granted, teachers already have to contend with kids copying material from the web or paying others to write essays for them. But ChatGPT’s ease of use, combined with the fact that it is currently free, takes the potential for cheating to a whole new level.

Do We Underestimate the Potential of AI?

The CBC article gently pokes fun at ChatGPT because some of the answers it gave are wrong. In some ways, the article is pointing out that AI isn’t there yet, particularly for professional writing. This is comforting if you make a living from writing, something that has become harder and harder over the years. But is this false comfort? Yes, it likely is.

The answers weren’t perfect, but they were pretty good, and a lot better than what older AI technologies were capable of. ChatGPT is currently based on GPT-3, a model with 175 billion parameters. In general, the more parameters a model has, the better its output tends to be. The earlier GPT-2 was far smaller, topping out at roughly 1.5 billion parameters, and although it produced astounding results it really wasn’t ready for primetime. GPT-3, on the other hand, clearly is.

I believe that ChatGPT is a warning of things to come. People are using AI to write letters, poems, stories, and even cover letters for resumes. The results may not be at the level of what professionals would produce, but they’re certainly at the level that average people are capable of. That’s really impressive, which is why AI has been getting so much attention from the general public lately.

Looking into the Future

GPT-4 is due to be released later this year, and it is projected to use over a trillion parameters. I’m guessing that GPT-4 will be a devastating game changer for writers. Similarly, we are seeing other “creatives,” in particular artists and musicians, also being impacted by AI technologies. But it’s not just creatives: AI is being applied in HR, in project management, in software development, and in many other white-collar professions. The world is changing, and all of these fun platforms we’re playing with now, such as ChatGPT, Midjourney, and DALL-E, might prove to be early warnings of dire things to come. Are we ignoring the canary in the coal mine?

Scott Ambler

Scott is the Vice President, Chief Scientist of Disciplined Agile at Project Management Institute. Scott leads the evolution of the Disciplined Agile (DA) tool kit and is an international keynote speaker. Scott is the (co-)creator of the Disciplined Agile (DA) tool kit as well as the Agile Modeling (AM) and Agile Data (AD) methodologies. He is the (co-)author of several books, including Choose Your WoW!, Refactoring Databases, Agile Modeling, Agile Database Techniques, and The Object Primer 3rd Edition. Scott blogs regularly at ProjectManagement.com and he can be contacted via PMI.org.