Unethical and unreliable: Why I'm not allowing ChatGPT in my firm

Scott Zoldi, chief analytics officer at FICO, is deeply sceptical about the talents of the infamous chatbot
Scott Zoldi

ChatGPT isn’t assisting or enhancing human creativity, it is regurgitating a configuration of the AI data it was trained on.

An AI chatbot can pass an exam given by a Wharton business school professor; a diabetes drug can help celebrities (including Elon Musk) lose dramatic amounts of weight seemingly effortlessly. ChatGPT and Ozempic, respectively, are battling it out to be the most-hyped panaceas of 2023. But are they truly panaceas? And even if they are, should we allow ourselves to rely on them?

What’s Old Is New Again

Neither OpenAI’s ChatGPT technology (Chat Generative Pre-trained Transformer) nor Ozempic is exactly new. ChatGPT is built on the GPT-3 family of language models, an upgrade of GPT-2, which was released in 2019. Ozempic (and the related drugs Wegovy and Mounjaro) has been around for years. Semaglutide, Ozempic’s active ingredient, was first approved by the U.S. Food and Drug Administration to treat Type 2 diabetes in 2017 and, as Wegovy, for chronic weight management in 2021.

And what exactly is ChatGPT?

When asked to describe itself, ChatGPT says:

  • ChatGPT is an AI language model developed by OpenAI, which is capable of generating human-like text based on the input it is given. OpenAI’s model is trained on a large corpus of text data and can generate responses to questions, summarize long texts, write stories and much more. It is often used in conversational AI applications to simulate a human-like conversation with users.

On the surface, both of these magical elixirs seem irresistible. Who wouldn’t want to have an AI chatbot write content or code? Who wouldn’t want to lose unwanted weight without feeling hungry? ChatGPT and Ozempic are two wildly different things, but they both appeal to one of humanity’s basest instincts: to get something for nothing. And that’s exactly why I believe any reliance on either of them is wrong.

Cheaters Gonna Cheat with ChatGPT

I’m not exactly a big Taylor Swift fan, but as she catchily nailed it, “Fakers gonna fake.” Likewise, cheaters gonna cheat with OpenAI’s ChatGPT. Cheaters have always been able to cheat on exams and essays, whether it was by mail-ordering old term papers pre-internet or today in myriad digital ways. In that regard, ChatGPT is just the newest tool that allows cheaters to cheat, at a staggering scale.

Call me old-fashioned, but I believe cheating is wrong. So does every learning institution, most of which have a code of honor and ethics. If you use ChatGPT to write your paper or to create other work products you present as your own, you are cheating. Cheating is wrong, and furthermore, the person you hurt most when you cheat is yourself. Ultimately, whether in work or in life, cheaters will flame out miserably because they lack the skills that the cheating prevented them from learning.

There’s A Reason We Need Ethical AI

I’m not a medical doctor, so I can’t authoritatively comment on the long-term effects of off-label use of Ozempic. But I am a Ph.D. data scientist, and I can say that ChatGPT is positioned as an overly aggressive assistant; technologists have long talked about the value of AI as assisting and enhancing, not replacing and dumbifying the human race. ChatGPT isn’t assisting or enhancing human creativity; it is regurgitating a configuration of the data it was trained on. This does not equate to intelligence. There are many reported instances of ChatGPT confidently presenting mistakes under the guise of authority.

The reality is that neither ChatGPT nor any AI has a conscience. FICO already uses generative AI to produce synthetic training data for robustness and scenario testing, and robotic process automation (RPA) in certain customer-facing interactions in areas like fraud case management and collections. These RPAs are built in a way that is ethical, explainable, responsible and deterministic – absolutely essential qualities when using AI to respond to any human-impacted financial question or circumstance. Without careful management, these AI-driven decisions and interactions can quickly become calloused, full of mistakes, unpredictable and unethical.

Putting ChatGPT into Perspective

I believe that ChatGPT is a cool toy to have fun with. It’s effective at finding and fixing bugs in computer code. Schools are even trying to figure out how to teach with it instead of blocking students’ access. But ChatGPT is also a dangerous AI drug for the mind that, used improperly, saps creative intelligence. It reminds me that Steve Jobs did not let his kids use iPads.

Furthermore, without auditability or regulatory guardrails around it, ChatGPT is not safe to use in customer-facing decisioning.

I am not supporting ChatGPT at FICO given its lack of auditability, interpretability and explainability – this is not the right technology for our company and the financial decisions we enable that affect customers. And I am not the only one with this point of view. Even Sam Altman, the CEO of OpenAI, agrees: “Currently, it’s a mistake to use ChatGPT for important tasks. The system is a glimpse of progress; in terms of robustness and reliability, there is still much work to be done,” he wrote. Full stop.

The world will adapt to ChatGPT just as it has every other technology. Colleges will change their curricula and testing methods; I foresee a return of blue book written exams and oral exams. As for me? I will continue to write blogs based on my original thoughts, without a chatbot.
