
ChatGPT: A blessing or a curse?

June 26, 2023

By Carl E. Keller Jr., CPA, Ph.D.; J. Conrad Naegle Jr., CPA, Ph.D.; Gregory P. Tapis, CPA, CISA, Ph.D.

OpenAI recently released its latest version of ChatGPT with both a free and an upgraded “Plus” subscription tier. The latest version, currently available through the subscription service, is often referred to as GPT-4, which stands for Generative Pre-trained Transformer, version 4. Even though OpenAI released its beta version of the chatbot only in November 2022, it took mere months for the application to acquire more than 100 million users, far outpacing the growth of TikTok, Instagram, Facebook, or any other consumer application in history. That level of growth and interest is expected to continue. OpenAI (which is financially backed by Microsoft) is not the only company developing this type of software. Meta has its LLaMA program and Google has its LaMDA software (to be offered under the name “Bard”); both are expected to be serious contenders in the chatbot market in the near future.

Many business professionals find they are receiving emails, notifications, or media stories about the software daily. The interest among finance professionals is certainly understandable: the application can aid creativity, learning, and productivity. Thus, entities such as Morgan Stanley and Stripe are starting to adopt the technology. Other firms, such as JPMorgan and Verizon, are banning or restricting the use of ChatGPT because they are concerned about protecting sensitive client information, proprietary code, or data. The same pattern of enthusiastic adoption or severe restriction is also evident among education and government entities, as has been noted in the popular press.

Background, Benefits and Drawbacks
How does ChatGPT work? Built on a neural network architecture, ChatGPT is a large multimodal generative language model that accepts text and images and provides text answers. The model completed training in early 2022 on a vast corpus (over 570 GB of data, or more than 300 billion words) with a knowledge cutoff in 2021. The initial training data included a variety of text sources, such as books, articles, Wikipedia, and databases found on the internet. The system’s engineers designed the rules and algorithms it uses, in conjunction with the data, to form an answer to a given question. As the system processes queries and data, it forms a response, which it expresses in text form (although OpenAI is also developing software to produce music and images). However, somewhere in the process, ChatGPT is “learning,” so the chatbot’s responses to the same question may change over time.
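For readers curious about what “generative language model” means in practice, the following is a deliberately tiny sketch, not OpenAI’s actual system: a toy model that counts which word follows which in a sample text, then generates a sentence one word at a time. Real models like GPT-4 do something conceptually similar, predicting the next token, but with billions of learned parameters rather than a simple frequency table.

```python
import random
from collections import defaultdict, Counter

def train_bigram(corpus_words):
    """Count which word follows which -- a toy stand-in for the
    statistical patterns a large language model learns at scale."""
    follows = defaultdict(Counter)
    for a, b in zip(corpus_words, corpus_words[1:]):
        follows[a][b] += 1
    return follows

def generate(follows, start, max_tokens=10, seed=0):
    """Repeatedly sample a likely next word -- the same
    'predict the next token' loop, minus the neural network."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_tokens):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        # Sample the next word in proportion to observed frequency.
        words, counts = zip(*candidates.items())
        out.append(rng.choices(words, weights=counts, k=1)[0])
    return " ".join(out)

corpus = "the model reads the data and the model writes the answer".split()
table = train_bigram(corpus)
print(generate(table, "the"))
```

Because the toy model samples from probabilities rather than retrieving stored facts, two runs can produce different sentences, which is, in miniature, why ChatGPT’s answers to the same question can change over time.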

This change in answers is based in part on the setup of the software. The learning/training of ChatGPT is not based on the application memorizing information; rather, it has been designed to be a “reasoning engine,” not a fact database. Although it can be used simply as a fact-finding research tool, the artificial intelligence (AI) component of the system is what makes it special. Given a query, such as a request to write a three-page report on the story of Romeo and Juliet, the system will produce a unique, well-written paper.

However, these special features of ChatGPT also cause a variety of problems. Generative AI and large language models like ChatGPT require massive amounts of computing power to run. Even more computing power is needed if you want the system to use the latest source data available online. Another issue is misinformation. ChatGPT can be fed misinformation, and because the application has no truth filter, its responses may be incorrect as a result. Also, someone who wanted to mislead other people could use the application to spread misinformation. A related issue is the “hallucination problem”: even when the chatbot is fed correct information, ChatGPT will sometimes, applying its own deductive reasoning, produce incorrect conclusions and state them as facts in a very professional-looking report. In other words, ChatGPT sometimes makes things up and presents its fictional output as the truth. For example, ChatGPT has been known to invent a nonexistent court case, complete with a citation in correct format, when researching a tax problem.

Why All the Buzz?
So, if the AI software has these problems, why is everyone so excited about it? Could a person just use Google to get the same information? The answer is yes and no. The information is out on the internet, so you would have access to it. What ChatGPT does is “read” the articles, separate out the pertinent facts, apply some logical reasoning, and provide a written answer. Furthermore, the application’s usefulness is not limited to researching literature and databases for English papers and executive reports; it is now being used to write its own poems and songs, and even computer code. The software has been cross-trained on several computer languages and applications. Thus, the chatbot can even write some basic Excel formulas correctly.

But the real excitement derives from the fact that ChatGPT is getting better all the time. The software is now in its fourth major iteration, and there have been minor revisions as well. According to OpenAI, GPT-4 is 40 percent more likely to produce accurate information than GPT-3.5, its previous version. OpenAI self-reports the chatbot’s results on several standard exams. Perhaps its biggest improvement is on the Uniform Bar Exam, where the company estimates that GPT-4 scored in the 90th percentile, up from the 10th percentile for GPT-3.5. Although the bar exam is oriented to the use of language, the company also estimates that GPT-4’s score on the Graduate Record Examination (GRE) quantitative section would be in the 80th percentile, where GPT-3.5 would have placed in the 25th percentile. The improvements are not expected to stop anytime soon. One reason the company offers a free version of the application (simply by setting up an account at https://chat.openai.com/chat) is that, as individuals use the software, OpenAI can find bugs, inconsistencies, and other problems to fix in future generations of the application. As ChatGPT and other AI chatbots improve over time, one can assume that even more individuals and entities will start using the software. In fact, in some areas of life, they will not be able to avoid it.

Implications for Education and Business
As we enter this new age of using AI software, and even though OpenAI and other companies are working to overcome the technical problems mentioned earlier, businesses should be concerned about problems caused by the applications that can’t be solved by the software creators. For example, educators can see how the software can be used to help in preparing classes and for providing far better tutoring for individual students than anything currently available. However, ChatGPT has received far greater attention for how it might help students cheat on anything conducted outside of a proctored classroom. Online exams, research essays, or computer projects could be aided or written by the chatbot and presented as the student’s unaided work. Thus, when a firm hires a university graduate, will the firm be getting a student who performed at the desired level based on their own individual learning and effort, or a student who used an AI chatbot extensively to achieve their GPA? If they get the latter, would you not expect the new hire (and maybe even your current employees) to be tempted to skip CPE or firm training courses, and to pass any type of online exam by using something like ChatGPT?

As the world continues to get more complex, particularly in the financial services industries, the accounting professional is usually expected to have developed higher-level critical thinking skills to perform their job successfully. Educators and society will need to make sure that students are taught how to think for themselves. If they become accountants, they will need to make difficult decisions involving unique situations while upholding their personal ethical standards, as well as the ethical standards of their company and their profession. It is widely accepted that AI chatbots have a long way to go in learning and applying an acceptable ethical framework, so currently everyone is assuming a human will still be making the decision, with the chatbot simply providing help. However, if a staff worker tries to cut a corner (“be more productive”) and uses a chatbot to write their analysis or report, could any decision based in part on that report be affected not only logically but also legally and ethically?

If your firm decides to utilize an AI chatbot, then it must exercise a great deal of care when it comes to proprietary information. Obviously, you would have to make sure the chatbot doesn’t select something from its database (e.g., patents, trademarks, or copyrights) that is owned by another entity and include it in work that your company later presents as its own. However, a different problem occurs with an application like ChatGPT. As your firm’s employees ask it questions and carry on conversations with the software, ChatGPT is learning. Therefore, if you are using the chatbot to form an innovative new tax strategy for some of your clients, you may be unwittingly exposing your thoughts to your competitors or to your clients. Currently, ChatGPT is a closed system in that its original data has been selected, and it is not searching the internet for the latest information on a given topic. However, every ChatGPT client of OpenAI is accessing the same AI system, so it appears that ChatGPT could share anything in its database, or anything it has learned, with anyone who has access and asks the right questions. That is why OpenAI instructs any user who creates an account not to share any sensitive information when using the software. This issue becomes a greater problem if ChatGPT becomes web-enabled, which the company hopes to accomplish in the future.

To overcome this issue, some companies may make use of an application programming interface (API). An API allows one software application to make use of the functionality of another. For example, the search engine on a company’s website could use ChatGPT’s language model to answer questions, but be limited to specific activities, topics, or conversations. In the same way, the companies currently using ChatGPT are limiting its use by employees. The app is being used as a supplemental tool in the financial services industries, both for completing repetitive tasks and for learning. One example that has appeared in the media is that of a bank manager telling his inexperienced internal auditors to use the software like CliffsNotes, but to be aware that it may not be telling them everything they need to know. Whether your company is adopting the technology or resisting it, most agree that AI chatbots (including ChatGPT) are not currently well suited to critical thinking, making ethical judgments, or protecting sensitive or proprietary information.
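To make the API idea concrete, below is a minimal sketch of how a firm might wrap a chat API behind its own topic restrictions. The endpoint and request shape reflect OpenAI’s public chat-completions API as it existed at the time of writing; the allow-list, topic filter, and helper names are purely our own illustration, not any vendor’s recommended design.

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"
# Illustrative allow-list: the firm decides which topics the chatbot may handle.
ALLOWED_TOPICS = ("invoices", "expense policy", "travel reimbursement")

def build_request(question, model="gpt-3.5-turbo"):
    """Construct the JSON payload for a chat-completion call.
    A system message constrains the assistant to approved topics --
    one simple way a firm can limit what the chatbot is used for."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Answer only questions about: "
                        + ", ".join(ALLOWED_TOPICS) + ". Refuse anything else."},
            {"role": "user", "content": question},
        ],
    }

def is_allowed(question):
    """Client-side pre-filter: reject questions outside the allow-list
    before any text leaves the firm's network for a third-party service."""
    q = question.lower()
    return any(topic in q for topic in ALLOWED_TOPICS)

def ask(question, api_key):
    """Send an allowed question to the API (network call; requires a key)."""
    if not is_allowed(question):
        raise ValueError("Question is outside the approved topics.")
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(question)).encode(),
        headers={"Authorization": "Bearer " + api_key,
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Note that the pre-filter runs before anything is transmitted, which addresses the confidentiality concern raised above: questions containing a firm’s proprietary strategy simply never reach the outside service.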

Carl Keller is an associate professor at Missouri State University. carlkeller@missouristate.edu
Conrad Naegle is an assistant professor at Missouri State University. jconradnaeglejr@MissouriState.edu
Gregory Tapis is an assistant professor at Missouri State University. gregorytapis@missouristate.edu

Reprinted with permission of Missouri Society of CPAs.