Editor’s note: this is the second in a two-part analysis on Ethical AI, Explainable AI, and why these are needed amid the rush to embrace and use Generative AI.
In the first part of this analysis, I made the case that as AI matures, and as Generative AI puts the technology into so many more hands while sparking so much new AI development, the need to focus on Ethical AI and Explainable AI is gaining momentum.
In this second installment, I delve more into Generative AI and why it is so rapidly advancing the need for ethics and explainability.
I’ll start out with an assertion: Ethical AI and Explainable AI must serve as foundational elements of Generative AI initiatives going forward.
To hear practitioner and platform insights on how solutions such as ChatGPT will impact the future of work, customer experience, data strategy, and cybersecurity, make sure to register for your on-demand pass to Acceleration Economy’s Generative AI Digital Summit.
Ethical and Explainable AI are fundamental to properly approaching, governing, and capitalizing on the vast potential presented by Generative AI. These two elements also help guide you through the noise as many companies have prefixed AI with “generative” in the hopes of gaining attention or selling you new software.
What Generative AI Is
I’d like to take a step back and describe what Generative AI is, which will help you understand why ethics and explainability are so important. According to Nvidia, a strategic partner of Oracle that uses the latter’s OCI for its AI services, Generative AI “enables users to quickly generate new content based on a variety of inputs. Inputs and outputs to these models can include text, images, sounds, animation, 3D models, or other types of data.”
Which companies are the most important vendors in AI and hyperautomation? Check out the Acceleration Economy AI/Hyperautomation Top 10 Shortlist.
Now, I’m not looking to bash Generative AI. However, something important to note here is that the word “new” in “new content” is something of a marketing construct: The content created is seemingly new, but it is, in fact, 100% dependent on the training data gathered, on humans entering prompts in unique ways, and on “inspiration” drawn from things humans have already created. Nonetheless, we have seen truly amazing things as a result of Generative AI’s addition to the creativity toolbox.
A Copilot for Humans
When Microsoft announced Microsoft Copilot in March 2023, the company purposefully stated, “With Copilot, you’re always in control. You decide what to keep, modify or discard.” This means that there are humans in the loop as AI acts as the copilot — not a replacement pilot.
Beyond Microsoft’s goals and intentions for Copilot, the reality is that AI should always be an accompaniment to people in order to fuel creative ideas, art, music, videos, and even new ways to interact or to make the world more accessible for everyone. These are all things that only humans can control. And they speak to the pressing need to keep AI ethical and explainable.
Why You Should Care About Generative AI
Let’s set aside the hype of Generative AI and focus on its impact on businesses, technology, employees, and partners across two specific dimensions: cost and people.
Cost Considerations
Behind the Generative AI tools being used (ChatGPT, Azure OpenAI, Google Bard) are Large Language Models (LLMs) that are expensive to operate and maintain. One small online gaming company was seeing a monthly bill of $200,000 as the number of users kept growing.
If you recall, when cloud services were first launched, the biggest draw was the easy-to-understand subscription pricing: per user, per month. Simple and straightforward.
Today, we see a lot of cloud services based on a compute consumption model. This means that the more compute power you consume, the more you pay. It brings to mind the old price model for texting and those sky-high phone bills that were a bit of a rude awakening.
In the case of Generative AI, pricing is token-based. I had to dig into this a bit to really understand how a company can get a handle on its potential costs and thereby create a budget.
Thankfully, someone “did the math” on this and provided a great explanation of how to calculate the costs using OpenAI. OpenAI’s pricing page describes tokens as “pieces of words used for natural language processing. For English text, 1 token is approximately 4 characters or 0.75 words. As a point of reference, the collected works of Shakespeare are about 900,000 words or 1.2M tokens.” OpenAI also offers an interactive Tokenizer tool with a counter that tells you how many tokens are in your text.
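To make that rule of thumb concrete, here is a minimal Python sketch. It is not an official OpenAI utility; it simply wraps the approximations quoted above in a hypothetical `estimate_tokens` helper (for exact counts, use OpenAI’s Tokenizer tool or its tiktoken library):

```python
# Rough token estimator based on OpenAI's published rules of thumb:
# 1 token ~= 4 characters or ~0.75 words for English text.

def estimate_tokens(text: str) -> int:
    """Estimate the token count of an English string."""
    by_chars = len(text) / 4             # ~4 characters per token
    by_words = len(text.split()) / 0.75  # ~0.75 words per token
    return round((by_chars + by_words) / 2)

sample = "Ethical AI and Explainable AI must serve as foundational elements."
print(estimate_tokens(sample))  # ~15 tokens by this estimate

# Sanity check against the Shakespeare reference point quoted above:
print(round(900_000 / 0.75))    # 1200000, i.e. ~1.2M tokens
```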
What does this pricing model mean for your business? First, finance and technology teams will need to be trained or upskilled to truly understand the costs of using Generative AI. Second, employees will need to be trained to write effective prompts that align with how the company intends to use Generative AI.
A Hackernoon post provided a great real-world scenario and set of steps for calculating the cost of a customer service chatbot. The chatbot must:
- analyze all the messages visitors send,
- extract the entities (e.g., product names, product categories),
- and assign each an appropriate label.
You have ~15,000 visitors per month, and every visitor sends 3 requests twice a week. In this scenario, we have 360K requests per month. If we take the average length of the input and output from the experiment (~1,800 and 80 tokens) as representative values, we can easily calculate the price of one request.

The cost of using GPT-3 (Davinci model) in the analyzed case would be ~$14.4K per month. It’s important to note, however, that this is only a simplified simulation, and its results are not fully representative.
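As a sanity check on that arithmetic, here is a minimal Python sketch of the same calculation. The traffic and token figures come from the scenario above; the $0.02-per-1K-token Davinci price is my assumption based on OpenAI’s list pricing at the time the post was written, and the ~$14.4K headline figure appears to round tokens per request up toward ~2,000:

```python
# Reproducing the chatbot cost estimate above. Traffic and token figures come
# from the scenario; the Davinci price per 1K tokens is an assumed list price.

VISITORS_PER_MONTH = 15_000
REQUESTS_PER_VISIT = 3
VISITS_PER_WEEK = 2
WEEKS_PER_MONTH = 4

AVG_INPUT_TOKENS = 1_800    # average prompt length in the scenario
AVG_OUTPUT_TOKENS = 80      # average completion length in the scenario
PRICE_PER_1K_TOKENS = 0.02  # assumed GPT-3 Davinci list price, USD

requests_per_month = (VISITORS_PER_MONTH * REQUESTS_PER_VISIT
                      * VISITS_PER_WEEK * WEEKS_PER_MONTH)
tokens_per_request = AVG_INPUT_TOKENS + AVG_OUTPUT_TOKENS
cost_per_request = tokens_per_request / 1_000 * PRICE_PER_1K_TOKENS
monthly_cost = requests_per_month * cost_per_request

print(f"{requests_per_month:,} requests per month")  # 360,000
print(f"${cost_per_request:.4f} per request")        # $0.0376
print(f"${monthly_cost:,.0f} per month")              # ~$13,536; ~$14.4K at ~2,000 tokens/request
```

Even if the exact numbers are debatable, the takeaway stands: per-token pricing means costs scale with both traffic and prompt length, which is why budgeting and prompt training matter.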
People Considerations
Many times throughout this analysis, I have mentioned the importance of people when it comes to AI. Of course, it’s understandable to experience FUD (fear, uncertainty, and doubt) about AI replacing people or certain jobs or roles. However, this must be balanced with an understanding that with any new technological advancement, new jobs will be created, as will new ways of working.
As a leader or executive within your company, ask yourself these questions about your employees:
- Can they be trained/upskilled? If so, how much time and cost is involved to keep them trained?
- Are the right people in the right positions? Often someone on one team has the right mix of experience and skills but is in a totally unrelated role; those may be the easiest people to train.
- Do I need to let people go and bring in different talent? This is a harder question to examine as this could be a catch-22. Retaining current talent could be more costly in the long run and hurt the company’s competitive advantage in the market. Bringing in new talent at the right time could set the company on the best path forward for success.
Closing Thoughts
As the two preceding sections indicate, Generative AI has significant implications for the costs of running a business and its AI applications, as well as for the people who work in that business today and in the future.
Because costs and people are central to the success of any business, and because Generative AI impacts them so directly, the case is even stronger for having ethics and explainability in place before moving ahead with Generative AI too rapidly. Ethical AI, Explainable AI, and Generative AI can coexist, but they should be squarely focused on benefits to core business functions, not on riding a hype cycle.
I would recommend getting outside direction, such as Acceleration Economy’s Advisory Services, to help guide your organization through the process, goals, and intended usage and outcomes; and ensure your company’s AI use is legal, compliant, secure, trustworthy, and reliable.
Success with Generative AI is clearly possible, but it’s up to you, so stay real and stay you.
Looking for real-world insights into artificial intelligence and hyperautomation? Subscribe to the AI and Hyperautomation channel.