Generative AI is rocking the business world. It summarizes, answers questions, and creates content and code. But what is it?
Any business leader facing the clamor to use generative AI (GenAI) should read our POV Dos and don’ts and potential costs of GenAI to understand appropriate use cases. But when an emerging technology arrives with the promise of cost take-out across the enterprise, it’s worth taking a moment to get up to speed with the key terms so you can pull the conversation back into line when the techies start darting off into uncharted territories. What is it you are being offered when they tell you GenAI is the solution?
Let’s start by establishing where GenAI sits in the world of artificial intelligence (AI). GenAI is a form of machine learning, and machine learning is a subset of AI. Let’s be clear about what each of these terms describes.
Exhibit 1: Where GenAI sits in the world of artificial intelligence. Source: HFS Research, 2023
Artificial intelligence uses software, data, and rules to perform tasks that would otherwise require human intelligence. AI was around before data scientists had even thought of machine learning.
Rules-based systems, such as early chatbots, are an example of AI that does not use machine learning. The rules tell the bot how to answer questions and constrain its ability to generate responses: the bot can only follow the pre-defined rules.
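To make the distinction concrete, here is a minimal sketch (in Python, purely for illustration) of how a rules-based bot works: every answer is written by hand in advance, so the system can never say anything its rules don't already contain.

```python
# A hand-built rule set: the bot can only ever return these pre-defined answers.
RULES = {
    "opening hours": "We are open 9am to 5pm, Monday to Friday.",
    "refund": "Refunds are processed within 14 days of receiving the returned item.",
}

def rules_bot(question: str) -> str:
    """Match the question against keywords; no matching rule means no answer."""
    for keyword, answer in RULES.items():
        if keyword in question.lower():
            return answer
    return "Sorry, I can't help with that."

print(rules_bot("What are your opening hours?"))  # -> the pre-written hours answer
```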
Machine learning (ML) is a subset of artificial intelligence that solves specific tasks by learning from data and making predictions. Rather than following set rules, machine learning systems combine algorithms and statistical modeling to spot patterns in data, decide for themselves how to act on what they infer from those patterns, and adapt as they learn.
Machine learning has long been used in dataOps (such as automated data governance), sales (lead scoring and forecasting), marketing (ad optimization), logistics (demand forecasting), finance and accounting (invoice processing), and customer support (improving customer workflows).
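To see what "learning from data and making predictions" looks like in practice, here is a minimal sketch of the lead-scoring use case mentioned above, assuming Python and scikit-learn; the features and labels are invented purely for illustration.

```python
# A toy lead-scoring model: learn from past leads, then predict for a new one.
from sklearn.linear_model import LogisticRegression

# Each historical lead: [pages visited, emails opened]; label 1 = became a customer.
X = [[1, 0], [2, 1], [3, 0], [8, 5], [10, 4], [9, 6]]
y = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X, y)  # the model infers the pattern linking engagement to conversion

new_lead = [[7, 3]]
print(model.predict(new_lead))        # predicted class for the new lead
print(model.predict_proba(new_lead))  # and the probabilities behind that prediction
```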
A simple way to understand what generative AI does is in the name: it is generative. GenAI generates new instances of text, images, code, and other media. So, while AI suggests actions and ML analyzes patterns, GenAI generates novel outputs. GenAI is a type of machine learning: it, too, learns from the patterns and structure of the data it is trained on, but it is defined by its ability to generate new data.
Exhibit 2 shows use cases across domains and industries.
Source: IBM Consulting and HFS Research, 2023
But before we get too excited, it is important to note that the new data GenAI delivers is entirely shaped by the input it receives. The “new data” is derived from the old data, sharing the characteristics of the patterns and structures identified in the training set. To return to an adage: garbage in, garbage out. Don’t expect eureka moments. GenAI can’t think up anything that is not already present in the data.
To function, GenAI must access its data via a foundation model. A foundation model, also known as a base model, is a machine learning model trained on very large quantities of data that can be adapted for use across a wide range of tasks. Some businesses are considering building their own. This can prove a very expensive investment, but it offers control over the data being accessed, solving the problem of not knowing where an open GenAI system derives its answers from.
Large language models (LLMs) are a type of foundation model. So are visual foundation models (VFMs). LLMs and VFMs are also being used in combination for task-specific models (to convert text to images, for example).
Some GenAI systems apply algorithms known as generative adversarial networks (GANs) to generate and improve new data. These combine a generator network (the part that learns from large datasets to generate new data) with a discriminator network (the part that evaluates the new data that is generated). The generator generates and the discriminator discriminates, sorting the wheat from the chaff and feeding its verdicts back to the generator in a continuous cycle of improvement.
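For the technically curious, here is a minimal sketch of that generator-versus-discriminator loop, assuming PyTorch; the one-dimensional “training data” and tiny networks are illustrative stand-ins, not a production GAN.

```python
# A toy GAN: the generator learns to mimic a simple data distribution,
# while the discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, 1) * 0.5 + 2.0            # the "training set": values around 2.0
    fake = generator(torch.randn(32, latent_dim))     # candidate data from random noise

    # Discriminator step: learn to score real samples high and generated ones low.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to produce samples the discriminator accepts as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(generator(torch.randn(5, latent_dim)).detach())  # generated samples drift toward 2.0
```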
Thanks to GenAI’s natural language processing capabilities, requests for outputs are made using prompts. Users type a description of what they want from the output and can further define and refine it in what becomes a very natural conversation with GenAI, making GenAI systems such as ChatGPT (OpenAI), Bard (Google), Bing Chat (Microsoft’s chatbot, built on OpenAI’s models), Stable Diffusion, and Midjourney simple for any user to access. This simplicity explains the rapid rise in users; by May 2023, ChatGPT had broken the 100 million user mark.
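Behind the chat window, the same prompt-and-response exchange can also be driven programmatically. The sketch below assumes OpenAI’s Python SDK (v1.x) and uses a placeholder model name and prompt; substitute whatever model and request apply to you.

```python
# A minimal prompt sent through an API rather than a chat window.
from openai import OpenAI

client = OpenAI()  # expects an API key in the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; use whichever model you have access to
    messages=[
        {"role": "user", "content": "Summarize our returns policy in three bullet points."}
    ],
)
print(response.choices[0].message.content)  # the generated output
```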
LLMs are particularly good at understanding and generating natural language. Generative pre-trained transformers (the GPT in ChatGPT) are key to LLMs’ ability to do this at scale and pace because the transformer enables LLMs to process whole sequences of data (such as text) in parallel while simultaneously weighing the context of every element in the sequence. Previous approaches had to work through such sequences step by step.
Earlier instances of AI could be trained to effectively perform specific tasks, but a single LLM can perform a range of tasks. For example, the same LLM could answer questions, support a chatbot, summarize data, and translate languages.
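The sketch below illustrates that versatility using the open-source Hugging Face transformers library; the choice of model (google/flan-t5-small, a small instruction-tuned model) is an assumption made for illustration, but the same single model handles translation, summarization, and question answering purely by changing the prompt.

```python
# One model, several tasks, switched by the prompt alone.
from transformers import pipeline

llm = pipeline("text2text-generation", model="google/flan-t5-small")

prompts = [
    "Translate English to German: Where is the train station?",
    "Summarize: Generative AI creates new text, images, and code from patterns in its training data.",
    "Answer the question: What is the capital of France?",
]
for prompt in prompts:
    print(llm(prompt)[0]["generated_text"])
```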
Source: HFS Research, 2023
Every emerging technology comes with its own set of buzzwords and shorthand. These can often become a secret language, excluding business decision makers from what should be business-impact conversations. Get to know your LLMs from your MLs so you have the confidence to wade in when the conversation gets technical and can remind all parties that the focus must remain on the business outcomes the tech can help you achieve.