Enterprise leaders are struggling to keep up with today’s turbulent rate of change in artificial intelligence (AI). This glossary will help you stay current.
Our glossary aligns with the three phases of AI we identify in Exhibit 1: foundational, generative, and purposeful.
Source: HFS Research, 2024
Artificial intelligence (AI): Uses software, data, and rules to perform tasks that otherwise require human intelligence and performance. AI was around before data scientists had even thought of machine learning. A rules-based system, such as early chatbots, is an example of AI that does not use machine learning.
Reasoning: In the context of AI, reasoning is the process of making decisions based on data, rules, and logical principles. It enables an AI system to evaluate evidence logically to arrive at a solution. Reasoning is a critical component of AI systems, allowing them to mimic human thinking and make context-aware decisions in applications such as virtual assistants, autonomous vehicles, and scientific research. AI systems draw on various types of human reasoning, including common-sense, deductive, inductive, and abductive reasoning. The field also includes fuzzy logic and automated reasoning.
Fuzzy logic: Allows AI systems to handle uncertain and imprecise data, applying degrees of truth rather than binary (yes/no) decision making.
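To make the idea concrete, a fuzzy system assigns each input a degree of membership rather than a hard yes/no. The sketch below uses a made-up "warm temperature" membership function (the thresholds are invented for illustration, not taken from any particular system):

```python
def warm_membership(temp_c):
    """Degree (0.0 to 1.0) to which a temperature counts as 'warm'.

    A triangular membership function: fully warm at 22 C, not warm
    at all below 15 C or above 29 C. The thresholds are invented.
    """
    if temp_c <= 15 or temp_c >= 29:
        return 0.0
    if temp_c <= 22:
        return (temp_c - 15) / 7
    return (29 - temp_c) / 7

# Rather than a binary warm/not-warm decision, each reading gets a degree of truth
print(warm_membership(18.5))  # 0.5
print(warm_membership(25.0))
```

A fuzzy controller would combine several such degrees of truth across rules before acting, but the core shift from binary to graded decisions is visible even in this single function.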
Automated reasoning: Uses algorithms and logical rules to make decisions without human intervention.
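One classic automated-reasoning technique is forward chaining: repeatedly applying if-then rules to known facts until no new conclusions follow. A minimal sketch, with hypothetical invoice-approval rules invented for illustration:

```python
def forward_chain(facts, rules):
    """Apply if-then rules to known facts until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule when all its premises are established facts
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical business rules, for illustration only
rules = [
    ({"invoice_received", "po_matched"}, "payment_approved"),
    ({"payment_approved"}, "transfer_scheduled"),
]
derived = forward_chain({"invoice_received", "po_matched"}, rules)
print(derived)  # both conclusions are reached without human intervention
```

Each pass derives whatever the current facts support, and the second rule fires only because the first one did, showing how chains of logical rules produce decisions with no human in the loop.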
Machine learning (ML): A subset of artificial intelligence that solves specific tasks by learning from data and making predictions. It spots patterns in data and decides how to act on what it infers from those patterns. The term describes computer systems that can learn and adapt without following fixed rules, combining algorithms with statistical modeling. Machine learning has long been used in dataops (such as automated data governance), sales (lead scoring and forecasting), marketing (ad optimization), logistics (demand forecasting), finance and accounting (invoice processing), and customer support (improving customer workflows).
Neural network: A machine learning model in which data is processed via interconnected nodes (or neurons) in a layered structure resembling structures in the human brain.
Deep learning: A subset of machine learning using a neural network of three layers or more. These networks try to simulate the behavior of the human brain to learn from large amounts of data. Single-layer neural networks can make approximate predictions; additional layers improve accuracy through optimization and cycles of refinement.
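The layered structure both definitions describe can be sketched in a few lines: each layer takes the previous layer's outputs, applies weights and biases, and passes the result through a non-linear function. The weights below are arbitrary numbers chosen purely for illustration (a real network learns them from data):

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums plus biases, then a sigmoid."""
    return [1 / (1 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
            for row, b in zip(weights, biases)]

# A toy 2-input -> 2-hidden -> 1-output network; the weights are arbitrary
x = [0.5, -1.0]
hidden = layer(x, [[0.8, -0.2], [0.4, 0.9]], [0.0, 0.1])
output = layer(hidden, [[1.2, -0.7]], [0.05])
print(output)  # a single value between 0 and 1
```

Stacking more such layers is what makes a network "deep": each additional layer re-combines the previous layer's outputs, which is where the accuracy gains from added depth come from.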
Natural language processing (NLP): The branch of AI concerned with giving computers the ability to understand text and spoken words. NLP combines rule-based modeling of language with statistical, machine learning, and deep learning models for processing human language as data. It is often seen in chatbots and other text-to-text, voice-to-text, voice-to-voice, and text-to-voice interfaces.
Computer vision: Enables computers to identify and understand objects and people in images and videos to perform and automate human-like tasks. Computer vision attempts to see as a human sees and to make sense of what it sees. The latest systems apply deep learning to retain data from what they see, becoming more accurate with increased use.
Generative artificial intelligence (GenAI): A simple way to understand what generative AI does is in the name: it generates. GenAI generates new instances of data, text, images, code, and other media. So, while AI suggests actions and ML analyzes patterns, GenAI generates additional, novel outcomes. GenAI is a type of machine learning; like other ML, it learns from the patterns and structure of the data it is trained on, but it is defined by its ability to generate new data.
Foundation models: To function, GenAI must access its data via a foundation model. A foundation model, also known as a base model, is a machine learning model trained on very large quantities of data. It can be adapted for use across a wide range of tasks. Some businesses are considering building private versions using their proprietary data. This can prove a very expensive investment, but it offers control over the data being accessed, solving the problem of not knowing where answers are derived from, a challenge when your foundation model is trained on publicly available data.
Large language models (LLMs): A large language model is an AI system designed to understand and generate human-language text. LLMs are trained on vast amounts of data; “large” refers to the size of the neural network and the amount of training data. The largest also have more variables (parameters) with which to make predictions, making them better able to understand complex language and generate relevant text. Earlier AI instances could be trained to perform specific tasks effectively, but a single LLM can perform various tasks. For example, the same LLM could answer questions, support a chatbot, summarize data, and translate languages.
Transformers: LLMs are particularly good at understanding and generating natural language. Generative pre-trained transformers (the GPT in ChatGPT) are key to LLMs’ ability to do this at scale and pace because the transformer enables LLMs to process sequences of data (such as text) while simultaneously considering the data’s context. Previous technologies could not do this in parallel.
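The mechanism that lets a transformer weigh a whole sequence's context in parallel is attention: every position scores its relevance against every other position, and those scores weight a blend of the sequence's values. A pure-Python sketch of scaled dot-product attention, using toy two-dimensional vectors rather than a real tokenizer or model:

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention over toy vectors (pure-Python sketch)."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Score this query against every key, scaled by sqrt(dimension)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # Softmax turns the scores into weights that sum to 1
        peak = max(scores)
        exps = [math.exp(s - peak) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # The output is a weighted blend of all value vectors
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# A query matching the first key draws most of its context from that position
out = attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
```

Because each query's scores are computed against all keys at once, every position sees the full context simultaneously, which is the parallelism earlier sequential architectures lacked.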
Prompts: With GenAI, requests for outputs are made through prompts. Users type a description of what they want to see in the output. They can further define and refine this in what becomes a very natural conversation with GenAI, making GenAI systems such as ChatGPT, Bard (Google), Bing Chat (Microsoft, built on OpenAI models), Stable Diffusion, and Midjourney simple for any user to access.
Generative adversarial networks (GANs): Some GenAI systems apply algorithms known as generative adversarial networks (GANs) to generate and improve new data. These combine a generator network (the part that learns from large datasets to generate new data) with a discriminator network (the part that evaluates the new data that is generated). The generator generates, and the discriminator discriminates. The two networks are pitted against each other (hence “adversarial”), with the generator offering data for the discriminator to test. The output is fed back to the generator each time in a continuous improvement cycle.
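The adversarial feedback cycle can be caricatured in a few lines. In a real GAN both networks are trained jointly; here the discriminator is a fixed, hand-written scoring function and the "generator" is a single number, purely to show the loop in which the generator improves using nothing but the discriminator's feedback:

```python
def discriminator(x, real_mean=5.0):
    """A fixed stand-in critic: scores 1.0 when x matches the 'real' data."""
    return 1.0 - ((x - real_mean) / 10.0) ** 2

def generator(param):
    """A trivial generator whose single parameter is its output."""
    return param

param, lr, eps = 0.0, 2.0, 1e-3
for _ in range(200):
    # Generator step: climb the discriminator's score using only its
    # feedback (a finite-difference gradient, since this is a sketch)
    grad = (discriminator(generator(param + eps))
            - discriminator(generator(param - eps))) / (2 * eps)
    param += lr * grad

print(round(generator(param), 2))  # approaches 5.0, the 'real' data's mean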
Multi-modal: The ability to understand and generate content across multiple modes (modalities) of input or output, including text, images, audio, and video. This ability can be applied to image captioning, video summarization, generating images from text or spoken word, and answering questions that might benefit from a combination of outputs such as images, words, and sounds.
Agentic systems: An agentic system can pursue complex goals with limited direct supervision. It can make independent decisions, take action, and adapt to changes in conditions within constraints set by its human orchestrator. The concept is one HFS encapsulates as “Purposeful AI.”
Large action models: While examples of LAMs are being used in newly launched consumer tech in 2024 (Rabbit’s R1, for example), we are more likely to see their use as Phase 3 gets underway. LAMs use neuro-symbolic models and learn from human intention and interaction with interfaces to copy actions such as scrolling, clicking, and typing into boxes, eliminating the need for users to jump in, out, and between apps to complete tasks. LAMs work well when paired with LLMs. The LLM applies NLP to understand what a user is setting as a task. The LAM then divides the task into steps and carries them out in real time.
Neuro-symbolic models: Neuro-symbolic models integrate neural and symbolic AI architectures to support reasoning and learning. Combining the two offers a bridge between low-level, data-intensive perception and high-level, logical reasoning.
Watch this space: More new terms will emerge as Phase 3 gets properly underway from 2025 onwards. We keep track and keep you informed of the key concepts you’ll need to apply in your business.
Language matters. Take large language models, for example. Two years ago, few outside AI labs ever had cause to reference them. Today, anyone in the boardroom who has to ask what LLM stands for marks themselves out as something of a Luddite, as surely as putting in a request for a new pager would.
This guide will help you stay in the game. We’ll make regular updates as Phase 2 beds in and Phase 3 kicks off.
Further reading
You may also find our previously published generative AI (GenAI) explainer a useful reference: How business leaders can take control of the GenAI conversation.
AI is essential to making the efficiency, productivity, cost, revenue, and market valuation gains we identified in our research, Your Generative Enterprise™ playbook for the future.