Highlight Report

AI-first, AI-everywhere at Google Cloud Next

In an ever-changing tech landscape, dominance in one area doesn’t guarantee future success. While Google remains numero uno in search, it lags its peers in the cloud. The rise of generative artificial intelligence (GenAI) presents challenges and opportunities. At Google Cloud Next, its flagship cloud conference, Google showcased a host of capabilities integrating AI into its products, including chip investments, new agents, and LLM (large language model) advancements.

Beyond search: Google Cloud the key to diversification

In the pre-internet days, trying to learn a new topic meant visiting the library or asking experts in the field. Then, along came Google, revolutionizing access to information — the phrase “Google it” is now a household staple.

However, Google’s reliance on search revenue is a vulnerability if something disrupts that market. To diversify its revenue streams, Google has invested significantly in the cloud with Google Cloud. While Google Cloud trails market leaders Amazon Web Services (AWS) and Microsoft Azure, its share of Alphabet’s total revenue has been rising, crossing the 10% mark in 2023.

Exhibit 1: On the up: Google Cloud’s share of total Alphabet revenue is increasing

Source: Alphabet Annual Reports, HFS Analysis, 2024

The advent of GenAI challenged Google’s business model, spurring it to set a goal of becoming an AI-first company across its portfolio of products, from advertising to the cloud. That AI-first approach was front and center at Google Cloud Next, the flagship annual event where Google unveils new products and features and convenes the cloud community. Every key announcement had an AI focus, from advancements in semiconductors to meet growing demand for compute power to the use of GenAI across its product suite.

Google’s cloud offerings are now GenAI-infused with Gemini

Google made Gemini 1.5 Pro, its latest GenAI model, available in public preview to its cloud customers. Its multimodal capabilities enable it to process text, audio, video, code, and other content formats within a single context window.
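For teams that want to try the preview programmatically, the sketch below shows a minimal multimodal call through the Vertex AI Python SDK. The project ID, bucket path, and preview model name are illustrative assumptions rather than details from the announcement, and the exact model identifier may change as the preview evolves.

    # Minimal sketch: calling Gemini 1.5 Pro (public preview) via the Vertex AI
    # Python SDK with a multimodal prompt. The project ID, bucket path, and
    # preview model name below are illustrative assumptions, not values from
    # Google's announcement.
    import vertexai
    from vertexai.generative_models import GenerativeModel, Part

    vertexai.init(project="your-gcp-project", location="us-central1")

    # Preview model name as of the time of writing; it may change.
    model = GenerativeModel("gemini-1.5-pro-preview-0409")

    response = model.generate_content([
        # Video and text combined in one request, relying on the model's
        # multimodal input support and large context window.
        Part.from_uri("gs://your-bucket/sample-video.mp4", mime_type="video/mp4"),
        "Summarize the key points of this video in three bullet points.",
    ])
    print(response.text)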

Google introduced Gemini Cloud Assist to help teams design, operate, and optimize their application lifecycle. Gemini Cloud Assist comes with the following key capabilities:

  • Goal-driven design: Helps generate requirement-driven architecture configurations.
  • Guided operations and troubleshooting: Helps with error detection and remediation.
  • Tailored optimizations: Helps identify areas for improvement based on goals such as cost savings, performance, or availability.

Google Cloud also launched Vertex AI Agent Builder, a new GenAI tool for building conversational agents. It enables no-code agent creation using natural language, and to improve accuracy, Google allows agents to be grounded in trusted data sources. The tool aims to accelerate the development, experimentation, and deployment of GenAI-powered applications while benefiting from enterprise-grade security.

Google’s product launches show that its GenAI efforts are moving from model training to infusing GenAI capabilities into all its product offerings. Embedding GenAI in its products is the first step toward its customers deriving value from the technology. Google also sees the shift catalyzed by GenAI as an opportunity to reset the conversation about its offerings as it competes with its larger peers, Microsoft Azure and Amazon Web Services, for market share.

Gemini Code Assist debuts, aiming to improve developer productivity

Google announced Gemini Code Assist, an AI-powered collaborator that helps development teams build, deploy, and operate applications; it was previously known as Duet AI for developers. The product’s features include full codebase awareness, code customization, and an expanded partner ecosystem. To improve the relevance of the code it generates, Google added several companies to that ecosystem, including Stack Overflow, Datadog, DataStax, Elastic, HashiCorp, Neo4j, Pinecone, Redis, SingleStore, and Snyk. The launch pits Gemini Code Assist against competitor products such as GitHub Copilot and Amazon CodeWhisperer.

Code Assist’s enhancements should improve developer productivity, but the tool remains prone to inaccuracies at this early stage, making it an assistant rather than a replacement for developers.

Increasing demand for compute power is fueling a focus on semiconductors

Google announced that the TPU v5p, its latest tensor processing unit (TPU), is now available to developers; it can train models almost three times faster than its predecessor, the TPU v4. Google CEO Sundar Pichai stated, “Google’s investments to develop new AI hardware put us at the forefront of the AI platform shift,” underscoring the scale of the investment and the importance Google places on its underlying hardware. Currently, Nvidia, of which Google is a customer, is the runaway market leader in chips for training AI applications.

While Google improves its own hardware, it will continue to partner with Nvidia. The companies announced that Nvidia’s latest AI platform, Blackwell, will be available to Google Cloud customers in early 2025. The Blackwell platform will come in two variations: the HGX B200, catering to data analytics and high-performance computing, and the GB200 NVL72, designed for model training and real-time inferencing.

Google also showcased its investment in general-purpose CPUs for data centers by launching Google Axion, the first custom Arm-based CPU built by Google. Arm-based chips are known for their energy-efficient architecture, and the chip is expected to significantly improve energy efficiency, reduce costs, and make it easier for companies to hit environmental targets.

The Bottom Line: You couldn’t miss Google’s AI-first vision, but the proof of the pudding will be in the eating.

Being an AI-first company has been one of Google’s mantras in the past year. You could not miss the AI-first vision in its Cloud Next announcements — you might say it was AI everywhere.

Google’s investments in custom chips are significant for diversifying the market and fostering innovation, even while Nvidia remains the market leader. There may be competition further down the line, but the demand for compute power is such that collaboration is the order of the day.

Google’s infusion of Gemini into its cloud offerings and the launch of new agents should make the platform more user-friendly and help customers drive productivity gains. However, with its rival hyperscalers launching GenAI capabilities of their own, these developments are likely to be table stakes. For the buy side, there will be intense competition to infuse AI into cloud offerings, which can potentially unlock significant value for businesses.
