Google Cloud is banking on its breadth of first-party, third-party, and open-source models to grab its share of enterprise generative AI (GenAI) budgets. And it’s pushing the boat out to tackle enterprise concerns that GenAI may be another way hyperscalers tighten their grip on critical business infrastructure.
Enterprise leaders tell us they are worried about the influence and control hyperscalers such as Google Cloud, Microsoft, and AWS already exert through the cloud, data, and software services that many enterprise business functions depend on. With Microsoft's investments in OpenAI and Azure, Google's array of LLMs and services, and AWS' Bedrock managed service, concerns about the risk of an emerging oligarchy in AI now extend to the White House.
The US Federal Government’s recently published executive order on AI calls for leveling the playing field: “The Federal Government will promote a fair, open, and competitive ecosystem and marketplace so entrepreneurs can continue to drive innovation. Doing so requires…addressing risks from dominant firms’ use of key assets to disadvantage competitors.”
My informal LinkedIn survey found that 82% of respondents expected the US government to take this line. Like most businesses engaged in GenAI, Google Cloud has already prepared its response.
Google Cloud addresses enterprise risk and data concerns by letting enterprises retain ownership of their data and the value it generates: customers keep ownership of their data, prompts, and outputs, and Google Cloud says it will not even use customer data to improve its models.
Google meets enterprise concerns about potential copyright breaches with what it claims are industry-first levels of indemnity, covering both the training data used to build its GenAI models and the outputs customers generate with Duet AI, Google Workspace, and relevant cloud services.
To meet concerns about governance, Google Cloud has built encryption and data privacy controls into the stack and provides access to attestations where proof of data use, sources, and identity is required.
As our introduction mentioned, Google Cloud is keen to highlight the wide range of GenAI solutions available through its services, relying on an ecosystem of technology providers. This ecosystem approach and support for open source will help allay the market’s fear of the control an oligarchy could exert. However, we must note that Google retains control of access to its ecosystem.
Google’s Model Garden provides models developed by Google, open-source models, and models provided by third parties. Google’s models already support 33 languages, with another 100 in testing. Other Model Garden offerings include Chirp, a speech-to-text model for sentiment detection and virtual agents; Codey, a text-to-code model supporting 20 programming languages; and Imagen, which lets users ask questions about what is visible in images.
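For readers who want a concrete sense of how these models are consumed, here is a minimal sketch of calling one of Google’s first-party text models through the Vertex AI Python SDK. It is illustrative only: the project ID, region, and prompt are placeholders, and model names and the SDK surface may differ from what your Google Cloud environment exposes today.

```python
# Minimal sketch: calling a Google first-party text model via the Vertex AI SDK.
# Assumes `google-cloud-aiplatform` is installed and the environment is
# authenticated against a GCP project with the Vertex AI API enabled.
import vertexai
from vertexai.language_models import TextGenerationModel

# "my-project" and the region are placeholders for your own project settings.
vertexai.init(project="my-project", location="us-central1")

# "text-bison" is one of the Google foundation models surfaced through
# Model Garden at the time of writing.
model = TextGenerationModel.from_pretrained("text-bison")

response = model.predict(
    "Summarize the data-ownership commitments an enterprise should expect "
    "from a cloud provider offering GenAI services.",
    temperature=0.2,
    max_output_tokens=256,
)
print(response.text)
```

Third-party and open-source models in Model Garden follow a similar pattern, typically deployed to a Vertex AI endpoint rather than called through a `from_pretrained` helper.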
Google Cloud is not forgetting its many service provider partners. It has identified model selection, prompt tuning, enterprise architecture, and LMOps (language model management and task adherence) as growing needs that its partners are better placed to take care of.
Google seems to be going all out to be enterprises’ open-friendly partner. It offers data promises, indemnities, built-in governance, and a wide range of technology options designed to meet customer needs. But how well all this meets enterprise concerns about vendor lock-in or the US government’s worries about leveling the playing field remains open to question.