
Google And Hugging Face Partner To Advance Generative AI With Open Models



Google and Hugging Face have announced a strategic partnership aimed at advancing open AI and machine learning development.

This collaboration integrates Hugging Face’s platform with Google Cloud’s infrastructure, including Vertex AI, to make generative AI more accessible and impactful for developers. The partnership will enable Hugging Face users and Google Cloud customers to easily deploy models for production on Google Cloud with Inference Endpoints, accelerate applications with TPUs on Hugging Face Spaces, and manage usage through their Google Cloud account.
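
As a rough sketch of what that deployment flow could look like, here is a minimal example using the huggingface_hub client’s create_inference_endpoint helper. The vendor, region and instance identifiers below are placeholder assumptions, since the Google Cloud-specific values were not yet public.

```python
from huggingface_hub import create_inference_endpoint

# Sketch: deploy an open model as a managed Inference Endpoint.
# vendor="gcp" and the instance identifiers below are assumptions;
# the actual values depend on what the integration exposes.
endpoint = create_inference_endpoint(
    "my-mistral-endpoint",
    repository="mistralai/Mistral-7B-Instruct-v0.2",
    framework="pytorch",
    task="text-generation",
    vendor="gcp",                 # assumed vendor identifier
    region="us-central1",         # assumed region name
    accelerator="gpu",
    instance_size="x1",           # placeholder sizing
    instance_type="placeholder",  # placeholder instance type
)

endpoint.wait()  # block until the endpoint is running
print(endpoint.client.text_generation("What are TPUs?"))
```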

Developers will be able to train, tune and serve open models quickly and cost-effectively on Google Cloud, leveraging AI-optimized infrastructure such as TPUs and GPUs. The partnership also supports Google Kubernetes Engine deployments, allowing for the creation of new generative AI applications.
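
For a concrete sense of the “train, tune and serve” workflow, here is a small self-contained fine-tuning sketch using the transformers Trainer. The model and dataset names are common public examples rather than anything specific to the partnership; the same script would run unchanged on a Google Cloud GPU or TPU VM.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Illustrative fine-tune of a small open model on a public dataset.
dataset = load_dataset("imdb", split="train[:1%]")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,
    num_train_epochs=1,
)

Trainer(model=model, args=args, train_dataset=dataset).train()
```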

This move is seen as a significant AI play for Alphabet, Google’s parent company, and has drawn comparisons to the collaboration between Microsoft and OpenAI, although Hugging Face’s head of product, Jeff Boudier, noted that the Google-Hugging Face partnership is quite different.

Google’s Tensor Processing Units (TPUs) are specialized hardware accelerators developed to speed up machine learning tasks, particularly those involving large matrix operations. Unlike general-purpose graphics processing units, which are versatile across many parallel-computing workloads, TPUs are purpose-built for AI and ML workloads, focusing on tensor operations to achieve higher throughput. They are also designed to reduce power consumption per operation, which can translate into lower energy costs and a smaller carbon footprint. With this partnership, Hugging Face users will be able to take advantage of TPUs available through Google Cloud.
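
To make the tensor-operations point concrete, here is a tiny JAX sketch of the kind of workload TPUs are built for. On a Cloud TPU VM, jax.devices() lists the TPU cores and the jit-compiled matrix multiply is dispatched to the chip’s matrix units; the same code falls back to CPU or GPU elsewhere.

```python
import jax
import jax.numpy as jnp

# On a Cloud TPU VM, JAX picks up the TPU cores automatically;
# elsewhere this runs on CPU/GPU with identical code.
print(jax.devices())

key = jax.random.PRNGKey(0)
a = jax.random.normal(key, (4096, 4096))
b = jax.random.normal(key, (4096, 4096))

# jit compiles the function with XLA; on TPU the matmul maps onto
# the matrix multiply units that give TPUs their speed advantage.
matmul = jax.jit(lambda x, y: jnp.dot(x, y))
c = matmul(a, b).block_until_ready()
print(c.shape)
```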

Vertex AI is Google’s managed machine learning and MLOps platform on Google Cloud. With this integration, Hugging Face users can target Vertex AI as the deployment platform to host and manage open models. Alternatively, they can choose GKE, Google Cloud’s managed Kubernetes service, which offers finer-grained control and customization.
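
Here is a hedged sketch of what the Vertex AI path might look like with the google-cloud-aiplatform SDK. The project ID, serving container image and machine shapes are placeholders, since the article does not specify how Hugging Face models will be packaged for Vertex AI.

```python
from google.cloud import aiplatform

# Placeholder project/region; authentication comes from the
# environment (e.g. gcloud application-default credentials).
aiplatform.init(project="my-gcp-project", location="us-central1")

# Register a model backed by a serving container. The image URI is a
# placeholder for whatever serving image the integration provides.
model = aiplatform.Model.upload(
    display_name="hf-open-model",
    serving_container_image_uri="us-docker.pkg.dev/my-project/serving/hf-tgi:latest",
)

# Deploy to a managed endpoint; machine and accelerator are assumptions.
endpoint = model.deploy(
    machine_type="n1-standard-8",
    accelerator_type="NVIDIA_TESLA_T4",
    accelerator_count=1,
)

print(endpoint.predict(instances=[{"prompt": "Hello from Vertex AI"}]))
```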

Hugging Face has attracted significant investment from tech giants, including Google. In its Series D funding round, Hugging Face raised $235 million, with participation from Google, Amazon, Nvidia, Intel, AMD, Qualcomm, IBM, Salesforce and others, doubling the startup’s valuation to $4.5 billion. With its commitment to open source and open models, Hugging Face has quickly become the preferred platform for hosting models, datasets and inference endpoints. Almost all major open model providers, such as Meta, Microsoft and Mistral, make their models available on the Hugging Face Hub.

Google has foundation models that are exclusively available on its public cloud platform. Gemini, one of the top-performing large language models, was announced last month. Other models like Imagen, Chirp and Codey are part of the Vertex AI offering. With the integration of Hugging Face with Google Cloud, customers get the choice of proprietary models and open models for building and deploying generative AI applications in the cloud.

The partnership between Google and Hugging Face is expected to democratize AI by making it easier for companies to build their own AI systems using open models and technologies. As Hugging Face cements its position as the central hub for open-source AI, this collaboration will likely accelerate the growth of its repository of AI-related software.

The new capabilities, including Vertex AI and GKE deployment options, are expected to be available to Hugging Face Hub users in the first half of 2024.



