Google Just Launched Its Own Version of GPT-4 Called Gemini — But Execs Are Refusing to Answer One Very Important Question The tech giant's latest large language model comes in three sizes.
- Google has released Gemini, its most advanced AI model, with three distinct sizes tailored for various applications.
- Gemini Pro surpasses OpenAI's GPT-3.5 in performance, with future plans to integrate Gemini across Google Cloud and consumer AI applications.
Gemini arrives in a suite of three sizes — each designed for specific tasks: Gemini Ultra, the "largest and most capable model"; Gemini Pro, meant to scale across a broad range of tasks; and Gemini Nano, optimized for mobile use.
Gemini will be integrated across consumer-facing products, powering tools such as Google's Bard chatbot and enabling a more engaging Search Generative Experience.
Starting December 13, developers and enterprise clients will be able to harness the capabilities of Gemini Pro through Google AI Studio and Google Cloud Vertex AI.
Beyond consumer use, Google envisions Gemini fulfilling a versatile range of business needs — transforming customer service interactions, providing astute product recommendations and enabling companies to identify market trends with greater acuity.
Additionally, Gemini's potential extends into creative realms; it could facilitate content creation for marketing campaigns and blogging efforts and offer productivity enhancements through meeting summarizations and streamlined code generation.
One of Gemini's most impressive accomplishments is the Ultra model's performance on the MMLU (massive multitask language understanding) benchmark, where it outperforms human experts across subjects ranging from philosophy to medicine.
"It was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image and video," CEO Sundar Pichai wrote in a blog post Wednesday.
Google remains tight-lipped on how these advancements will be monetized. Meanwhile, the company's recent hardware announcements include the TPU v5p chip, designed to accelerate AI model training and deliver better performance at a more compelling price point than its predecessors.