At the Google I/O Connect China 2024 developer conference, the diversification of AI large language models was a key focus.
Google introduced three different Gemini model variants for app development:
- Gemini Nano: Most efficient model for on-device tasks
- Gemini 1.5 Flash: Fastest and most economical model for high-volume tasks
- Gemini 1.5 Pro: Open to all developers, with support for a 2-million-token context window
Both Gemini 1.5 Pro and 1.5 Flash now support context caching, which lets repeated requests reuse an already-processed prompt prefix to reduce cost and compute.
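The idea behind context caching can be illustrated with a toy cache keyed by a content hash: the large shared prefix is processed once, and later requests reuse the stored result. This is a conceptual sketch only, not the Vertex AI API; the `expensive_process` callback is a hypothetical stand-in for prompt prefill.

```python
import hashlib

class ContextCache:
    """Toy illustration of context caching: process a large shared
    prefix once, then reuse the stored result on later requests."""

    def __init__(self):
        self._store = {}
        self.misses = 0  # number of times the full cost was paid

    def _key(self, text):
        # Cache key derived from the prefix content itself
        return hashlib.sha256(text.encode()).hexdigest()

    def process(self, prefix, expensive_process):
        key = self._key(prefix)
        if key not in self._store:
            # First request pays the full processing cost
            self.misses += 1
            self._store[key] = expensive_process(prefix)
        # Subsequent requests with the same prefix hit the cache
        return self._store[key]

cache = ContextCache()
big_document = "shared context " * 10_000  # e.g. a long document or codebase

# Hypothetical stand-in for the costly prefill over the shared context:
# here we just count whitespace-separated tokens.
count_tokens = lambda text: len(text.split())

first = cache.process(big_document, count_tokens)
second = cache.process(big_document, count_tokens)  # cache hit, no recompute
```

In the real API, a cached context additionally carries a time-to-live, after which it expires and must be recreated.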
Google also launched new 9B and 27B parameter versions of Gemma, Gemini's open-weights sibling model family. The 27B version is optimized to run on a single NVIDIA GPU on Google Cloud or a single TPU host on Vertex AI.
Gemini models are now integrated into development tools like Android Studio, Chrome DevTools, Project IDX, Colab, VS Code, IntelliJ and Firebase to assist with coding, debugging, testing, documentation and code comprehension.
For Flutter, Google released Flutter 3.24 and Dart 3.5, featuring an early preview of the new "Flutter GPU" API, which lets developers write low-level graphics rendering code directly in Dart.
Google also launched several new packages, such as flutter_scene for importing and rendering 3D assets.
An early preview of Android Studio on Project IDX was introduced, running entirely in the browser. New components such as Firebase AI Monitoring and Checks AI Safety were also released to help ensure reliability, compliance, and security when building AI-powered apps.
For open-source projects, Google launched Project Oscar, a platform for AI agents that assist with project maintenance; it initially supports the Go project, which has roughly 93,000 code submissions and 2,000 contributors.
For web development, Google introduced:
- Speculation Rules API for instant navigation
- View Transitions API for improved page transitions
- Chrome DevTools with Gemini integration for development efficiency
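The Speculation Rules API is configured declaratively with a JSON block embedded in the page. A minimal sketch (the URL is a placeholder) asks the browser to prerender a likely next page so navigation to it feels instant:

```html
<script type="speculationrules">
{
  "prerender": [
    { "source": "list", "urls": ["/next-article.html"] }
  ]
}
</script>
```

Document rules with URL patterns and an `eagerness` hint are also supported, letting the browser decide which in-page links to prefetch or prerender.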
For Android native app development, new offerings include:
- On-device Gemini Nano model and the AICore system service
- Kotlin Multiplatform for cross-platform code sharing
- Multiplatform support added to Jetpack libraries
- Android Device Streaming beta for remote device testing
- Gemini integration in Android Studio stable version
For cloud development, Google outlined a new approach with:
- New Vertex AI features like context caching
- 150+ new models including Gemini, Gemma, Anthropic Claude, Meta Llama and Hugging Face models
- Cross-cloud capabilities with optimized PostgreSQL and BigQuery Omni
- Automated infrastructure setup in 45 minutes
- Gemini Code Assist IDE plugin and database integrations
While Google is pushing to commercialize its large language models, output quality and retrieval capabilities still leave room for improvement; recent tests have surfaced issues with logical reasoning and math problem-solving.
Additionally, the rapid growth of AI is leading to increased power consumption and carbon emissions from data centers, posing environmental challenges that will need to be addressed.