While Google I/O has served as a venue for some of the company’s biggest debuts over the years, the conference has always had its roots in developer news.
This post will be updated throughout the Google I/O 2023 developer keynote with the latest news from across the company’s development platforms, as well as news from the “What’s New” sessions afterward. We’re expecting news from Android, Flutter, Jetpack, and Firebase, as well as many AI-related announcements.
The Google Play Store has a few tricks up its sleeve to help developers get their apps onto more devices. For starters, you can now display a custom listing for “inactive users,” explaining why they “should give your app or game another chance.”
However, the real highlight of the day is a generative AI tool that can help you write a proper Play Store listing for your app (in English). Just fill in a few blanks and the AI will generate the rest, including the short description, full description, and more.
Meanwhile, users will be able to get a better idea of your app based on its reviews, as the Play Store will soon offer high-level summaries of reviews. These will appear as quotes in a new “What users are saying” section and will include a count of how many reviewers had a similar sentiment.
The Play Console is gaining a new “Price Experiments” tool that lets you run tests to see whether a particular price tag is more or less effective at driving sales. Similarly, an app’s Play Store listing will soon be able to surface “featured products.”
To help make sure users aren’t experiencing issues due to using an outdated version of an app, the Play Store will soon offer “automatic update prompts” if they experience a crash and “a more stable version is available.” You’ll also be able to manually choose to prompt updates for users on specific versions. This feature is coming soon to “all apps built with app bundles.”
Lastly, Google has updated its Play Console app, rebuilt from the ground up using Flutter. Optimized for “modern developer needs,” the new Play Console app is designed to be customizable, showing only the metrics you need most, while also integrating with the Play Console “Inbox” to help you stay up to date. The new experience is available now in open beta testing.
This year, Android Studio is marking the 10-year anniversary of its original announcement at I/O 2013. Alongside touting features of the upcoming “Giraffe” release, Google is now detailing the major improvements to expect from Android Studio Hedgehog.
The headlining feature, staying in theme with the rest of Google I/O 2023, is Studio Bot, “an AI-powered conversational experience.” According to the company, Studio Bot is still “in its very early days,” but it should be able to generate useful code for your app, suggest fixes for errors in your code, and answer general questions about Android development.
Studio Bot is built on top of “Codey,” Google’s “foundation model for coding.” For now, Studio Bot is only available to Android developers in the United States through Android Studio Hedgehog or newer.
Google particularly highlighted the privacy practices in place for Studio Bot, noting that your private source code is not shared with Google. Instead, “only the chat dialogue between you and Studio Bot is shared.”
Live Edit, a feature that makes it quick and easy to see changes to your Jetpack Compose code reflected on your device, is ready to preview in the beta version of Giraffe, while the company says “additional improvements in error handling and reporting” will arrive with Hedgehog.
Relatedly, Android Studio’s “Layout Inspector” is getting an upgrade with Hedgehog, overlaying its inspection details directly onto a live video stream of your connected phone or tablet. This should improve the tool’s performance while also making it more directly actionable.
With Android 13, Google made it possible for multilingual users to prefer a particular language for certain apps, such as using their phone in English and using an app in Hindi. However, this still required a bit of work from developers, even if the necessary translations were already in place. With version 8.1 or higher of the Android Gradle plugin, per-app language settings can now be supported automatically.
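Per Google’s release notes, the automatic support amounts to a single opt-in flag plus a declaration of your app’s default locale. A minimal sketch of the setup, assuming AGP 8.1+ and an app whose unqualified resources are US English:

```kotlin
// build.gradle.kts (module level) – requires Android Gradle plugin 8.1 or higher
android {
    androidResources {
        // Auto-generates the LocaleConfig from your res/values-* folders,
        // so per-app language settings work without hand-written XML
        generateLocaleConfig = true
    }
}
```

Alongside this, the default locale is declared in a `res/resources.properties` file (e.g. `unqualifiedResLocale=en-US`), and the plugin derives the rest of the supported-locale list from the translation folders already in your project.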
In Android Studio Giraffe and newer, you can find a new “Android SDK Upgrade Assistant,” which walks you through the changes you need to make to upgrade your app to a newer version of Android. As the Google Play Store – and, more recently, Android itself – has been gradually increasing its minimum API requirements, this tool will surely become invaluable.
Properly developing for Android can be an expensive venture, as there are many different forms of hardware to target, and it’s unrealistic to expect an independent developer to pay to have multiple devices on which to test. To help address that, Android Studio and Firebase are launching an “Android Device Streaming” service that can stream “remote physical Google Pixel devices” to which you can deploy and test your app. Enrollment is open now.
A new “Power Profiler” in Android Studio Hedgehog is able to directly measure the amount of power consumed by your app across the CPUs, GPU, camera, cell modem, and more. Google offers an example of this being useful to “A/B test multiple algorithms of your video calling app to optimize power consumed by the camera sensor.”
For Jetpack Compose developers, when debugging Compose UI code in Android Studio, you’ll now get more information about the parameters of your Composable functions. This will make it easier to understand what parameters changed and caused your app to recompose.
Lastly, alongside the general improvements from the IntelliJ 2023.1 release, Android Studio Hedgehog also includes more tweaks to the “New UI” that JetBrains has been working on for all of its tools. The biggest highlight here is the new “Compact Mode,” while Google has included some Android-specific changes such as “updating the main toolbar, tool windows, and new iconography.”
In recent years, Jetpack Compose has become Google’s recommended way to create Android applications for Wear OS, phones, and tablets/foldables. The framework still has some catching up to do with traditional Android “Views,” but Compose’s declarative nature, its use of the Kotlin language, and the way it makes modern Material Design available to developers result in a winning formula.
In terms of performance, an upcoming release of Compose will use a “new and more efficient system” for modifiers, which resulted in “an average 22% performance gain” for text. Plus, there’s support for the latest set of emojis.
You’ll also find a collection of new composables, including horizontal and vertical “pagers” that let users swipe through full-screen content, and “flow” layouts that automatically wrap content across rows or columns.
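As a rough illustration of the two, here is a sketch assuming the Compose 1.4-era pager API (where `HorizontalPager` takes a `pageCount` parameter) and the experimental `FlowRow`; the function and its `labels` parameter are hypothetical:

```kotlin
import androidx.compose.foundation.ExperimentalFoundationApi
import androidx.compose.foundation.layout.ExperimentalLayoutApi
import androidx.compose.foundation.layout.FlowRow
import androidx.compose.foundation.pager.HorizontalPager
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable

@OptIn(ExperimentalFoundationApi::class, ExperimentalLayoutApi::class)
@Composable
fun PagerAndFlowSketch(labels: List<String>) {
    // Swipeable pages: one page is composed per item
    HorizontalPager(pageCount = labels.size) { page ->
        Text(text = labels[page])
    }
    // Flow layout: children wrap onto a new row when the current one fills up
    FlowRow {
        labels.forEach { label -> Text(text = label) }
    }
}
```

Both APIs were still marked experimental at the time, hence the `@OptIn` annotations.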
Jetpack Compose can also be used to create homescreen widgets for Android, using the “Jetpack Glance” library, available in Beta today.
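A minimal Glance widget, sketched against the 1.0 beta API as documented at the time; the class names here are hypothetical:

```kotlin
import android.content.Context
import androidx.glance.GlanceId
import androidx.glance.appwidget.GlanceAppWidget
import androidx.glance.appwidget.GlanceAppWidgetReceiver
import androidx.glance.appwidget.provideContent
import androidx.glance.text.Text

// The widget's UI is described with Glance composables (note: Glance's
// Text, not Compose UI's), which Glance translates into RemoteViews.
class HelloWidget : GlanceAppWidget() {
    override suspend fun provideGlance(context: Context, id: GlanceId) {
        provideContent {
            Text("Hello from Glance")
        }
    }
}

// Each widget is exposed to the launcher through a receiver,
// which must also be registered in AndroidManifest.xml.
class HelloWidgetReceiver : GlanceAppWidgetReceiver() {
    override val glanceAppWidget: GlanceAppWidget = HelloWidget()
}
```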
Meanwhile, Compose is expanding to a new set of screens today, thanks to the new Jetpack Compose for TV library. In the library, you’ll find just about everything you need to easily follow Google’s latest design guidelines for Android TV. For instance, there are easily navigable grids, rows, featured lists, tabs, side navigation, and more.
Today’s Google I/O keynotes also served as the first preview of this fall’s “Wear OS 4” release. It’s based on Android 13 and includes a wide variety of new features, enhancements, and more.
Thankfully, developers don’t have to wait to get their hands on Wear OS 4 and test their apps. A new Wear OS 4 emulator is available today in Android Studio Hedgehog.
One of the key highlights that Google (and Samsung) emphasized today is the introduction of a new “Watch Face Format” that completely redefines how watch faces are created for Wear OS. To significantly reduce power consumption, the new Watch Face Format is a “declarative XML format, meaning there will be no code in your watch face APK.” This means that Wear OS will handle all of the graphical optimizations to ensure your custom watch face runs as smoothly as possible across devices.
Better yet, all watch faces in the new format can be customized using the same tools as the built-in Wear OS designs, making for a more streamlined experience.
To get started, you can use Samsung’s Watch Face Studio tool to design a new face, and you can test the new Watch Face Format today with the Wear OS 4 emulator. New faces using the Watch Face Format can be published to the Play Store today, “ready for when the first Wear OS 4 watches are available.”
Flutter & Dart
Flutter is Google’s framework for building applications for a wide variety of platforms, including Android, iOS, web, Windows, macOS, Linux, and more. Today marks the release of Flutter 3.10, showing marked progress on the vision set forth earlier this year at the Flutter Forward event.
The biggest improvement of Flutter 3.10 is that Impeller, the new rendering engine that’s set to enable groundbreaking performance improvements for Flutter apps, is now enabled by default on iOS. To take advantage of Impeller on your own apps, simply update your Flutter SDK and see the difference for yourself.
Google is now focused on getting Impeller ready for use on Android, which is a trickier task as there are still devices that do not support the necessary Vulkan graphics APIs. Flutter has committed to supporting a “backward-compatible mode” for those devices, while full Impeller support for Android should be ready to preview in the near future.
Flutter on the web also got some major improvements as of this latest release: reductions in the size of the underlying CanvasKit renderer and smarter font handling add up to a 42% decrease in load times when “using a simulated cable connection.”
Meanwhile, Google has been working to bring support for Dart (and other garbage-collected languages) to WebAssembly. This effort is still in its very early stages, but when completed, it could lead to Flutter apps being even more efficient than before, with Google citing “up to 3× performance in execution speeds.” You can test your app against Flutter’s experimental WebAssembly support today and provide feedback.
Another major change with Flutter 3.10 is the introduction of version 3.0 of the Dart programming language. With this bump, all Dart code must now be written with sound null safety, guaranteeing that a non-nullable variable can never hold a null value. Google has been pushing the Dart community toward sound null safety for quite some time, but Dart 3.0 now makes it a requirement.
Dart 3 also includes some useful new features that should help make your code more readable. For example, “records” allow a function to return multiple values instead of just one, while “patterns” make it easy to validate data before using it.
Firebase is Google’s ever-expanding suite of tools designed to make it faster and easier to build meaningful apps with the power of Google Cloud. Staying true to that vision and following this year’s focus on AI-powered features, the hallmark new features for Firebase are a pair of extensions that give developers easy access to Google’s PaLM language model.
Using the “Chatbot with PaLM API” extension, you can “add intelligent chat capabilities to your app with Google’s latest generative AI technology.” Similarly, “Summarize Text with PaLM API” can quickly digest large pieces of text and offer shorter summaries.
Meanwhile, Google is also making it easier to publish your own Firebase Extensions. Previously, this was only available through a limited preview program. Between Google-built extensions, options from third-party partners, and contributions from the community, the company intends to provide a “thriving ecosystem of extensions that solve countless problems to benefit the entire Google developer community.”
Firebase Cloud Functions, a serverless framework for running your Node.js backend code on Google Cloud, is fully launching its second-generation service with numerous benefits:
- Up to 32 gigabytes of memory to handle demanding workloads
- Concurrency, which allows each instance of a function to handle up to 1,000 requests in parallel, resulting in fewer cold starts, improved latency, and lower cost
- Support for triggers to stitch together many Firebase products, including Firestore, so you can trigger 2nd gen functions based on writes to a Firestore collection
Additionally, you can now run Python code in Cloud Functions, including Python’s rich package ecosystem. This will enable more developers to try serverless for themselves.
Cloud Firestore, one of Firebase’s two database options, is getting a small, yet significant update. Upon updating to the latest SDK, you’ll be able to use “OR” queries when navigating through your data.
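With the Android Kotlin SDK, OR queries are expressed through the `Filter` class. A sketch of a single query matching either condition; the collection and field names are hypothetical:

```kotlin
import com.google.firebase.firestore.Filter
import com.google.firebase.firestore.ktx.firestore
import com.google.firebase.ktx.Firebase

fun fetchSaleOrTopRated() {
    val db = Firebase.firestore
    db.collection("products")
        .where(
            // Matches documents satisfying EITHER condition in one round trip,
            // instead of merging two separate queries client-side
            Filter.or(
                Filter.equalTo("onSale", true),
                Filter.greaterThan("rating", 4.5),
            )
        )
        .get()
        .addOnSuccessListener { snapshot ->
            snapshot.documents.forEach { doc -> println(doc.id) }
        }
}
```

Previously, a query like this required issuing both queries separately and de-duplicating the results in app code.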
App Check, a service designed to keep your app and its services safe from abuse, now protects Firebase’s advanced Authentication with Identity Platform. Game developers can also now access App Check SDKs for Unity and C++, protecting against common forms of cheating.
Firebase Hosting is expanding its support for dynamic web frameworks, gaining “experimental” compatibility with SvelteKit, Astro, and Nuxt. Plus, with the new Python support in Cloud Functions, Firebase Hosting will soon have “one command” deployment support for Django and Flask.
Hosting is also gaining “dynamic preview channels,” which allow web developers to share preview links of new releases of their website before deploying to production.
First announced last year, Firebase’s “App Quality Insights” menu in Android Studio is now getting a stable release as part of Android Studio Flamingo, bringing some additional features:
- The ability to close issues right from the IDE
- The ability to add notes for easier collaboration with your team
- The ability to filter issues by Crashlytics’ signals, like regressions, so you can better prioritize what to fix and when
On the web side of things, Google’s leading announcement is the new “WebGPU” API, which recently shipped in Chrome and is coming soon to Firefox. Already, workloads ported to the new API can run three times faster than they did on WebGL, but this is about more than just graphics.
The fact is that the AI and machine learning tools that have become so popular as of late require a significant amount of processing power. While some AI tools simply offload that work to the cloud, doing so comes at the cost of privacy. Instead, WebGPU gives developers more direct access to a device’s GPU, offering faster on-device processing for machine learning.
Google has also been among the companies hard at work on bringing support for garbage collection to WebAssembly. Beyond the benefits for users of app frameworks like Flutter, JetBrains has enabled a way for Kotlin developers to use some of the same code across Android and the web.
There’s also a new multi-company effort called “Baseline,” intended to help make it clearer which new features are available across all major browsers. Each year, a new “Baseline” will be established, listing “everything that’s new and compatible across all browsers” since the last Baseline.