Android Developers Blog
News and insights on the Android platform, developer tools, and events.
Posted by Matthew McCullough – VP of Product Management, Android Developer

Android 16 has officially reached Platform Stability today with Beta 3! That means the API surface is locked, the app-facing behaviors are final, and you can push your Android 16-targeted apps to the Play Store right now. Read on for coverage of new security and accessibility features in Beta 3.

Android delivers enhancements and new features year-round, and your feedback on the Android beta program plays a key role in helping Android continuously improve. The Android 16 developer site has more information about the beta, including how to get it onto devices and the release timeline. We're looking forward to hearing what you think, and thank you in advance for your continued help in making Android a platform that benefits everyone.

New in Android 16 Beta 3

At this late stage in the development cycle, there are only a few new things in the Android 16 Beta 3 release for you to consider when developing your apps.

Broadcast audio support

Pixel 9 devices on Android 16 Beta now support Auracast broadcast audio with compatible LE Audio hearing aids, part of Android's work to enhance audio accessibility. Built on the LE Audio standard, Auracast enables compatible hearing aids and earbuds to receive direct audio streams from public venues like airports, concerts, and classrooms. Our Keyword post has more on this technology.

Outline text for maximum text contrast

Users with low vision often have reduced contrast sensitivity, making it challenging to distinguish objects from their backgrounds. To help these users, Android 16 Beta 3 introduces outline text, which replaces high contrast text and draws a larger contrasting area around text to greatly improve legibility.

Android 16 also contains new AccessibilityManager APIs that let your app check whether this mode is enabled or register a listener for changes. This is primarily for UI toolkits like Compose to offer a similar visual experience. If you maintain a UI toolkit library, or your app performs custom text rendering that bypasses the android.text.Layout class, you can use this to know when outline text is enabled.

Text with enhanced contrast before and after Android 16's new outline text accessibility feature

Test your app with Local Network Protection

Android 16 Beta 3 adds the ability to test the Local Network Protection (LNP) feature, which is planned for a future major Android release. It gives users more control over which apps can access devices on their local network.

What's Changing?

Currently, any app with the INTERNET permission can communicate with devices on the user's local network. LNP will eventually require apps to request a specific permission to access the local network.

Beta 3: Opt-In and Test

In Beta 3, LNP is an opt-in feature. This is your chance to test your app and identify any parts that rely on local network access. Use this adb command to enable LNP restrictions for your app:

adb shell am compat enable RESTRICT_LOCAL_NETWORK <your.package.name>

After rebooting your device, your app's local network access is restricted. Test features that might interact with local devices (e.g., device discovery, media casting, connecting to IoT devices). Expect to see socket errors like EPERM or ECONNABORTED if your app tries to access the local network without the necessary permission.

See the developer guide for more information, including how to re-enable local network access.

This is a significant change, and we're committed to working with you to ensure a smooth transition.
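To see how your code actually behaves once the flag is enabled, you can run a small probe like the one below from a debug build. This is a minimal sketch: the host, port, and logging are placeholders for a device on your own network, and the call should run off the main thread.

import java.io.IOException
import java.net.InetSocketAddress
import java.net.Socket

// Attempts a plain TCP connection to a device on the local network.
// With RESTRICT_LOCAL_NETWORK enabled and no local network permission,
// the connect is expected to fail with errors such as EPERM or
// ECONNABORTED, surfaced here as IOExceptions.
fun probeLocalNetworkAccess(host: String = "192.168.1.50", port: Int = 8080) {
    try {
        Socket().use { socket ->
            socket.connect(InetSocketAddress(host, port), /* timeout ms */ 2_000)
            println("Local network reachable: connected to $host:$port")
        }
    } catch (e: IOException) {
        // Verify your app degrades gracefully here instead of crashing.
        println("Local network access failed: ${e.message}")
    }
}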
By testing and providing feedback now, you can help us build a more private and secure Android ecosystem.

Get your apps, libraries, tools, and game engines ready!

If you develop an SDK, library, tool, or game engine, it's even more important to prepare any necessary updates now to prevent your downstream app and game developers from being blocked by compatibility issues and to allow them to target the latest SDK features. Please let your developers know if updates are needed to fully support Android 16.

Testing involves installing your production app, or a test app that makes use of your library or engine, onto a device or emulator running Android 16 Beta 3, using Google Play or other means. Work through all of your app's flows and look for functional or UI issues. Review the behavior changes to focus your testing. Each release of Android contains platform changes that improve privacy, security, and overall user experience, and these changes can affect your apps. Here are several changes to focus on that apply even if you don't yet target Android 16:

- JobScheduler: JobScheduler quotas are enforced more strictly in Android 16; enforcement will occur if a job executes while the app is on top, when a foreground service is running, or in the active standby bucket. setImportantWhileForeground is now a no-op. The new stop reason STOP_REASON_TIMEOUT_ABANDONED occurs when we detect that the app can no longer stop the job.
- Broadcasts: Ordered broadcasts using priorities only work within the same process. Use other IPC if you need cross-process ordering.
- ART: If you use reflection, JNI, or any other means to access Android internals, your app might break. This has never been a best practice. Test thoroughly.
- Intents: Android 16 has stronger security against Intent redirection attacks. Test your Intent handling, and only opt out of the protections if absolutely necessary.
- 16 KB page size: If your app isn't 16 KB-page-size ready, you can use the new compatibility mode flag, but we recommend migrating to 16 KB for best performance.
- Accessibility: announceForAccessibility is deprecated; use the recommended alternatives.
- Bluetooth: Android 16 improves Bluetooth bond loss handling, which affects the way re-pairing occurs.

Other changes that will be impactful once your app targets Android 16:

- User Experience: Changes include the removal of the edge-to-edge opt-out, requiring migration or opt-out for predictive back, and disabling elegant font APIs.
- Core Functionality: Optimizations have been made to fixed-rate work scheduling.
- Large Screen Devices: Orientation, resizability, and aspect ratio restrictions will be ignored. Ensure your layouts support all orientations across a variety of aspect ratios.
- Health and Fitness: Changes have been implemented for health and fitness permissions.

Remember to thoroughly exercise the libraries and SDKs that your app uses during your compatibility testing. You may need to update to current SDK versions, or reach out to the developer for help if you encounter any issues.

Once you've published the Android 16-compatible version of your app, you can start the process of updating your app's targetSdkVersion. Review the behavior changes that apply when your app targets Android 16 and use the compatibility framework to help quickly detect issues.

Two Android API releases in 2025

This preview is for the next major release of Android, with a planned launch in Q2 of 2025, and we plan to have another release with new developer APIs in Q4.
This Q2 major release will be the only release in 2025 that includes behavior changes that could affect apps. The Q4 minor release will pick up feature updates, optimizations, and bug fixes; like our non-SDK quarterly releases, it will not include any intentional app-breaking behavior changes.

We'll continue to have quarterly Android releases. The Q1 and Q3 updates provide incremental updates to ensure continuous quality. We're putting additional energy into working with our device partners to bring the Q2 release to as many devices as possible.

There's no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, tied to the major API level.

Get started with Android 16

You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don't have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently on Android 16 Beta 2 or are already in the Android Beta program, you will be offered an over-the-air update to Beta 3.

While the API and behaviors are final, we're still looking for your feedback, so please report issues on the feedback page. The earlier we get your feedback, the better the chance we'll be able to address it in this or a future release.

For the best development experience with Android 16, we recommend that you use the latest feature drop of Android Studio (Meerkat). Once you're set up, here are some of the things you should do:

- Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.
- Test your current app for compatibility, learn whether your app is affected by changes in Android 16, and install your app onto a device or emulator running Android 16 to test it extensively.

We'll update the beta system images and SDK regularly throughout the Android 16 release cycle. Once you've installed a beta build, you'll automatically get future updates over-the-air for all later previews and Betas. For complete information on Android 16, please visit the Android 16 developer site.
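As a concrete starting point for the "compile against the new SDK" step, here is a minimal build.gradle.kts sketch. It assumes API level 36 for Android 16 and uses placeholder min and target values; confirm the level and your own constraints against the Android 16 developer site.

// Module-level build.gradle.kts – a sketch, not a drop-in config.
android {
    compileSdk = 36          // assumed API level for Android 16 at Platform Stability

    defaultConfig {
        minSdk = 24          // placeholder; keep your existing value
        targetSdk = 36       // raise only after verifying the behavior changes
    }
}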
Posted by Anirudh Dewani – Director, Android Developer Relations

We just dropped our Winter episode of #TheAndroidShow, on YouTube and on developer.android.com, and this time we were in Barcelona to give you the latest from Mobile World Congress and across the Android developer world. We unveiled a big update to Gemini in Android Studio (multimodal support, so you can translate image to code) and we shared some news for games developers ahead of GDC later this month. Plus we unpacked the latest Android hardware devices from our partners coming out of Mobile World Congress and recapped all of the latest in Android XR. Let's dive in!

Multimodal image-to-code, now available for Gemini in Android Studio

At every stage of the development lifecycle, Gemini in Android Studio has become your AI-powered companion. Today, we took the wraps off a new feature: Gemini in Android Studio now supports multimodal image to code, which lets you attach images directly to your prompts! This unlocks a wealth of new possibilities that improve collaboration and design workflows. You can try out this new feature by downloading the latest canary, Android Studio Narwhal, and read more about multimodal image attachment – now available for Gemini in Android Studio.

Building excellent games with better graphics and performance

Ahead of next week's Game Developers Conference (GDC), we announced new developer tools that will help improve gameplay across the Android ecosystem. We're making Vulkan the official graphics API on Android, enabling you to build immersive visuals, and we're enhancing the Android Dynamic Performance Framework (ADPF) to help you deliver longer, more stable gameplay sessions. Learn more about how we're building excellent games with better graphics and performance.

A deep dive into Android XR

Since we unveiled Android XR in December, it's been exciting to see developers preparing their apps for the next generation of Android XR devices. In the latest episode of #TheAndroidShow we dove into this new form factor and spoke with a developer who has already been building. Developing for this new platform leverages your existing Android development skills and familiar tools like Android Studio, Kotlin, and Jetpack libraries. The Android XR SDK Developer Preview is available now, complete with an emulator, so you can start experimenting and building XR experiences immediately! Visit developer.android.com/xr for more.

New Android foldables and tablets at Mobile World Congress

Mobile World Congress is a big moment for Android, with partners from around the world showing off their latest devices. And if you're already building adaptive apps, we wanted to share some of the cool new foldables and tablets that our partners released in Barcelona:

- OPPO: OPPO launched the Find N5, their slim 8.93 mm foldable with an 8.12" large screen, making it as compact or expansive as needed.
- Xiaomi: Xiaomi debuted the Xiaomi Pad 7 series. Xiaomi Pad 7 provides a crystal-clear display and, with the productivity accessories, users get a desktop-like experience with the convenience of a tablet.
- Lenovo: Lenovo showcased the Yoga Tab Plus, the latest powerful tablet from their lineup, designed to empower creativity and productivity.

These new devices are a great reason to build adaptive apps that scale across screen sizes and device types. Plus, Android 16 removes the ability for apps to restrict orientation and resizability at the platform level, so you'll want to prepare.
To help you get started, the Compose Material 3 adaptive library enables you to quickly and easily create layouts across all screen sizes while reducing the overall development cost; there's a short sketch of how it fits together at the end of this post.

Watch the Winter episode of #TheAndroidShow

That's a wrap on this quarter's episode of #TheAndroidShow. A special thanks to our co-hosts for the Fall episode, Simona Milanović and Alejandra Stamato! You can watch the full show on YouTube and on developer.android.com/events/show.

Have an idea for our next episode of #TheAndroidShow? It's your conversation with the broader community, and we'd love to hear your ideas for our next quarterly episode - you can let us know on X or LinkedIn.
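Here is that sketch: choosing a one- or two-pane layout from the current window size class. The artifact and API names below reflect the androidx.compose.material3.adaptive library and the WindowManager 1.4 breakpoint helpers described later in this digest, but treat the exact names as something to verify against current documentation; the two screen composables are hypothetical stand-ins for your own UI.

import androidx.compose.runtime.Composable
import androidx.compose.material3.adaptive.currentWindowAdaptiveInfo
import androidx.window.core.layout.WindowSizeClass

// Hypothetical screens standing in for your own list and list-detail layouts.
@Composable fun ArticleListDetail() { /* two-pane layout */ }
@Composable fun ArticleList() { /* single-pane layout */ }

@Composable
fun AdaptiveNewsScreen() {
    // WindowAdaptiveInfo exposes the window size class for the current window.
    val sizeClass = currentWindowAdaptiveInfo().windowSizeClass
    if (sizeClass.isWidthAtLeastBreakpoint(WindowSizeClass.WIDTH_DP_EXPANDED_LOWER_BOUND)) {
        ArticleListDetail()   // expanded width: show list and detail side by side
    } else {
        ArticleList()         // compact or medium width: single pane
    }
}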
Posted by Paris Hsu – Product Manager, Android Studio At every stage of the development lifecycle, Gemini in Android Studio has become your AI-powered companion, making it easier to build high quality apps. We are excited to announce a significant expansion: Gemini in Android Studio now supports multimodal inputs, which lets you attach images directly to your prompts! This unlocks a wealth of new possibilities that improve team collaboration and UI development workflows. You can try out this new feature by downloading the latest Android Studio canary. We’ve outlined a few use cases to try, but we’d love to hear what you think as we work through bringing this feature into future stable releases. Check it out: Image attachment - a new dimension of interaction We first previewed Gemini's multimodal capabilities at Google I/O 2024. This technology allows Gemini in Android Studio to understand simple wireframes, and transform them into working Jetpack Compose code. You'll now find an image attachment icon in the Gemini chat window. Simply attach JPEG or PNG files to your prompts and watch Gemini understand and respond to visual information. We've observed that images with strong color contrasts yield the best results. 1.1 New “Attach Image File” icon in chat window 1.2 Example multimodal response in chat We encourage you to experiment with various prompts and images. Here are a few compelling use cases to get you started: Rapid UI prototyping and iteration: Convert a simple wireframe or high-fidelity mock of your app's UI into working code. Diagram explanation and documentation: Gain deeper insights into complex architecture or data flow diagrams by having Gemini explain their components and relationships. UI troubleshooting: Capture screenshots of UI bugs and ask Gemini for solutions. Rapid UI prototyping and iteration Gemini's multimodal support lets you convert visual designs into functional UI code. Simply upload your image and use a clear prompt. It works whether you're working from your own sketches or from a designer mockup. Here’s an example prompt: "For this image provided, write Android Jetpack Compose code to make a screen that's as close to this image as possible. Make sure to include imports, use Material3, and document the code.” And then you can append any specific or additional instructions related to the image. 2. Example of generating Compose code from high-fidelity mock using Gemini in Android Studio (code output) For more complex UIs, refine your prompts to capture specific functionality. For instance, when converting a calculator mockup, adding "make the interactions and calculations work as you'd expect" results in a fully functional calculator: 3. Example of generating Compose code from wireframe via Gemini in Android Studio (code output) Note: this feature provides an initial design scaffold. It’s a good “first draft” and your edits and adjustments will be needed. Common refinements include ensuring correct drawable imports and importing icons. Consider the generated code a highly efficient starting point, accelerating your UI development workflow. Diagram explanation and documentation With Gemini's multimodal capabilities, you can also try uploading an image of your diagram and ask for explanations or documentation. Example prompt: Upload the Now in Android architecture diagram and say "Explain the components and data flow in this diagram" or “Write documentation about this diagram”. 4. 
Example of asking Gemini to help document the NowInAndroid architecture diagram UI troubleshooting Leverage Gemini's visual analysis to identify and resolve bugs quickly. Upload a screenshot of the problematic UI, and Gemini will analyze the image and suggest potential solutions. You can also include relevant code snippets for more precise assistance. In the example below, we used Compose UI check and found that the button is stretched too wide in tablet screens, so we took a screenshot and asked Gemini for solutions - it was able to leverage the window size classes to provide the right fix. 5. Example of fixing UI bugs using Image Attachment (code output) Download Android Studio today Download the latest Android Studio canary today to try the new multimodal features! As always, Google is committed to the responsible use of AI. Android Studio won't send any of your source code to servers without your consent. You can read more on Gemini in Android Studio's commitment to privacy. We appreciate any feedback on things you like or features you would like to see. If you find a bug, please report the issue and also check out known issues. Remember to also follow us on X, Medium, or YouTube for more Android development updates!
Posted by Matthew McCullough – VP of Product Management, Android We’re stepping up our multiplatform gaming offering with exciting news dropping at this year’s Game Developers Conference (GDC). We’re bringing users more games, more ways to play your games across devices, and improved gameplay. You can read all about the updates for users from The Keyword. At GDC, we’ll be diving into all of the latest games coming to Play, plus new developer tools that’ll help improve gameplay across the Android ecosystem. Today, we’re sharing a closer look at what’s new from Android. We’re making Vulkan the official graphics API on Android, enabling you to build immersive visuals, and we’re enhancing the Android Dynamic Performance Framework (ADPF) to help you deliver longer, more stable gameplays. Check out the video or keep reading below. More immersive visuals built on Vulkan, now the official graphics API These days, games require more processing power for realistic graphics and cutting-edge visuals. Vulkan is an API used for low level graphics that helps developers maximize the performance of modern GPUs, and today we’re making it the official graphics API for Android. This unlocks advanced features like ray tracing and multithreading for realistic and immersive gaming visuals. For example, Diablo Immortal used Vulkan to implement ray tracing, bringing the world of Sanctuary to life with spectacular special effects, from fiery explosions to icy blasts. Diablo Immortal running on Vulkan For casual games like Pokémon TCG Pocket, which draws players into the vibrant world of each Pokémon, Vulkan helps optimize graphics across a broad range of devices to ensure a smooth and engaging experience for every player. Pokémon TCG Pocket running on Vulkan We’re excited to announce that Android is transitioning to a modern, unified rendering stack with Vulkan at its core. Starting with our next Android release, more devices will use Vulkan to process all graphics commands. If your game is running on OpenGL, it will use ANGLE as a system driver that translates OpenGL to Vulkan. We recommend testing your game on ANGLE today to ensure it’s ready for the Vulkan transition. We’re also partnering with major game engines to make Vulkan integration easier. With Unity 6, you can configure Vulkan per device while older versions can access this setting through plugins. Over 45% of sessions from new games on Unity* use Vulkan, and we expect this number to grow rapidly. To simplify workflows further, we’re teaming up with the Samsung Austin Research Center to create an integrated GPU profiler toolchain for Vulkan and AI/ML optimization. Coming later this year, this tool will enable developers to make graphics, memory and compute workloads more efficient. Longer and smoother gameplay sessions with ADPF Android Dynamic Performance Framework (ADPF) enables developers to adjust between the device and game’s performance in real-time based on the thermal state of the device, and it’s getting a big update today to provide longer and smoother gameplay sessions. ADPF is designed to work across a wide range of devices including models like the Pixel 9 family and the Samsung S25 Series. We’re excited to see MMORPGs like Lineage W integrating ADPF to optimize performance on their core target devices. 
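If you're not on an engine with built-in ADPF support, the thermal side of ADPF can be wired up directly against the platform thermal headroom API it builds on. A minimal sketch follows; the quality tiers and headroom thresholds are arbitrary placeholders you would tune for your own game.

import android.content.Context
import android.os.Build
import android.os.PowerManager

// Lower rendering quality as the device approaches thermal throttling.
// getThermalHeadroom returns 1.0 when the device is about to throttle,
// and NaN when no forecast is available.
fun chooseQualityLevel(context: Context): Int {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.R) return 2 // no headroom API; assume high
    val powerManager = context.getSystemService(PowerManager::class.java)
    val headroom = powerManager.getThermalHeadroom(/* forecastSeconds = */ 10)
    return when {
        headroom.isNaN() -> 2    // unknown; keep current settings
        headroom < 0.75f -> 2    // plenty of headroom: high quality
        headroom < 0.95f -> 1    // getting warm: medium quality
        else -> 0                // close to throttling: low quality
    }
}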
Lineage W running on ADPF Here’s how we're enhancing ADPF with better performance and simplified integration: Stronger performance: Our collaboration with MediaTek, a leading chip supplier for Android devices, has brought enhanced stability to ADPF. Devices powered by MediaTek's MAGT system-on-chip solution can now fully utilize ADPF's performance optimization capabilities. Easier integration: Major game engines now offer built-in ADPF support with simple interfaces and default configurations. For advanced controls, developers can customize the ADPF behavior in real time. Performance optimization with more features in Play Console Once you’ve launched your game, Play Console offers the tools to monitor and improve your game's performance. We’re newly including Low Memory Killers (LMK) in Android vitals, giving you insight into memory constraints that can cause your game to crash. Android vitals is your one-stop destination for monitoring metrics that impact your visibility on the Play Store like slow sessions. You can find this information next to reach and devices which provides updates on your game's user distribution and notifies developers for device-specific issues. Check your Android vitals regularly to ensure high technical quality Bringing PC games to mobile, and pushing the boundaries of gaming We're launching a pilot program to simplify the process of bringing PC games to mobile. It provides support starting from Android game development all the way through publishing your game on Play. Starting this month, games like DREDGE and TABS Mobile are growing their mobile audience using this program. Many more are following in their footsteps this year, including Disco Elysium. You can express your interest to join the PC to mobile pilot program. New PC games are coming to mobile You can learn more about Android game development from our developer site. We can’t wait to see your title join the ranks of these amazing games built for Android. And if you’ll be at GDC next week, we’d love to say hello - stop by at the Moscone Center West Hall! * Source: Google internal data measuring games on Android 14 or later launched between August 2024 - February 2025.
Posted by Aurash Mahbod – VP and GM of Games on Google Play We’re stepping up our multiplatform gaming offering with exciting news dropping at this year’s Game Developers Conference (GDC). We’re bringing users more games, more ways to play your games across devices, and improved gameplay. You can read all about the updates for users from The Keyword. At GDC, we’ll be diving into all of the latest games coming to Play, plus new developer tools that’ll help improve gameplay across the Android ecosystem. Today, we’re sharing a closer look at what’s new from Play. We’re expanding our support for native PC games with a new earnback program and making Google Play Games on PC generally available this year with major upgrades. Check out the video or keep reading below. Google Play connects developers with over 2 billion monthly active players1 worldwide. Our tools and features help you engage these players across a wide range of devices to drive engagement and revenue. But we know the gaming landscape is constantly evolving. More and more players enjoy the immersive experiences on PC and want the flexibility to play their favorite games on any screen. That’s why we’re making even bigger investments in our PC gaming platform. Google Play Games on PC was launched to help mobile games reach more players on PC. Today, we’re expanding this support to native PC games, enabling more developers to connect with our massive player base on mobile. Expanding support for native PC games For games that are designed with a PC-first audience in mind, we’ve added even more helpful tools to our native PC program. Games like Wuthering Waves, Remember of Majesty, Genshin Impact, and Journey of Monarch have seen great success on the platform. Based on feedback from early access partners, we’re taking the program even further, with comprehensive support across game development, distribution, and growth on the platform. Develop with Play Games PC SDK: We're launching a dedicated SDK for native PC games on Google Play Games, providing powerful tools, such as easier in-app purchase integration and advanced security protection. Distribute through Play Console: We’ve made it easier for developers to manage both mobile and PC game builds in one place, simplifying the process of packaging PC versions, configuring releases, and managing store listings. Grow with our new earnback program: Bring your PC games to Google Play Games on PC to unlock up to 15% additional earnback.2 We’re opening up the program for all native PC games - including PC-only games - this year. Learn more about the eligibility requirements and how to join the program. Native PC games on Google Play Games Making PC an easy choice for mobile developers Bringing your game to PC unlocks a whole new audience of engaged players. To help maximize your discoverability, we’re making all mobile games available3 on PC by default with the option to opt out anytime. Games will display a playability badge indicating their compatibility with PC. "Optimized" means that a game meets all of our quality standards for a great gaming experience while "playable" means that the game meets the minimum requirements to play well on a PC. With the support of our new custom control mappings, many games can be playable right out of the box. Learn more about the playability criteria and how to optimize your games for PC today. Thousands of new games are added to Google Play Games To enhance our PC experience, we’ve made major upgrades to the platform. 
Now, gamers can enjoy the full Google Play Games on PC catalog on even more devices, including AMD laptops and desktops. We’re partnering with PC OEMs to make Google Play Games accessible right from the start menu on new devices starting this year. We’re also bringing new features for players to customize their gaming experiences. Custom controls is now available to help tailor their setup for optimal comfort and performance. Rolling out this month, we’re adding a handy game sidebar for quick adjustments and enabling multi-account and multi-instance support by popular demand. You can customize controls while playing Dye Hard - Color War Unlocking exclusive rewards on PC with Play Points To help you boost engagement, we’re also rolling out a more seamless Play Points4 experience on PC. Play Points balance is now easier to track and more rewarding, with up to 10x points boosters5 on Google Play Games. This means more opportunities for players to earn and redeem points for in-game items and discounts, enhancing the overall PC experience. Google Play Points is integrated seamlessly with Google Play Games Bringing new PC UA tools powered by Google Ads More developers are launching games on PC than ever, presenting an opportunity to reach a rapidly growing audience on PC. We want to make it easier for developers to reach great players with Google Ads. We’re working on a solution to help developers run user acquisition campaigns for both mobile emulated and native PC titles within Google Play Games on PC. We’re still in the early stages of partner testing, but we look forward to sharing more details later this year. Join the celebration! We're celebrating all that’s to come to Google Play Games on PC with players and developers. Take a look at the behind-the-scenes from our social channels and editorial features on Google Play. At GDC, you can dive into the complete gaming experience that is available on the best Android gaming devices. If you’ll be there, please stop by and say hello - we’re at the Moscone Center West Hall! 1 Source: Google internal data measuring monthly users who opened a game downloaded from the Play store. 2 Additional terms apply for the earnback program. 3 Your game’s visibility on Google Play Games on PC is determined by its playability badge. If your game is labeled as “Untested”, this means it will only appear if a user specifically searches for it in the Google Play Games on PC search menu. The playability badge may change once testing is complete. You can express interest in having Play evaluate your game for playability using this form. 4 Please see the Play Points help center for more information including country availability. 5 Offered for a limited time period. Additional terms apply.
Posted by Xiaodao Wu - Developer Relations Engineer

Jetpack WindowManager keeps getting better. WindowManager gives you tools to build adaptive apps that work seamlessly across all kinds of large screen devices. Version 1.4, which is stable now, introduces new features that make multi-window experiences even more powerful and flexible. While Jetpack Compose is still the best way to create app layouts for different screen sizes, 1.4 makes some big improvements to activity embedding, including activity stack pinning, pane expansion, and dialog full-screen dim. Multi-activity apps can easily take advantage of all these great features.

What's new in WindowManager 1.4

WindowManager 1.4 introduces a range of enhancements. Here are some of the highlights.

WindowSizeClass

We've updated the WindowSizeClass API to support custom values. We changed the API shape to make it easy and extensible to support custom values and add new values in the future. The high-level changes are as follows:

- Opened the constructor to take in minWidthDp and minHeightDp parameters so you can create your own window size classes
- Added convenience methods for checking breakpoint validity
- Deprecated WindowWidthSizeClass and WindowHeightSizeClass in favor of WindowSizeClass#isWidthAtLeastBreakpoint() and WindowSizeClass#isHeightAtLeastBreakpoint() respectively

Here's a migration example:

// old
val sizeClass = WindowSizeClass.compute(widthDp, heightDp)
when (sizeClass.widthSizeClass) {
    COMPACT -> doCompact()
    MEDIUM -> doMedium()
    EXPANDED -> doExpanded()
    else -> doDefault()
}

// new
val sizeClass = WindowSizeClass.BREAKPOINTS_V1
    .computeWindowSizeClass(widthDp, heightDp)
when {
    sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_EXPANDED_LOWER_BOUND) -> {
        doExpanded()
    }
    sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_MEDIUM_LOWER_BOUND) -> {
        doMedium()
    }
    else -> {
        doCompact()
    }
}

Some things to note in the new API:

- The order of the when branches should go from largest to smallest, to support custom values from developers or new values in the future
- The default branch should be treated as the smallest window size class

Activity embedding

Activity stack pinning

Activity stack pinning provides a way to keep an activity stack always on screen, no matter what else is happening in your app. This new feature lets you pin an activity stack to a specific window, so the top activity stays visible even when the user navigates to other parts of the app in a different window. This is perfect for things like live chats or video players that you want to keep on screen while users explore other content.

private fun pinActivityStackExample(taskId: Int) {
    val splitAttributes: SplitAttributes = SplitAttributes.Builder()
        .setSplitType(SplitAttributes.SplitType.ratio(0.66f))
        .setLayoutDirection(SplitAttributes.LayoutDirection.LEFT_TO_RIGHT)
        .build()

    val pinSplitRule = SplitPinRule.Builder()
        .setDefaultSplitAttributes(splitAttributes)
        .build()

    SplitController.getInstance(applicationContext).pinTopActivityStack(taskId, pinSplitRule)
}

Pane expansion

The new pane expansion feature, also known as interactive divider, lets you create a visual separation between two activities in split-screen mode. You can make the pane divider draggable so users can resize the panes – and the activities in the panes – on the fly. This gives users control over how they want to view the app's content.
val splitAttributesBuilder: SplitAttributes.Builder = SplitAttributes.Builder()
    .setSplitType(SplitAttributes.SplitType.ratio(0.33f))
    .setLayoutDirection(SplitAttributes.LayoutDirection.LEFT_TO_RIGHT)

if (WindowSdkExtensions.getInstance().extensionVersion >= 6) {
    splitAttributesBuilder.setDividerAttributes(
        DividerAttributes.DraggableDividerAttributes.Builder()
            .setColor(getColor(context, R.color.divider_color))
            .setWidthDp(4)
            .setDragRange(
                DividerAttributes.DragRange.DRAG_RANGE_SYSTEM_DEFAULT)
            .build()
    )
}
val splitAttributes: SplitAttributes = splitAttributesBuilder.build()

Dialog full-screen dim

WindowManager 1.4 gives you more control over how dialogs dim the background. With dialog full-screen dim, you can choose to dim just the container where the dialog appears or the entire task window for a unified UI experience. The entire app window dims by default when a dialog opens (see EmbeddingConfiguration.DimAreaBehavior.ON_TASK). To dim only the container of the activity that opened the dialog, use EmbeddingConfiguration.DimAreaBehavior.ON_ACTIVITY_STACK. This gives you more flexibility in designing dialogs and makes for a smoother, more coherent user experience. Temu is among the first developers to integrate this feature; the full-screen dialog dim has reduced invalid screen touches by about 5%.

Customised shopping cart reminder with dialog full-screen dim in Temu.

Enhanced posture support

WindowManager 1.4 makes building apps that work flawlessly on foldables straightforward by providing more information about the physical capabilities of the device. The new WindowInfoTracker#supportedPostures API lets you know if a device supports tabletop mode, so you can optimize your app's layout and features accordingly.

val currentSdkVersion = WindowSdkExtensions.getInstance().extensionVersion
val message = if (currentSdkVersion >= 6) {
    val supportedPostures = WindowInfoTracker.getOrCreate(LocalContext.current).supportedPostures
    buildString {
        append(supportedPostures.isNotEmpty())
        if (supportedPostures.isNotEmpty()) {
            append(" ")
            append(
                supportedPostures.joinToString(
                    separator = ",", prefix = "(", postfix = ")"))
        }
    }
} else {
    "N/A (WindowSDK version 6 is needed, current version is $currentSdkVersion)"
}

Other API changes

WindowManager 1.4 includes several API changes and additions to support the new features. Notable changes include:

- Stable and no longer experimental APIs: ActivityEmbeddingController#invalidateVisibleActivityStacks, ActivityEmbeddingController#getActivityStack, SplitController#updateSplitAttributes
- API added to set an activity embedding animation background: SplitAttributes.Builder#setAnimationParams
- API to get updated WindowMetrics information: ActivityEmbeddingController#embeddedActivityWindowInfo
- API to finish all activities in an activity stack: ActivityEmbeddingController#finishActivityStack

How to get started

To start using Jetpack WindowManager 1.4 in your Android projects, update your app dependencies in build.gradle.kts to the latest stable version:

dependencies {
    implementation("androidx.window:window:1.4.0")
    // or, if you're using the WindowManager testing library:
    testImplementation("androidx.window:window-testing:1.4.0")
}

Happy coding!
Posted by Brenda Shaw – Health & Home Partner Engineering Technical Writer

At Google, we are committed to empowering developers as they build exceptional health and fitness experiences. Core to that commitment is Health Connect, an Android platform that allows health and fitness apps to store and share the same on-device data. Android devices running Android 14, or devices that have the pre-installed APK, will automatically have Health Connect by default in Settings. For pre-Android 14 devices, Health Connect is available for download from the Play Store.

We're excited to announce significant Health Connect updates like the Jetpack SDK Beta, new data types, and new permissions that will enable richer, more insightful app functionality.

Jetpack SDK is now in Beta

We are excited to announce the beta release of our Jetpack SDK! Since its initial release, we've dedicated significant effort to improving data completeness, with a particular focus on enriching the metadata associated with each data point. In the latest SDK, we're introducing two key changes designed to ensure richer metadata and unlock new possibilities for you and your users:

Make Recording Method Mandatory

To deliver more accurate and insightful data, the Beta introduces a requirement to specify one of four recording methods when writing data to Health Connect. This ensures increased data clarity, enhanced data analysis, and an improved user experience.

If your app currently does not set metadata when creating a record:

Before

StepsRecord(
    count = 888,
    startTime = START_TIME,
    endTime = END_TIME,
) // error: metadata is not provided

After

StepsRecord(
    count = 888,
    startTime = START_TIME,
    endTime = END_TIME,
    metadata = Metadata.manualEntry()
)

If your app currently calls the Metadata constructor when creating a record:

Before

StepsRecord(
    count = 888,
    startTime = START_TIME,
    endTime = END_TIME,
    metadata = Metadata(
        clientRecordId = "client id",
        recordingMethod = RECORDING_METHOD_MANUAL_ENTRY,
    ), // error: Metadata constructor not found
)

After

StepsRecord(
    count = 888,
    startTime = START_TIME,
    endTime = END_TIME,
    metadata = Metadata.manualEntry(clientRecordId = "client id"),
)

Make Device Type Mandatory

You will be required to specify the device type when creating a Device object. A Device object will be required for Automatically (RECORDING_METHOD_AUTOMATICALLY_RECORDED) or Actively (RECORDING_METHOD_ACTIVELY_RECORDED) recorded data.

Before

Device() // error: type not provided

After

Device(type = Device.Companion.TYPE_PHONE)

We believe these updates will significantly improve the quality of data within your applications and empower you to create more insightful user experiences. We encourage you to explore the Jetpack SDK Beta, review the updated Metadata page, and familiarize yourself with these changes.

New background reads permission

To enable richer, background-driven health and fitness experiences while maintaining user trust, Health Connect now features a dedicated background reads permission. This permission allows your app to access Health Connect data while running in the background, provided the user grants explicit consent. Users retain full control, with the ability to manage or revoke this permission at any time via Health Connect settings.

Let your app read health data even in the background with the new Background Reads permission. Declare the following permission in your manifest file:

...
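Alongside the manifest declaration, the capability is requested at runtime through the Jetpack permission APIs. The sketch below is hedged: the feature and permission constants (FEATURE_READ_HEALTH_DATA_IN_BACKGROUND, PERMISSION_READ_HEALTH_DATA_IN_BACKGROUND) are assumptions based on recent androidx.health.connect.client releases, so check them against the SDK version you ship.

import androidx.activity.ComponentActivity
import androidx.health.connect.client.HealthConnectClient
import androidx.health.connect.client.HealthConnectFeatures
import androidx.health.connect.client.PermissionController
import androidx.health.connect.client.permission.HealthPermission
import androidx.health.connect.client.records.StepsRecord

class StepsActivity : ComponentActivity() {

    // Register the permission contract while the activity is being created.
    private val permissionLauncher = registerForActivityResult(
        PermissionController.createRequestPermissionResultContract()
    ) { granted ->
        // Inspect `granted` to see whether background reads were approved.
    }

    fun askForBackgroundReads() {
        val client = HealthConnectClient.getOrCreate(this)

        // Only request background reads when the installed Health Connect
        // version reports the feature as available.
        val backgroundAvailable = client.features.getFeatureStatus(
            HealthConnectFeatures.FEATURE_READ_HEALTH_DATA_IN_BACKGROUND
        ) == HealthConnectFeatures.FEATURE_STATUS_AVAILABLE

        val permissions = buildSet {
            add(HealthPermission.getReadPermission(StepsRecord::class))
            if (backgroundAvailable) {
                add(HealthPermission.PERMISSION_READ_HEALTH_DATA_IN_BACKGROUND)
            }
        }
        permissionLauncher.launch(permissions)
    }
}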
Use the Feature Availability API to check if the user has the background read feature available, according to the version of Health Connect they have on their devices. Allow your app to read historic data By default, when granted read permission, your app can access historical data from other apps for the preceding 30 days from the initial permission grant. To enable access to data beyond this 30-day window, Health Connect introduces the PERMISSION_READ_HEALTH_DATA_HISTORY permission. This allows your app to provide new users with a comprehensive overview of their health and wellness history. Users are in control of their data with both background reads and history reads. Both capabilities require developers to declare the respective permissions, and users must grant the permission before developers can access their data. Even after granting permission, users have the option of revoking access at any time from Health Connect settings. Additional data access and types Health Connect now offers expanded data types, enabling developers to build richer user experiences and provide deeper insights. Check out the following new data types: Exercise Routes allows users to share exercise routes with other apps for a seamless synchronized workout. By allowing users to share all routes or one route, their associated exercise activities and maps for their workouts will be synced with the fitness apps of their choice. The skin temperature data type measures peripheral body temperature unlocking insights around sleep quality, reproductive health, and the potential onset of illness. Health Connect also provides a planned exercise data type to enable training apps to write training plans and workout apps to read training plans. Recorded exercises (workouts) can be read back for personalized performance analysis to help users achieve their training goals. Access granular workout data, including sessions, blocks, and steps, for comprehensive performance analysis and personalized feedback. These new data types empower developers to create more connected and insightful health and fitness applications, providing users with a holistic view of their well-being. To learn more about all new APIs and bug fixes, check out the full release notes. Get started with the Health Connect Jetpack SDK Whether you are just getting started with Health Connect or are looking to implement the latest features, there are many ways to learn more and have your voice heard. Subscribe to our newsletter: Stay up-to-date with the latest news, announcements, and resources from Google Health and Fitness. Subscribe to our Health and Fitness Google Developer Newsletter and get the latest updates delivered straight to your inbox. Check out our Health Connect developer guide: The Health and Fitness Developer Center is your one-stop-shop for building health and fitness apps on Android - including a robust guide for getting started with Health Connect. Report an issue: Encountered a bug or technical issue? Report it directly to our team through the Issue Tracker so we can investigate and resolve it. You can also request a feature or provide feedback with Issue Tracker. We can’t wait to see what you create!
Posted by Tyler Beneke – Product Manager, and Lucas Silva – Software Engineer

Widgets are now available on your Pixel Tablet lock screens! Lock screen widgets empower users to create a personalized, always-on experience. Whether you want to easily manage smart home devices like lights and thermostats, or build dashboards for quick access and control of vital information, this blog post will answer your key questions about lock screen widgets on Android. Read on to discover when, where, how, and why they'll be on a lock screen near you.

Lock screen widgets, in clockwise order: Clock, Weather, Stocks, Timers, and the Google Home app. In the top right is a customization call-to-action.

Q: When will lock screen widgets be available?
A: Lock screen widgets will be available in AOSP for tablets and mobile starting with the release after Android 16 (QPR1). This update is scheduled to be pushed to AOSP in late Summer 2025. Lock screen widgets are already available on Pixel Tablets.

Q: Are there any specific requirements for widgets to be allowed on the lock screen?
A: No, widgets allowed on the lock screen have the same requirements as any other widgets. Widgets on the lock screen should follow the same quality guidelines as home screen widgets, including quality, sizing, and configuration. If a widget launches an activity from the lock screen, users must authenticate to launch the activity, or the activity should declare android:showWhenLocked="true" in its manifest entry.

Q: How can I test my widget on the lock screen?
A: Currently, lock screen widgets can be tested on Pixel Tablet devices. You can enable lock screen widgets and add your widget.

Q: Which widgets can be displayed in this experience?
A: All widgets are compatible with the lock screen widget experience. To prioritize user choice and customization, we've made all widgets available. For the best experience, please make sure your widget supports dynamic color and dynamic resizing. Lock screen widgets are sized to approximately 4 cells wide by 3 cells tall on the launcher, but exact dimensions vary by device.

Q: Can my widget opt out of the experience?
A: Apps can choose to restrict the use of their widgets on the lock screen using an opt-out API. To opt out, use the widget category "not_keyguard" in your appwidget info XML file. Place this file in an xml-36 resource folder to ensure backwards compatibility.

Q: Are there any CDD requirements specifically for lock screen widgets?
A: No, there are no specific CDD requirements solely for lock screen widgets. However, it's crucial to ensure that any widgets and screensavers that integrate with the framework adhere to the standard CDD requirements for those features.

Q: Will lock screen widgets be enabled on existing devices?
A: Yes, lock screen widgets launched on the Pixel Tablet in 2024. Other device manufacturers may update their devices as well once the feature is available in AOSP.

Q: Does the device need to be docked to use lock screen widgets?
A: The mechanism that triggers the lock screen widget experience is customizable by the OEM. For example, OEMs can choose to use charging or docking status as triggers. Third-party OEMs will need to implement their own posture detection if desired.

Q: Can OEMs set their own default widgets?
A: Yes! Hardware providers can pre-set and automatically display default widgets.

Q: Can OEMs customize the user interface for lock screen widgets?
A: Customization of the lock screen widget user interface by OEMs is not supported in the initial release. All lock screen widgets will have the same developer experience on all devices. Lock screen widgets are poised to give your users new ways to interact with your app on their devices. Today you can leverage your existing widget designs and experiences on the lock screen with Pixel Tablets. To learn more about building widgets, please check out our resources on developer.android.com Widgets: https://developer.android.com/develop/ui/views/appwidgets/overview Widget Design: https://developer.android.com/design/ui/widget Jetpack Glance: https://developer.android.com/develop/ui/compose/glance This blog post is part of our series: Spotlight Week on Widgets, where we provide resources—blog posts, videos, sample code, and more—all designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.
Posted by Nevin Mital – Developer Relations Engineer, and Kristina Simakova – Engineering Manager

Android users have demonstrated an increasing desire to create, personalize, and share video content online, whether to preserve their memories or to make people laugh. As such, media editing is a cornerstone of many engaging Android apps, and historically developers have often relied on external libraries to handle operations such as trimming and resizing. While these solutions are powerful, integrating and managing external library dependencies can introduce complexity and lead to challenges with managing performance and quality. The Jetpack Media3 Transformer APIs offer a native Android solution that streamlines media editing with fast performance, extensive customizability, and broad device compatibility. In this blog post, we'll walk through some of the most common editing operations with Transformer and discuss its performance.

Getting set up with Transformer

To get started with Transformer, check out our Getting Started documentation for details on how to add the dependency to your project and a basic understanding of the workflow when using Transformer. In a nutshell, you'll:

- Create one or many MediaItem instances from your video file(s), then
- Apply item-specific edits to them by building an EditedMediaItem for each MediaItem,
- Create a Transformer instance configured with settings applicable to the whole exported video, and finally
- Start the export to save your applied edits to a file.

Aside: You can also use a CompositionPlayer to preview your edits before exporting, but this is out of scope for this blog post, as this API is still a work in progress. Please stay tuned for a future post!

Here's what this looks like in code:

val mediaItem = MediaItem.Builder().setUri(mediaItemUri).build()

val editedMediaItem = EditedMediaItem.Builder(mediaItem).build()

val transformer = Transformer.Builder(context)
    .addListener(/* Add a Transformer.Listener instance here for completion events */)
    .build()

transformer.start(editedMediaItem, outputFilePath)

Transcoding, Trimming, Muting, and Resizing with the Transformer API

Let's now take a look at four of the most common single-asset media editing operations, starting with Transcoding.

Transcoding is the process of re-encoding an input file into a specified output format. For this example, we'll request the output to have video in HEVC (H265) and audio in AAC. Starting with the code above, here are the lines that change:

val transformer = Transformer.Builder(context)
    .addListener(...)
    .setVideoMimeType(MimeTypes.VIDEO_H265)
    .setAudioMimeType(MimeTypes.AUDIO_AAC)
    .build()

Many of you may already be familiar with FFmpeg, a popular open-source library for processing media files, so we'll also include FFmpeg commands for each example to serve as a helpful reference. Here's how you can perform the same transcoding with FFmpeg:

$ ffmpeg -i $inputVideoPath -c:v libx265 -c:a aac $outputFilePath

The next operation we'll try is Trimming. Specifically, we'll set Transformer up to trim the input video from the 3 second mark to the 8 second mark, resulting in a 5 second output video.
Starting again from the code in the "Getting set up" section above, here are the lines that change:

// Configure the trim operation by adding a ClippingConfiguration to
// the media item
val clippingConfiguration = MediaItem.ClippingConfiguration.Builder()
    .setStartPositionMs(3000)
    .setEndPositionMs(8000)
    .build()

val mediaItem = MediaItem.Builder()
    .setUri(mediaItemUri)
    .setClippingConfiguration(clippingConfiguration)
    .build()

// Transformer also has a trim optimization feature we can enable.
// This will prioritize Transmuxing over Transcoding where possible.
// See more about Transmuxing further down in this post.
val transformer = Transformer.Builder(context)
    .addListener(...)
    .experimentalSetTrimOptimizationEnabled(true)
    .build()

With FFmpeg:

$ ffmpeg -ss 00:00:03 -i $inputVideoPath -t 00:00:05 $outputFilePath

Next, we can mute the audio in the exported video file.

val editedMediaItem = EditedMediaItem.Builder(mediaItem)
    .setRemoveAudio(true)
    .build()

The corresponding FFmpeg command:

$ ffmpeg -i $inputVideoPath -c copy -an $outputFilePath

And for our final example, we'll try resizing the input video by scaling it down to half its original height and width.

val scaleEffect = ScaleAndRotateTransformation.Builder()
    .setScale(0.5f, 0.5f)
    .build()

val editedMediaItem = EditedMediaItem.Builder(mediaItem)
    .setEffects(
        /* audio */ Effects(emptyList(), /* video */ listOf(scaleEffect))
    )
    .build()

An FFmpeg command could look like this:

$ ffmpeg -i $inputVideoPath -filter:v scale=w=trunc(iw/4)*2:h=trunc(ih/4)*2 $outputFilePath

Of course, you can also combine these operations to apply multiple edits on the same video, but hopefully these examples serve to demonstrate that the Transformer APIs make configuring these edits simple.

Transformer API Performance results

Here are some benchmarking measurements for each of the 4 operations taken with the Stopwatch API, running on a Pixel 9 Pro XL device. (Note that performance for operations like these can depend on a variety of reasons, such as the current load the device is under, so the numbers below should be taken as rough estimates.)

Input video format: 10s 720p H264 video with AAC audio
- Transcoding to H265 video and AAC audio: ~1300ms
- Trimming video to 00:03-00:08: ~2300ms
- Muting audio: ~200ms
- Resizing video to half height and width: ~1200ms

Input video format: 25s 360p VP8 video with Vorbis audio
- Transcoding to H265 video and AAC audio: ~3400ms
- Trimming video to 00:03-00:08: ~1700ms
- Muting audio: ~1600ms
- Resizing video to half height and width: ~4800ms

Input video format: 4s 8k H265 video with AAC audio
- Transcoding to H265 video and AAC audio: ~2300ms
- Trimming video to 00:03-00:08: ~1800ms
- Muting audio: ~2000ms
- Resizing video to half height and width: ~3700ms

One technique Transformer uses to speed up editing operations is by prioritizing transmuxing for basic video edits where possible. Transmuxing refers to the process of repackaging video streams without re-encoding, which ensures high-quality output and significantly faster processing times. When not possible, Transformer falls back to transcoding, a process that involves first decoding video samples into raw data, then re-encoding them for storage in a new container. Here are some of these differences:

Transmuxing
- Transformer's preferred approach when possible - a quick transformation that preserves elementary streams.
- Only applicable to basic operations, such as rotating, trimming, or container conversion.
- No quality loss or bitrate change.
Transcoding
- Transformer's fallback approach in cases when transmuxing isn't possible - involves decoding and re-encoding elementary streams.
- More extensive modifications to the input video are possible.
- Loss in quality due to re-encoding, but can achieve a desired bitrate target.

We are continuously implementing further optimizations, such as the recently introduced experimentalSetTrimOptimizationEnabled setting that we used in the Trimming example above. A trim is usually performed by re-encoding all the samples in the file, but since encoded media samples are stored chronologically in their container, we can improve efficiency by only re-encoding the group of pictures (GOP) between the start point of the trim and the first keyframe at or after the start point, then stream-copying the rest. Since we only decode and encode a fixed portion of any file, the encoding latency is roughly constant, regardless of the input video duration. For long videos, this improved latency is dramatic. The optimization relies on being able to stitch part of the input file with newly encoded output, which means that the encoder's output format and the input format must be compatible. If the optimization fails, Transformer automatically falls back to normal export.

What's next?

As part of Media3, Transformer is a native solution with low integration complexity, is tested on and ensures compatibility with a wide variety of devices, and is customizable to fit your specific needs. To dive deeper, you can explore the Media3 Transformer documentation, run our sample apps, or learn how to complement your media editing pipeline with Jetpack Media3. We've already seen app developers benefit greatly from adopting Transformer, so we encourage you to try it out yourself to streamline your media editing workflows and enhance your app's performance!
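One piece the setup snippet earlier in this post leaves as a placeholder is the completion listener. Here's a minimal sketch of filling it in; the method signatures follow recent Media3 releases, so double-check them against the Transformer version you depend on.

import android.content.Context
import androidx.media3.transformer.Composition
import androidx.media3.transformer.ExportException
import androidx.media3.transformer.ExportResult
import androidx.media3.transformer.Transformer

// Builds a Transformer whose listener reacts to export completion and failure.
fun buildTransformer(context: Context): Transformer =
    Transformer.Builder(context)
        .addListener(object : Transformer.Listener {
            override fun onCompleted(composition: Composition, exportResult: ExportResult) {
                // Export succeeded; exportResult carries details such as output
                // file size and average bitrates.
            }

            override fun onError(
                composition: Composition,
                exportResult: ExportResult,
                exportException: ExportException
            ) {
                // Export failed; exportException.errorCode indicates why,
                // so you can decide whether to retry or surface an error.
            }
        })
        .build()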
Posted by Thomas Ezan Sr. – Android Developer Relations Engineer (@lethargicpanda)

Imagen 3, our most advanced image generation model, is now available through Vertex AI in Firebase, making it even easier to integrate it into your Android apps. Designed to generate well-composed images with exceptional details, reduced artifacts, and rich lighting, Imagen 3 represents a significant leap forward in image generation capabilities.

Image generated by Imagen 3 with the prompt: "Shot in the style of DSLR camera with the polarizing filter. A photo of two hot air balloons over the unique rock formations in Cappadocia, Turkey. The colors and patterns on these balloons contrast beautifully against the earthy tones of the landscape below. This shot captures the sense of adventure that comes with enjoying such an experience."

Image generated by Imagen 3 with the prompt: "A weathered, wooden mech robot covered in flowering vines stands peacefully in a field of tall wildflowers, with a small blue bird resting on its outstretched hand. Digital cartoon, with warm colors and soft lines. A large cliff with a waterfall looms behind."

Imagen 3 unlocks exciting new possibilities for Android developers. Generated visuals can adapt to the content of your app, creating a more engaging user experience. For instance, your users can generate custom artwork to enhance their in-app profile. Imagen can also improve your app's storytelling by bringing its narratives to life with delightful personalized illustrations.

You can experiment with image prompts in Vertex AI Studio, and learn how to improve your prompts by reviewing the prompt and image attribute guide.

Get started with Imagen 3

The integration of Imagen 3 is similar to adding Gemini access via Vertex AI in Firebase. Start by adding the Gradle dependencies to your Android project:

dependencies {
    implementation(platform("com.google.firebase:firebase-bom:33.10.0"))
    implementation("com.google.firebase:firebase-vertexai")
}

Then, in your Kotlin code, create an ImagenModel instance by passing the model name and, optionally, a model configuration and safety settings:

val imageModel = Firebase.vertexAI.imagenModel(
    modelName = "imagen-3.0-generate-001",
    generationConfig = ImagenGenerationConfig(
        imageFormat = ImagenImageFormat.jpeg(compressionQuality = 75),
        addWatermark = true,
        numberOfImages = 1,
        aspectRatio = ImagenAspectRatio.SQUARE_1x1
    ),
    safetySettings = ImagenSafetySettings(
        safetyFilterLevel = ImagenSafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
        personFilterLevel = ImagenPersonFilterLevel.ALLOW_ADULT
    )
)

Finally, generate the image by calling generateImages:

val imageResponse = imageModel.generateImages(
    prompt = "An astronaut riding a horse"
)

Retrieve the generated image from the imageResponse and display it as a bitmap as follows:

val image = imageResponse.images.first()
val uiImage = image.asBitmap()

Next steps

Explore the comprehensive Firebase documentation for detailed API information.

Access to Imagen 3 using Vertex AI in Firebase is currently in Public Preview, giving you an early opportunity to experiment and innovate. For pricing details, please refer to the Vertex AI in Firebase pricing page. Start experimenting with Imagen 3 today! We're looking forward to seeing how you'll leverage Imagen 3's capabilities to create truly unique, immersive and personalized Android experiences.
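If your screen is built with Jetpack Compose, one minimal way to put that bitmap on screen is sketched below; it assumes standard Compose foundation artifacts are already on your classpath, and asBitmap() returns a regular android.graphics.Bitmap, so a View-based ImageView.setImageBitmap(...) works just as well.

import android.graphics.Bitmap
import androidx.compose.foundation.Image
import androidx.compose.runtime.Composable
import androidx.compose.ui.graphics.asImageBitmap

// Renders the bitmap returned by imageResponse.images.first().asBitmap().
@Composable
fun GeneratedImage(bitmap: Bitmap) {
    Image(
        bitmap = bitmap.asImageBitmap(),
        contentDescription = "Image generated by Imagen 3"
    )
}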
Posted by Anirudh Dewani, Director – Android Developer Relations In just a few days, on Thursday, March 13 at 10AM PT, we’ll be dropping our winter episode of #TheAndroidShow, on YouTube and on developer.android.com! Mobile World Congress - the annual event in Barcelona where Android device makers show off their latest devices, kicked off yesterday. In our winter episode we’ll take a look at these foldables, tablets and wearables and tell you what you need to get building. Plus we’ve got some news to share, like a new update for Gemini in Android Studio and some new goodies for games developers ahead of the Game Developer Conference (GDC) in San Francisco later this month. And of course, with the launch of Android XR in December, we’ll also be taking a look at how to get building there. It’s a packed show, and you don’t want to miss it! Some new Android foldables and tablets, at Mobile World Congress Mobile World Congress is a big moment for Android, with partners from around the world showing off their latest devices. And if you’re already building adaptive apps, we wanted to share some of the cool new foldable and tablets that our partners released in Barcelona: OPPO: OPPO launched their Find N5, their slim 8.93mm foldable with a 8.12” large screen - making it as compact or expansive as needed. Xiaomi: Xiaomi debuted the Xiaomi Pad 7 series. Xiaomi Pad 7 provides a crystal-clear display and, with the productivity accessories, users get a desktop-like experience with the convenience of a tablet. Lenovo: Lenovo showcased their Yoga Tab Plus, the latest powerful tablet from their lineup designed to empower creativity and productivity. These new devices are a great reason to build adaptive apps that scale across screen sizes and device types. Plus, Android 16 removes the ability for apps to restrict orientation and resizability at the platform level, so you’ll want to prepare. To help you get started, the Compose Material 3 adaptive library enables you to quickly and easily create layouts across all screen sizes while reducing the overall development cost. Tune in to #TheAndroidShow: March 13 at 10AM PT These new devices are just one of the many things we’ll cover in our winter episode, you don’t want to miss it! If you watch live on YouTube, we’ll have folks standing by to answer your questions in the comments. See you on March 13 on YouTube or at developer.android.com/events/show!
Posted by Summers Pitman – Developer Relations Engineer, and Ivy Knight – Senior Design Advocate Widgets can bring more productive, delightful and customized experiences to users' home screens, but they can be tricky to design well enough to deliver a high quality, focused experience. In this blog post, we’ll cover how much easier Widget Canonical Layouts can make this process. But, what is a Canonical Layout? It is a common layout pattern that works for various screen sizes. You can use these ready-to-use compositions as a starting point to help layouts adapt for common use cases and screen sizes. Widgets also provide Canonical Layouts to get started crafting higher quality widgets. The Widget Canonical Layouts Figma resource makes it easy to preview your widget content in multiple breakpoints and layout types. Join me in our Figma design resource to explore how they can simplify designing a widget for one of our sample apps, JetNews. 1. Content to adapt JetNews is a sample news reading app, built with Jetpack Compose. With the experience in mind, the primary user journey is reading articles. A widget should be glanceable, so displaying a full article would not be a good use case. Since they are timely news articles, surfacing newer content could be more productive for users. We’ll want to give a condensed version of each article, similar to the app home feed. The addition of a bookmark action would allow the user to save an article and read it later in the full app experience. 2. Choosing a Canonical Layout With our content and user journey established, we’ll take a glance at which canonical layouts would make sense. We want to show at least a few new articles with a headline, truncated description, and possible thumbnail, which brings us to the Image + Text Grid layout and maybe the List layout. Within our new Figma Widget Canonical Layout preview, we can add in some mock content to check out how these layouts will look in various sizes. 3. Adapting to breakpoint sizes Now that we’ve previewed our content in both the grid and list layouts, we don’t have to choose between just one! The grid layout better displays our content at larger sizes, where we have some more room to take advantage of multiple columns and a larger thumbnail image, while the list works nicely for smaller sizes, giving a one-column layout with a smaller thumbnail. But we can adapt even further to allow the user to have more resizing flexibility and anticipate different OEM grid sizing. For JetNews, we decided on an additional extra small layout to accommodate a smaller grid size and vertical height while still using the List layout. For this size I decided to remove the thumbnail altogether to give the title and action more space. Consider making these in-between design tweaks as needed (between any of the breakpoints); they can be applied as general rules in your widget designs. Here are a few guidelines to borrow: Establish a content hierarchy for what to hide as the widget shrinks. Use a type scale so the type scales consistently. Create some parameters for image scaling with aspect ratios and cropping techniques. Use component presentation changes. For example, the title bar’s FAB can be reduced to a standard icon. Last, I’ll swap the app icon, round up all the breakpoint sizes, and provide an option with brand colors. These are ready to send over to dev! Tune in for the code-along to check out how to implement the final widget. Go try it out and explore more widgets You can find the Widget Canonical Layouts at our new Figma Community Page: figma.com/@androiddesign. 
Stay tuned for more Android Figma resources. Check out the official Android documentation for detailed information and best practices: Widgets on Android and more on Widget Quality Tiers, and join us for the rest of Widget Spotlight Week! This blog post is part of our series: Spotlight Week on Widgets, where we provide resources—blog posts, videos, sample code, and more—all designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.
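If you carry these breakpoints over to Jetpack Glance, SizeMode.Responsive is the natural fit. A minimal sketch, where the DpSize values and the per-size composables are illustrative placeholders rather than the final JetNews implementation:

import android.content.Context
import androidx.compose.runtime.Composable
import androidx.compose.ui.unit.DpSize
import androidx.compose.ui.unit.dp
import androidx.glance.GlanceId
import androidx.glance.LocalSize
import androidx.glance.appwidget.GlanceAppWidget
import androidx.glance.appwidget.SizeMode
import androidx.glance.appwidget.provideContent

class NewsWidget : GlanceAppWidget() {

    // Three breakpoints, roughly matching the extra small, list, and grid designs above.
    override val sizeMode = SizeMode.Responsive(
        setOf(EXTRA_SMALL, SMALL, LARGE)
    )

    override suspend fun provideGlance(context: Context, id: GlanceId) {
        provideContent {
            when (LocalSize.current) {
                EXTRA_SMALL -> TitleOnlyList()  // no thumbnail; title and bookmark action only
                SMALL -> SingleColumnList()     // List layout with a small thumbnail
                else -> ImageTextGrid()         // Image + Text Grid layout for larger sizes
            }
        }
    }

    private companion object {
        val EXTRA_SMALL = DpSize(180.dp, 110.dp)
        val SMALL = DpSize(180.dp, 220.dp)
        val LARGE = DpSize(300.dp, 220.dp)
    }
}

@Composable
private fun TitleOnlyList() { /* headline + bookmark action */ }

@Composable
private fun SingleColumnList() { /* one-column list with small thumbnails */ }

@Composable
private fun ImageTextGrid() { /* multi-column grid with larger thumbnails */ }

With responsive sizing, Glance generates the content once per declared size and the system picks the closest match, so resizing between these breakpoints doesn't require recomputing the widget on every drag.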
Posted by Ivy Knight – Senior Design Advocate Level up your app Widgets with new quality tiers Widgets can be a powerful tool for engaging users and increasing the visibility of your app. They can also help you to improve the user experience by providing users with a more convenient way to access your app's content and features. To be great, an Android widget should be helpful, adaptive, and visually cohesive with the overall aesthetic of the device home screen. In order to help you achieve a great widget, we are pleased to introduce Android Widget Quality Tiers! The new Widget quality tiers are here to help guide you towards a best practice implementation of widgets that will look great and bring your users value across the ecosystem of Android phones, tablets and foldables. What does this mean for widget makers? Whether you are planning a new widget or investing in an update to an existing widget, the Widget Quality Tiers will help you evaluate and plan for a high quality widget. Just like the Large Screen quality tiers help optimize app experiences, these Widget tiers guide you in creating great widgets across all Android devices, ensuring they're not just functional, but also visually appealing and user-friendly. Widgets that meet quality tier guidelines will be discoverable under the new Widget filter in Google Play. Consider using our Canonical Widget layouts, which are based on Jetpack Glance components, to make it easier for you to design and build a Tier 1 widget your users will love. Let’s take a look at the Widget Quality Tiers There are three tiers, built from required system defaults and suggested guidance to create an enhanced widget experience: Tier 1: Differentiated Differentiated widgets go further by implementing theming and adapting to resizing. Tier 1 widgets are exemplary widgets offering hero experiences that are personalized, and create unique and productive homescreens. These widgets meet Tier 2 standards plus enhancements for layout, color, discovery, and system coherence criteria. For example, use the system provided corner radius, and don’t set a custom corner radius on widgets. Add more personalization with dynamic color and generated previews, while ensuring your widgets look good across devices by not overriding system defaults. Tier 1 widgets that, from the top left, properly crop content, fill the layout bounds, have appropriately sized headers and touch targets, and make good use of colors and contrast. Tier 2: Quality Standard These widgets are helpful, usable, and provide a quality experience. They meet all criteria for layout, color, discovery, and content. Make sure your widget has appropriate touch targets. Tier 2 widgets are functional but simple: they meet the basic criteria for a usable widget. But if you want to create a truly stellar experience for your users, the Tier 1 criteria introduce ways to make a more personal, interactive, and coherent widget. Tier 3: Low Quality These widgets don't meet the minimum quality bar and don't provide a great user experience, meaning they are not following, or are missing, criteria from Tier 2. Clockwise from the top left: not filling the bounds, poorly cropped content, low color contrast, a mis-sized header, and small touch targets. 
For example, ensure content is visible and not cropped Build and elevate your Android widgets with Widget Quality Tiers Dive deeper into the widget quality tiers and start building widgets that not only look great but also provide an amazing user experience! Check out the official Android documentation for detailed information and best practices. This blog post is part of our series: Spotlight Week on Widgets, where we provide resources—blog posts, videos, sample code, and more—all designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.
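To make some of that Tier 1 guidance concrete (dynamic color, and leaning on system defaults rather than overriding them), here is a minimal Jetpack Glance sketch; the content and padding values are illustrative:

import android.content.Context
import androidx.compose.ui.unit.dp
import androidx.glance.GlanceId
import androidx.glance.GlanceModifier
import androidx.glance.GlanceTheme
import androidx.glance.appwidget.GlanceAppWidget
import androidx.glance.appwidget.appWidgetBackground
import androidx.glance.appwidget.provideContent
import androidx.glance.background
import androidx.glance.layout.Column
import androidx.glance.layout.fillMaxSize
import androidx.glance.layout.padding
import androidx.glance.text.Text
import androidx.glance.text.TextStyle

class TieredWidget : GlanceAppWidget() {
    override suspend fun provideGlance(context: Context, id: GlanceId) {
        provideContent {
            // GlanceTheme defaults to dynamic (Material You) colors on supported devices,
            // so the widget picks up the user's wallpaper-derived palette automatically.
            GlanceTheme {
                Column(
                    modifier = GlanceModifier
                        .fillMaxSize()
                        .appWidgetBackground()  // marks this container as the widget background
                        .background(GlanceTheme.colors.background)
                        .padding(12.dp)
                ) {
                    Text(
                        text = "Today's summary",
                        style = TextStyle(color = GlanceTheme.colors.onBackground)
                    )
                }
            }
        }
    }
}

The important part is what the sketch avoids: it doesn't hard-code colors or a custom corner radius, which is what the Tier 1 criteria around system coherence ask for.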
Posted by Summers Pittman – Developer Relations Engineer To make it even easier for users to listen on Android, developers at SoundCloud — an artist-first music platform — turned to Jetpack Glance to create a Liked Tracks widget for their highly-rated app, which boasts 4.6 stars and over 100 million downloads. With a catalog of over 400 million tracks from more than 40 million creators, SoundCloud is dedicated to connecting artists and fans through music, and this latest update to its Android app offers listeners an even more convenient way to enjoy their favorite tracks. Propelled by Glance, the team was able to complete the project in just two weeks, saving precious development time and boosting engagement. Maximize visibility with user-friendly touchpoints By showcasing the artwork of their recently liked tracks, the new Liked Tracks widget allows users to jump directly to a specific song or access their full track list right from their home screen. This keeps SoundCloud front and center for listeners, acting as a shortcut to their personal libraries and encouraging them to tune back in. Liked Tracks isn’t SoundCloud’s first widget. Over a decade ago, SoundCloud developers used RemoteViews to create a Player widget that let users easily control playback and like tracks. After recently updating the Player widget based on design feedback, developers made sure to prioritize a personalized interface for Liked Tracks. The new widget features both light and dark modes, resizes freely to accommodate user preferences, and dynamically adapts its theme to complement the user's wallpaper. Backed by Glance, these design choices ensured the widget isn’t just seamless to use but also serves as an appealing and tailored gateway into the SoundCloud app. SoundCloud’s Liked Tracks widget in action. Accelerate development cycles with Glance Glance also played a crucial role in streamlining the development of Liked Tracks. For developers already proficient in Compose, Glance’s intuitive design felt familiar, minimizing the learning curve and accelerating the team's onboarding. The platform’s collection of code samples provided a useful starting point, too, helping developers quickly grasp its capabilities and best practices. “Using sample app repositories is a great way to learn. I can check out an entire repository and inspect how the code operates,” said Sigute Kateivaite, lead SoundCloud engineer on the Android team. “It sped up our widget development by a lot.” The declarative nature of Glance’s UI was especially beneficial to developers. Because they didn’t have to use additional XML files when building, developers could create cleaner, more readable code with less boilerplate. Glance also allowed them to work with modules separately, meaning components could be written and integrated one at a time and reused for later iterations. By isolating components, developers could quickly test modules, identify and resolve issues, and build for different states without duplication, leading to more efficient workflows. Glance’s design also improved the overall code quality. The ability to make changes using Android Studio’s support for Glance’s real-time preview enabled developers to build components in isolation without needing to integrate the UI component into the widget or deploy the full widget on the phone. They could represent various states, view all relevant cases, and review changes to components without having to compile the full app. 
Put simply, Glance made developers more productive because it allowed them to iterate faster, refining the widget for a more polished final product. Elevate app widgets with the power of Glance With effective new workflows and no major development issues, the SoundCloud team applauds Glance for streamlining a successful production. “With the new Liked Tracks widget, rollout has been really stable,” Sigute said. “Development and the testing process went really smoothly.” Early data also shows promising results — active users now interact with the widget to access the app multiple times a day on average. 2X average daily active user interaction with widget feature. Looking ahead, the SoundCloud team is eager to employ more of Glance to improve existing widgets, like adopting canonical layouts, and even develop new ones. While the current Liked Tracks widget focuses primarily on image display, the team is interested in including other types of content to further enrich user experience. Developers also hope to migrate the Player widget over to Glance to access the framework’s robust theming options, simplify resizing processes, and address some long-standing bugs. Beyond the Liked Tracks and Player features, the team is excited about the potential of using Glance to build a wider range of widgets. The modular, component-based architecture of the Liked Tracks widget, with reusable elements like UserAvatar and Logo, offers a solid foundation for future development, promising to simplify processes from the start. Get started building custom app widgets with Jetpack Glance Rapidly develop and deploy widgets that keep your app visible and engaging with Glance. This blog post is part of our series: Spotlight Week on Widgets, where we provide resources—blog posts, videos, sample code, and more—all designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.
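The component-based structure described above maps naturally onto small, reusable Glance composables. As a purely hypothetical sketch (TrackArtwork is an illustrative name, not SoundCloud's actual code), a self-contained piece of widget UI might look like this:

import androidx.compose.runtime.Composable
import androidx.compose.ui.unit.dp
import androidx.glance.GlanceModifier
import androidx.glance.Image
import androidx.glance.ImageProvider
import androidx.glance.action.Action
import androidx.glance.action.clickable
import androidx.glance.layout.size

// A small, self-contained piece of widget UI that can be previewed, tested,
// and reused across widget layouts, mirroring the modular approach described above.
@Composable
fun TrackArtwork(artworkRes: Int, onClick: Action) {
    Image(
        provider = ImageProvider(artworkRes),
        contentDescription = "Liked track artwork",
        modifier = GlanceModifier.size(56.dp).clickable(onClick)
    )
}

Because a component like this has no dependency on the rest of the widget, it can be exercised in isolation and dropped into new layouts, which is exactly the kind of reuse the SoundCloud team describes.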
Posted by Ash Nohe and Summers Pitman – Developer Relations Engineers We’re kicking off the next edition in our Spotlight Week series! This week, we'll be diving deep into how to create high-quality widgets that boost user engagement and improve discoverability. We've heard your feedback: you want your widgets to be easily discoverable. To address this, we’re excited to share that Google Play is introducing a new search filter specifically for apps with high-quality widgets. By equipping you with the knowledge and tools to ensure your widgets shine, we aim to help you build delightful, helpful, and performant widgets that keep your users engaged. Learn more about Google Play’s widget discovery features. Here’s what we’re covering this week in our Spotlight Week on Widgets: Why Widgets? Monday, March 3rd We’re kicking off the week with an overview of why widgets are essential for today's users. Learn how you can level up your app with Widgets and get inspired by these best-in-class examples. Plus, learn how Google Play is improving widget discovery through a dedicated search filter, new app detail page badges, and other enhancements designed to increase user interaction. Design great widgets with Figma and Canonical Layouts Tuesday, March 4th Learn how to visualize your content in widget layouts and create high quality widgets with a new Figma resource, hands-on lab and blog with the Canonical Layouts. Learn from SoundCloud's experience: a case study showcasing impactful widget implementation. Develop best practice widgets with Glance Wednesday, March 5th Follow our code-along video to learn practical widget update techniques using Canonical Layouts. #AskAndroid Thursday, March 6th Get your widget questions answered in #AskAndroid, and dive into lockscreen widgets in our FAQ. That's a week packed with widget insights! This blog post serves as your central hub for updates, with links added regularly throughout the week. Get even more widget content and insights by following Android Developers on X and Android by Google on LinkedIn. Resources Glance guidance Canonical layouts on GitHub Design guidance Android UI Figma Kit Socialite App Widget
Posted by André Labonté – Senior Product Manager, Android Widgets If you're an Android app developer and you're looking to boost your app's visibility and engagement, you should definitely consider adding widgets. These small but mighty UI elements can have a significant impact on your app's success. A widget is basically a piece of UI that lives outside your main app. Widgets act like a window into your app content and a shortcut to your core features, which users can conveniently engage with right from their home screen, lock screen, or even through digital assistants. Why Widgets are Awesome for Your App: More Visibility: Widgets put your brand and key features front and center on the user's device, so they're more likely to see them. Better User Engagement: By giving users quick access to important features, widgets encourage them to use your app more often. Increased Conversions: You can use widgets to recommend personalized content or promote premium features, which could lead to more conversions. Happier Users Who Stick Around: Easy access to app content and features through widgets can lead to an overall better user experience and contribute to retention. Understanding What Users Want: Key to Good Widget Design People use widgets for different reasons. Understanding these motivations is crucial for designing widgets that resonate. Customization: Users like to personalize their home screens. Think about how your app's content can help them do that. Efficiency: Widgets give users quick access to the features they use a lot, which saves them time and effort. If your app has features that users would find handy to access right from their home screen, think about putting them in a widget. Quick Info: Some widgets are great for giving users essential info at a glance. If users often open your app for quick updates, a glanceable widget is a great fit. Building Awesome Widgets: Tips for Developers Here's how to make widgets that users will love: Focus on Value: Make sure your widget does something useful for users without them having to open the app. Keep it Simple: Design widgets that are easy to use and understand. Make it Adaptable: Test your widgets on different Android devices (phones, tablets, foldables) to make sure they work well on all of them. Match the Look: Design widgets that fit in with the system's overall look by using system colors, fonts, and corner shapes. Make it Easy to Find: Use the widget pinning API to encourage users to add your widget from within your app (see the sketch at the end of this post). Give them good previews and descriptions so they know what it's all about. Get Inspired and Start Building We encourage you to integrate widgets into your Android app strategy. For inspiration and guidance, explore our new Widget design gallery, featuring Canonical Widget Layouts. We can't wait to see the awesome widgets you come up with! This blog post is part of our series: Spotlight Week on Widgets, where we provide resources—blog posts, videos, sample code, and more—all designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.
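Prompting the user to pin your widget from inside the app is a small amount of code; a minimal sketch, where MyWidgetReceiver stands in for your own AppWidgetProvider or GlanceAppWidgetReceiver class:

import android.appwidget.AppWidgetManager
import android.content.ComponentName
import android.content.Context

fun requestWidgetPin(context: Context) {
    val appWidgetManager = AppWidgetManager.getInstance(context)
    val provider = ComponentName(context, MyWidgetReceiver::class.java)

    // Not every launcher supports pinning, so check before asking.
    if (appWidgetManager.isRequestPinAppWidgetSupported) {
        // The launcher shows a system prompt letting the user place the widget.
        appWidgetManager.requestPinAppWidget(provider, null, null)
    }
}

A good moment to call this is right after the user completes an action your widget surfaces, for example liking a track or saving an article.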
Posted by Yinka Taiwo-Peters – Product Manager Android developers, we've heard you. Historically, one of the challenges with investing in widget development has been discoverability and user understanding. You've asked for better ways for users to find and utilize your widgets, and we're delivering. Google Play now offers significant enhancements to widget discovery, creating a prime opportunity to re-engage with your users on a deeper level. We understand that the effort required to build and maintain widgets needs to be justified by user adoption, that’s why we’ve designed these key improvements, which are coming soon to Google Play on Android phones, tablets and foldables: Dedicated Widgets Search Filter: Users can now directly search for apps with widgets using a dedicated filter on Google Play. This means your apps/games with widgets will be easily identified, helping drive targeted downloads and engagement. New Widget Badges on App Detail Pages: We’ve introduced a visual badge on your app’s detail pages to clearly indicate the presence of widgets. This eliminates guesswork for users and highlights your widget offerings, encouraging them to explore and utilize this capability. Curated Widgets Editorial Page: We're actively educating users on the value of widgets through a new editorial page. This curated space showcases collections of excellent widgets and promotes the apps that leverage them. This provides an additional channel for your widgets to gain visibility and reach a wider audience. click to enlarge What this means for you: Increased User Engagement: Enhanced discoverability may translate to more users finding and using your widgets, leading to increased app engagement and user retention. New Opportunities for User Interaction: Widgets offer a unique way to provide value and interact with users on their home screens, fostering a deeper connection with your app. Renewed Investment Justification: The improved discoverability features make widget development a more viable and rewarding investment. We encourage you to revisit your app strategy and consider the potential of widgets. With these new discovery tools, Google Play is making it easier than ever for users to find and love your widgets. Now is the time to leverage the power of widgets and enhance your Android app experience. This blog post is part of our series: Spotlight Week on Widgets, where we provide resources—blog posts, videos, sample code, and more—all designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.
Posted by Ashley Tschudin – Social Media Specialist, MTP at Google Welcome to "Meet the Android Studio Team"! In this blog series, we introduce you to the passionate people who create the Android development tools you use every day. Get to know the engineers, designers, product managers, and more who work hard to craft the best possible experience for Android developers, and explore their unique perspectives. Dan Dole: Building Android Studio for You Meet Dan Dole, a UX Manager for Android Developer UX, who offers a unique perspective on the Android development journey. He highlights the passion and talent within the Android Developer team, emphasizing the importance of elegant solutions and efficient experiences for developers. Dan also delves into the exciting potential of AI and machine learning to transform Android development, foreseeing a future where AI accelerates learning, refines code, and empowers developers to focus on innovation. Through his insights, Dan underscores the collaborative spirit and unwavering commitment to developer success that defines the Android Developer Experience. Can you tell us about your journey to becoming a part of the Android Studio team? What sparked your interest in Android development? My journey with Android Development and the Android Studio team started with a conversation with a former colleague and the product lead for Android Developer. She was a leader I respected as someone who was passionate about developers, and believed that UX was a critical component of product development. After meeting with her and understanding the direction of Android, I was convinced that Android could be not just an outstanding mobile platform but a platform that spanned devices, and this was an organization that was focused on enabling developers to bring their talents and creativity to billions of users. Each year, I see us advancing in that direction and feel more confident in my choice to be part of the Android Developer team. This question can’t be answered without mentioning that the people working on Android Developer tools and APIs are some of the most passionate and talented people I have ever worked with. What are some of the biggest challenges you've faced in your career as a developer, and how have those experiences shaped your approach to your job? I am a UX professional in a highly technical environment. This has been the case for about two decades. One of the challenges I have faced is articulating the value of elegant solutions for developers. This is partially because developers are very capable and resourceful. Clearly, they are tolerant and they will overcome issues that average users won’t. Prior to joining Android Developer Experience, I would have to create processes and negotiate quality bars to drive quality and build efficient experiences. This challenge gave me skill in release management and how to understand some complexities unique to this space, but it also gave me tools to help explain that developers may be able to manage complexity better than most. Developers appreciate refinement, productivity, and quality, as much as they appreciate flexibility and capability. How has the integration of AI and machine learning impacted Android developer capabilities, and how do you see it evolving in the future? We are in the very early stages of AI and its ability to impact developers. 
As we learn how to be transparent and give developers control over how an AI can benefit them, we are seeing an immediate impact on accelerating learning and refining code. I expect AI to remove the “chores” that developers have to do, creating more space for them to be productive. I also expect AI to evolve from generating artifacts to generating actions. Making AI features more proactive and allowing developers to more quickly adjust to users' needs. How does the Android Studio team ensure that products or features meet the ever-changing needs of developers? I lead our Android Developer research and design team. We spent countless hours listening to developers, evaluating feedback, and understanding technology investments. We approach these conversations and instruments by evaluating what we have already delivered, looking and listening to the challenges developers face, and designing and evaluating new approaches. The Android Developer team (ENG, Product, UX and Test) are motivated by supporting developers, so all developer feedback is received with gratitude and influences all our investments. What advice would you give to aspiring Android developers who are just starting their journey? Android is a vibrant and welcoming community, so my advice would be to engage the community. It is where we learn, inspire and grow together. I have heard many Android developers talk about the pride they have working on this platform and the conviction they have in it being the best platform to work on. I feel like this is unique to Android, the platform isn’t a means to an end, it’s an identity and value system. Android is a community of amazing people, get involved. Make Gemini in Android Studio Your Coding Companion Embrace Dan's vision for the future of Android development and explore the latest AI advancements in Android Studio. Features like AI-powered code generation and refactoring tools empower you to develop higher-quality apps with greater efficiency. Stay tuned! Want to meet more of the Android Studio team? Stay tuned for future installments of this series, where we'll introduce you to new faces and share their unique insights. Find Dan Dole on LinkedIn.
Posted by Ashley Tschudin – Social Media Specialist, MTP at Google Welcome to "Meet the Android Studio Team," our new ongoing blog series. Each week, we'll introduce you to the talented people behind Android Studio. Get to know the engineers, designers, product managers, and more who create the best possible experience for Android developers like you. Join us and explore their unique perspectives. Tor Norbye: Building Android Studio for You Meet Tor Norbye, an Engineering Director at Google leading the development of Android Studio. From his early days of coding to leading the charge on AI-powered development tools, Tor shares his insights on the evolution of Android and the vital role Android Studio plays in its future. We'll delve into the challenges of creating developer tools, the importance of community feedback, and how Google strives to empower developers worldwide. Can you tell us about your journey to becoming a part of the Android Studio team? What sparked your interest in Android development? I grew up in Norway and I was fascinated by programming; my first exposure was as a middle schooler reading program listings in magazines (yes, in the early 80s, monthly computer magazines would include source code!) and in 1983 I got my hands on a microcomputer, and knew immediately that's what I wanted to do as a career. And now, 40+ years later, I still love programming. It's not my day-job anymore, but I still write bits and pieces of code for Android Studio on the shuttle and during quiet periods. I've worked on developer tools my whole career - first, 14 years at Sun Microsystems after college. In 2010 I got increasingly interested in the rise of mobile computing and really wanted to be part of it, so I joined the Android team, and I've been here since. Back then there was no "Android Studio". At the time we were working on Eclipse-based tooling for Android development. But we all knew that IntelliJ was the gold-standard for Java development, so a couple years later we began the work on building Android Studio on top of IntelliJ and with various new and ported code from our Eclipse plugins. I then had the honor of doing the unveiling demo at Google I/O in 2013. How has the integration of AI and machine learning impacted Android developer capabilities, and how do you see it evolving in the future? The integration of artificial intelligence has absolutely impacted Android developer capabilities, and this is just the beginning. I felt very fortunate to be part of bringing about the massive shift from desktop computing to mobile computing when I joined Android, and I can't believe I get to be in the middle of a second massive industry shift as well, with AI and large language models. I actually spend a lot of my time on this, working with Studio engineers, UX and product managers on our various AI related features, and talking to partner AI teams at Google. We've made a huge amount of progress in the last couple of years, both on the Studio feature integration side, as well as Google-wide on the AI side. While there is some skepticism that we're just doing AI features for AI's sake, I don't see it that way. With AI, we can suddenly, with relatively low effort, build useful features not previously possible. Here's a very simple example from the latest Studio version: When you invoke the Rename refactoring feature, we use Gemini to add additional naming suggestions into the name popup based on what your code is doing. 
Here we're helping you pick good names – and naming is famously one of the two hardest problems in computer science – naming, cache invalidation and off-by-one errors. Yet LLMs are good at this – so coupled with the safe refactoring machinery in the IDE, we were able to safely add a useful feature with relatively low engineering cost on the IDE side (of course, this is building on top of a massive investment from Google over on the Gemini side). The field is moving incredibly quickly, so it's hard to predict where things are going, but we're actively working in several areas, making the AI more aware of your codebase, and making it handle larger, complex tasks via AI Agents, and so much more. What are some of the biggest challenges you've faced in your career as a developer, and how have those experiences shaped your approach to your job? Earlier in my career, at a different company, we had big annual releases. I took a lot of pride in my productivity, and as my responsibilities grew, I'd try to do the impossible and deliver, no matter what. I'd not only work long hours, but I'd also try to work as quickly as I can. This led to a lot of stress. I remember putting my (at the time) young children to bed and impatiently waiting for them to fall asleep such that I could head back out to the garage office and start the evening coding shift. And I knew that stress isn't healthy, so I'd also stress about being stressed! This obviously wasn't sustainable. Now, I emphasize work life balance not only for myself, but also for our team. I want to make sure our work is sustainable, and that people can thrive and be in it for the long term. It's a marathon, not a sprint. Can you share an example of how feedback from the developer community has directly influenced a feature or improvement? We have a number of feedback channels; the most important one is the Android Studio issue tracker. We still have a very large backlog of bugs, so it's easy to get the impression that we're ignoring user reports, but that's not true. As a team, we've actually fixed several thousand bugs in 2024 alone. The best bugs are those that are clear and actionable, ideally with steps to reproduce. I'm also very thankful to everyone who turns on data sharing in Studio; if you don't already, please consider it! Our analytics is more of an indirect, but still vital, feedback channel from the community. In addition to collecting information on, for example, which menu items are clicked, we also use it to collect quality metrics on system health. For instance, when we detect that the UI is lagging (such as a 1+ second freeze in the UI thread), we grab a thread dump and send it to the server, then aggregate these into a dashboard where we can see top freeze spots in the IDE across the user population, and can focus our efforts on fixing those. How does the Studio team contribute to Google's broader vision for the Android platform? In Android Studio we're always making sure we support the latest technologies and recommendations from Android, Firebase, Material, and other Google technologies. That way, it's easier for developers to adopt recommendations, like using Kotlin, Coroutines, Compose, Material, and so on. Explore the Power of AI code completion, automated refactoring, and other AI-driven tools. Stay tuned! Don't miss our next and final installment in the "Meet the Android Studio Team" series; we'll feature one more talented team member and share their unique perspective. 
Stay tuned to learn more about the amazing people behind Android Studio. Find Tor Norbye on Bluesky.
Posted by Matthew McCullough – VP of Product Management, Android Developer Today we're releasing the second beta of Android 16, continuing our work to build a platform that enables creative expression. You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. This build adds new support for professional camera experiences and graphical effects, extends our performance framework, and continues the evolution of features related to privacy, security, and background tasks. We’re looking forward to hearing what you think, and thank you in advance for your continued help in making Android a platform that works for everyone. Media and camera updates Android 16 enhances support for professional camera users, allowing for hybrid auto exposure along with precise color temperature and tint adjustments. It's easier than ever to capture motion photos with new Intent actions, and we're continuing to improve UltraHDR images, with support for HEIC encoding and new parameters from the ISO 21496-1 draft standard. Hybrid auto-exposure Android 16 adds new hybrid auto-exposure modes to Camera2, allowing you to manually control specific aspects of exposure while letting the auto-exposure (AE) algorithm handle the rest. You can control ISO + AE, and exposure time + AE, providing greater flexibility compared to the current approach where you either have full manual control or rely entirely on auto-exposure.

fun setISOPriority() {
    // ...
    val availablePriorityModes = mStaticInfo.characteristics.get(
        CameraCharacteristics.CONTROL_AE_AVAILABLE_PRIORITY_MODES
    )
    // ...

    // Turn on AE mode to set priority mode
    reqBuilder[CaptureRequest.CONTROL_AE_MODE] = CameraMetadata.CONTROL_AE_MODE_ON
    reqBuilder[CaptureRequest.CONTROL_AE_PRIORITY_MODE] =
        CameraMetadata.CONTROL_AE_PRIORITY_MODE_SENSOR_SENSITIVITY_PRIORITY
    reqBuilder[CaptureRequest.SENSOR_SENSITIVITY] = TEST_SENSITIVITY_VALUE
    val request: CaptureRequest = reqBuilder.build()
    // ...
}

Precise color temperature and tint adjustments Android 16 adds camera support for fine color temperature and tint adjustments to better support professional video recording applications. White balance settings are currently controlled through CONTROL_AWB_MODE, which contains options limited to a preset list, such as Incandescent, Cloudy, and Twilight. The new COLOR_CORRECTION_MODE_CCT enables the use of COLOR_CORRECTION_COLOR_TEMPERATURE and COLOR_CORRECTION_COLOR_TINT for precise adjustments of white balance based on the correlated color temperature.

fun setCCT() {
    // ... (Your existing code before this point) ...
    val colorTemperatureRange: Range<Int> =
        mStaticInfo.characteristics[CameraCharacteristics.COLOR_CORRECTION_COLOR_TEMPERATURE_RANGE]

    // Set to manual mode to enable CCT mode
    reqBuilder[CaptureRequest.CONTROL_AWB_MODE] = CameraMetadata.CONTROL_AWB_MODE_OFF
    reqBuilder[CaptureRequest.COLOR_CORRECTION_MODE] = CameraMetadata.COLOR_CORRECTION_MODE_CCT
    reqBuilder[CaptureRequest.COLOR_CORRECTION_COLOR_TEMPERATURE] = 5000
    reqBuilder[CaptureRequest.COLOR_CORRECTION_COLOR_TINT] = 30
    val request: CaptureRequest = reqBuilder.build()
    // ... (Your existing code after this point) ...
}

Motion photo capture intent actions Android 16 adds standard Intent actions — ACTION_MOTION_PHOTO_CAPTURE and ACTION_MOTION_PHOTO_CAPTURE_SECURE — which request that the camera application capture a motion photo and return it. You must either pass an extra EXTRA_OUTPUT to control where the image will be written, or a Uri through Intent setClipData. 
If you don't set a ClipData, it will be copied there for you when calling Context.startActivity. UltraHDR image enhancements Android 16 continues our work to deliver dazzling image quality with UltraHDR images. It adds support for UltraHDR images in the HEIC file format. These images will get ImageFormat type HEIC_ULTRAHDR and will contain an embedded gainmap similar to the existing UltraHDR JPEG format. We're working on AVIF support for UltraHDR as well, so stay tuned. In addition, Android 16 implements additional parameters in UltraHDR from the ISO 21496-1 draft standard, including the ability to get and set the colorspace that gainmap math should be applied in, as well as support for HDR encoded base images with SDR gainmaps. Custom graphical effects with AGSL Android 16 adds RuntimeColorFilter and RuntimeXfermode, allowing you to author complex effects like Threshold, Sepia, and Hue Saturation and apply them to draw calls. Since Android 13, you've been able to use AGSL to create custom RuntimeShaders that extend Shaders. The new API mirrors this, adding an AGSL-powered RuntimeColorFilter that extends ColorFilters, and an Xfermode effect that allows you to implement AGSL-based custom compositing and blending between source and destination pixels.

private val thresholdEffectString = """
    uniform half threshold;
    half4 main(half4 c) {
        half luminosity = dot(c.rgb, half3(0.2126, 0.7152, 0.0722));
        half bw = step(threshold, luminosity);
        return bw.xxx1 * c.a;
    }"""

fun setCustomColorFilter(paint: Paint) {
    val filter = RuntimeColorFilter(thresholdEffectString)
    filter.setFloatUniform("threshold", 0.5f)
    paint.colorFilter = filter
}

Behavior changes With every Android release, we seek to make the platform more efficient, privacy conscious, internationalization friendly, and robust, balancing the needs of apps against hardware support, system performance, user privacy, and battery life. This can result in behavior changes that impact compatibility. Edge-to-edge opt-out going away Android 15 enforced edge-to-edge for apps targeting Android 15 (SDK 35), but your app could opt out by setting R.attr#windowOptOutEdgeToEdgeEnforcement to true. Once your app targets Android 16 (Baklava), R.attr#windowOptOutEdgeToEdgeEnforcement is deprecated and disabled and your app cannot opt out of going edge-to-edge. To be compatible with Android 16 Beta 2, ensure your app supports edge-to-edge and remove any use of R.attr#windowOptOutEdgeToEdgeEnforcement. To support edge-to-edge, see the Compose and Views guidance. Please let us know about concerns in our tracker on the feedback page. Health and fitness permissions For apps targeting Android 16 or higher, BODY_SENSORS permissions are transitioning to the granular permissions under android.permission.health also used by Health Connect. Any API previously requiring BODY_SENSORS or BODY_SENSORS_BACKGROUND will now require the corresponding android.permission.health permission. 
This affects the following data types, APIs, and foreground service types: HEART_RATE_BPM from Wear Health Services; Sensor.TYPE_HEART_RATE from the Android Sensor Manager; heartRateAccuracy and heartRateBpm from Wear ProtoLayout; and FOREGROUND_SERVICE_TYPE_HEALTH, where the respective android.permission.health permission is needed in place of BODY_SENSORS. If your app uses these APIs, it should now request the respective granular permissions: For while-in-use monitoring of Heart Rate, SpO2, or Skin Temperature, request the granular permission under android.permission.health, such as READ_HEART_RATE instead of BODY_SENSORS. For background sensor access, request READ_HEALTH_DATA_IN_BACKGROUND instead of BODY_SENSORS_BACKGROUND. These permissions are the same as those that guard access to reading data from Health Connect, the Android datastore for health, fitness, and wellness data. Abandoned empty jobs stop reason An abandoned job occurs when the JobParameters object associated with the job has been garbage collected, but jobFinished has not been called to signal job completion. This indicates that the job may be running and being rescheduled without the application's awareness. Applications in Android 16 that rely on JobScheduler without maintaining a strong reference to the JobParameters object will now be granted the new job stop reason STOP_REASON_TIMEOUT_ABANDONED on timeout, instead of STOP_REASON_TIMEOUT. If there are frequent occurrences of the new abandoned stop reason, the system will take mitigation steps to reduce job frequency. Please use the new stop reason to detect and reduce abandoned jobs. Note: If you're using WorkManager, you're not expected to be impacted by this change — one nice side effect of using Android Jetpack to schedule your work. Intent redirect changes Android 16 introduces default security hardening against Intent redirection attacks regardless of your app's targetSDK version. The removeLaunchSecurityProtection API allows you to opt out of this protection if your testing reveals issues. Note: Opting out of security protections should be done with caution and only when absolutely necessary, as it can increase the risk of security vulnerabilities.

val iSublevel = intent.getParcelableExtra("sub_intent", Intent::class.java)
iSublevel?.let {
    it.removeLaunchSecurityProtection()
    startActivity(it)
}

Elegant font APIs deprecated and disabled Apps targeting Android 15 (API level 35) have the elegantTextHeight TextView attribute set to true by default, replacing the compact font with one that is much more readable. You could override this by setting the elegantTextHeight attribute to false. Android 16 deprecates the elegantTextHeight attribute, and the attribute will be ignored once your app targets Android 16. The “UI fonts” controlled by these APIs are being discontinued, so you should adapt any layouts to ensure consistent and future-proof text rendering in Arabic, Lao, Myanmar, Tamil, Gujarati, Kannada, Malayalam, Odia, Telugu or Thai. default elegantTextHeight behavior for apps targeting Android 14 (API level 34) and lower. default elegantTextHeight behavior for apps targeting Android 15 (API level 35) and higher. 16 KB page size compatibility mode Android 15 introduced support for 16KB memory pages to optimize performance of the platform. Android 16 adds a compatibility mode, allowing some apps built for 4KB memory pages to run on a device configured for 16KB memory pages. 
If Android detects that your app has 4KB aligned memory pages, it will automatically use compatibility mode and display a notification dialog to the user. Setting the android:pageSizeCompat property in the AndroidManifest.xml to enable the backwards compatibility mode will prevent the display of the dialog when your app launches. For best performance, reliability, and stability, your app should still be 16KB aligned. Read our recent blog post about updating your apps to support 16KB memory pages for more details. Measurement system customization Users can now customize their measurement system in regional preferences within Settings. The user preference is included as part of the locale code, so you can register a BroadcastReceiver on ACTION_LOCALE_CHANGED to handle locale configuration changes when regional preferences change. Using formatters can help match the local experience. For example, "0.5 in" in English (United States), is "12,7 mm" for a user who has set their phone to English (Denmark) or who uses their phone in English (United States) with the metric system as the measurement system preference. To find these settings in Android 16 Beta 2, open the Settings app and navigate to System > Languages & region. Content handling for live wallpapers In Android 16, the live wallpaper framework is gaining a new content API to address the challenges of dynamic, user-driven wallpapers. Currently, live wallpapers incorporating user-provided content require complex, service-specific implementations. Android 16 introduces WallpaperDescription and WallpaperInstance. WallpaperDescription allows you to identify distinct instances of a live wallpaper from the same service. For example, a wallpaper that has instances on both the home screen and on the lock screen may have unique content in both places. The wallpaper picker and WallpaperManager use this metadata to better present wallpapers to users, streamlining the process for you to create diverse and personalized live wallpaper experiences. Headroom APIs in ADPF The SystemHealthManager introduces the getCpuHeadroom and getGpuHeadroom APIs, designed to provide games and resource-intensive apps with estimates of available CPU and GPU resources. These methods offer a way for you to gauge how your app or game can best improve system health, particularly when used in conjunction with other Android Dynamic Performance Framework (ADPF) APIs that detect thermal throttling. By using CpuHeadroomParams and GpuHeadroomParams on supported devices, you will be able to customize the time window used to compute the headroom and select between average or minimum resource availability. This can help you reduce your CPU or GPU resource usage accordingly, leading to better user experiences and improved battery life. Key sharing API Android 16 adds APIs that support sharing access to Android Keystore keys with other apps. The new KeyStoreManager class supports granting and revoking access to keys by app uid, and includes an API for apps to access shared keys. Standardized picture and audio quality framework for TVs The new MediaQuality package in Android 16 exposes a set of standardized APIs for access to audio and picture profiles and hardware-related settings. This allows streaming apps to query profiles and apply them to media dynamically: Movies mastered with a wider dynamic range require greater color accuracy to see subtle details in shadows and adjust to ambient light, so a profile that prefers color accuracy over brightness may be appropriate. 
Live sporting events are often mastered with a narrow dynamic range, but are often watched in daylight, so a profile that gives preference to brightness over color accuracy can give better results. Fully interactive content wants minimal processing to reduce latency, and wants higher frame rates, which is why many TV's ship with a game profile. The API allows apps to switch between profiles and users to enjoy the benefits of tuning supported TVs to best suit their content. Accessibility Android 16 adds additional APIs to enhance UI semantics that help improve consistency for users that rely on accessibility services, such as TalkBack. Duration added to TtsSpan Android 16 extends TtsSpan with a TYPE_DURATION, consisting of ARG_HOURS, ARG_MINUTES, and ARG_SECONDS. This allows you to directly annotate time duration, ensuring accurate and consistent text-to-speech output with services like TalkBack. Support elements with multiple labels Android currently allows UI elements to derive their accessibility label from another, and now offers the ability for multiple labels to be associated, a common scenario in web content. By introducing a list-based API within AccessibilityNodeInfo, Android can directly support these multi-label relationships. As part of this change, we've deprecated AccessibilityNodeInfo setLabeledBy and getLabeledBy in favor of addLabeledBy, removeLabeledBy, and getLabeledByList. Improved support for expandable elements Android 16 adds accessibility APIs that allow you to convey the expanded or collapsed state of interactive elements, such as menus and expandable lists. By setting the expanded state using setExpandedState and dispatching TYPE_WINDOW_CONTENT_CHANGED AccessibilityEvents with a CONTENT_CHANGE_TYPE_EXPANDED content change type, you can ensure that screen readers like TalkBack announce state changes, providing a more intuitive and inclusive user experience. Indeterminate ProgressBars Android 16 adds RANGE_TYPE_INDETERMINATE, giving a way for you to expose RangeInfo for both determinate and indeterminate ProgressBar widgets, allowing services like TalkBack to more consistently provide feedback for progress indicators. Tri-state CheckBox The new AccessibilityNodeInfo getChecked and setChecked(int) methods in Android 16 now support a "partially checked" state in addition to "checked" and "unchecked." This replaces the deprecated boolean isChecked and setChecked(boolean). Two Android API releases in 2025 This preview is for the next major release of Android with a planned launch in Q2 of 2025 and we plan to have another release with new developer APIs in Q4. The Q2 major release will be the only release in 2025 to include behavior changes that could affect apps. The Q4 minor release will pick up feature updates, optimizations, and bug fixes; like our non-SDK quarterly releases, it will not include any intentional app-impacting behavior changes. We'll continue to have quarterly Android releases. The Q1 and Q3 updates provide incremental updates to ensure continuous quality. We’re putting additional energy into working with our device partners to bring the Q2 release to as many devices as possible. There’s no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, tied to the major API level. 
How to get ready In addition to performing compatibility testing on this next major release, make sure that you're compiling your apps against the new SDK, and use the compatibility framework to enable targetSdkVersion-gated behavior changes as they become available for early testing. App compatibility The Android 16 Preview program runs from November 2024 until the final public release in Q2 of 2025. At key development milestones, we'll deliver updates for your development and testing environments. Each update includes SDK tools, system images, emulators, API reference, and API diffs. We'll highlight critical APIs as they are ready to test in the preview program in blogs and on the Android 16 developer website. We’re targeting March of 2025 for our Platform Stability milestone. At this milestone, we’ll deliver final SDK/NDK APIs and also final internal APIs and app-facing system behaviors. From that time you’ll have several months before the final release to complete your testing. Learn more by checking the release timeline details. Get started with Android 16 You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently on Android 16 Beta 1 or are already in the Android Beta program, you will be offered an over-the-air update to Beta 2. We're looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release. For the best development experience with Android 16, we recommend that you use the latest preview of Android Studio (Meerkat). Once you’re set up, here are some of the things you should do: Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page. Test your current app for compatibility, learn whether your app is affected by changes in Android 16, and install your app onto a device or emulator running Android 16 and extensively test it. We’ll update the beta system images and SDK regularly throughout the Android 16 release cycle. Once you’ve installed a beta build, you’ll automatically get future updates over-the-air for all later previews and Betas. For complete information, visit the Android 16 developer site.
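As a concrete example of getting ready, here is roughly what migrating from BODY_SENSORS to the granular health permission described earlier in this post could look like. This is a sketch only: the permission string follows the android.permission.health namespace used by Health Connect, and startHeartRateUpdates is a placeholder for your own sensor or Health Services logic.

// Also replace the old BODY_SENSORS entry in AndroidManifest.xml with:
// <uses-permission android:name="android.permission.health.READ_HEART_RATE" />

import android.content.pm.PackageManager
import androidx.activity.ComponentActivity
import androidx.activity.result.contract.ActivityResultContracts

class HeartRateActivity : ComponentActivity() {

    // Granular permission that replaces BODY_SENSORS for heart rate access
    // once the app targets Android 16.
    private val heartRatePermission = "android.permission.health.READ_HEART_RATE"

    private val requestHeartRate =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) startHeartRateUpdates()
        }

    fun ensureHeartRateAccess() {
        if (checkSelfPermission(heartRatePermission) == PackageManager.PERMISSION_GRANTED) {
            startHeartRateUpdates()
        } else {
            requestHeartRate.launch(heartRatePermission)
        }
    }

    private fun startHeartRateUpdates() {
        // Placeholder: register your SensorManager or Health Services listener here.
    }
}

For background reads, the same pattern applies with READ_HEALTH_DATA_IN_BACKGROUND in place of BODY_SENSORS_BACKGROUND.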
Posted by Ashley Tschudin – Social Media Specialist, MTP at Google Android Studio isn't just code and algorithms – it's built by real people with fascinating stories. Our "Meet the Android Studio Team" series gives you a glimpse into the lives and passions of the talented individuals who craft the tools you use every day. Tune in each month to meet new team members and discover their unique journey. Trevor Johns: Building Android Studio for You Meet Trevor Johns, a seasoned Staff Developer Programs Engineer at Google. Reflecting on his journey, Trevor sheds light on the most impactful advancements in the Android ecosystem and offers a glimpse into his vision for the future where AI plays a pivotal role in streamlining development workflows. Trevor discusses the Android Studio team's dedication to enhancing developer productivity through AI, highlighting their focus on understanding and addressing developer needs, and reflects on the dynamic journey of Android development while sharing valuable insights. Can you tell us about your journey to becoming a part of the Android Studio team? What sparked your interest in Android development? I've been at Google in various roles since 2007, and transferred to the Android team in 2009 shortly after the launch of the HTC G1 — the first publicly available Android phone. Even in those early days it was clear that mobile computing was a unique opportunity to reimagine many of the limitations of desktop computers and how users interact with the digital world. Among my first projects were helping developers optimize their apps for the MyTouch 3G and Motorola Droid, as well as creating developer resources for Android's 1.6 Donut release. Over the years, I've worked on various parts of the Android OS including our first tablet devices, Android Wear, helping develop the original Android support libraries (which later became Jetpack), and the migration to Kotlin. Recently I joined the Android Studio team to help improve developer productivity, using AI to streamline common developer tasks and help developers have more time to focus on creativity. How does the Android Studio team ensure that products or features meet the ever-changing needs of developers? Like the rest of Android, we approach development of new features by listening to our developer community. We hold regular listening sessions with publishers, work with our UX research team to conduct case studies, and participate in online discussions to get a sense for where developers face the most friction — and then try to find ways to reduce that friction. For example, we developed Gemini in Android Studio's integration with Play Vitals and Firebase Crashlytics based on feedback from members of the developer community who commented to let us know where they would find AI most useful across their developer workflow. Speaking of, if you'd like to provide us with feedback, you can always file a bug or feature request on the Android Studio issue tracker. How does the Studio team contribute to Google's broader vision for the Android platform? In addition to listening to the Android community, we also keep an eye on what's being developed across the rest of the Android team and make sure that Android Studio has the right tools to help developers quickly migrate between Android versions and adopt those new platform features. 
Beyond that, the Studio team provides leading edge editing tools to make sure that Android remains one of the easiest computing platforms to develop for — unlocking this unique computing platform for millions of developers. In your opinion, what is the most impactful feature or improvement the Android team has introduced in recent years, and why? For developers, my answer would have to be the migration to Kotlin. This language has modernized the Android developer experience — letting developers write apps with less code and fewer errors. It's also the foundation for Jetpack Compose, which is the future of Android UI development. If you could wave a magic wand and add one dream feature to the Android universe, what would it be and why? I'd love to see Gemini be able to not just autocomplete code for me, but generate scaffolds for new projects. That way I can focus on building features rather than worrying about basic structure when starting a new project. Develop Android Apps with Kotlin Follow Trevor's lead and embrace the power of Kotlin for modern Android development. Enhance your skills and write better Android apps faster with Kotlin. Stay tuned! Get ready for another inspiring story! The "Meet the Android Studio Team" series continues next week with a new team member in the spotlight. Don't miss their unique insights and journey. Find Trevor Johns on LinkedIn, X, Bluesky, and Medium.
Posted by Kanyinsola Fapohunda – Software Engineer, and Geoffrey Boullanger – Technical Lead Accurate time is crucial for a wide variety of app functionalities, from scheduling and event management to transaction logging and security protocols. However, a user can change the device’s time, so a more accurate source of time than the device’s local system time may be required. That's why we're introducing the TrustedTime API that leverages Google's infrastructure to deliver a trustworthy timestamp, independent of the device's potentially manipulated local time settings. How does TrustedTime work? The new API leverages Google's secure infrastructure to provide a trusted time source to your app. TrustedTime periodically syncs its clock to Google's servers, which have access to a highly accurate time source, so that you do not need to make a server request every time you want to know the current network time. Additionally, we've integrated a unique model that calculates the device's clock drift. This will inform you when the time may be inaccurate between network synchronizations. Why is an accurate source of time important? Many apps rely on the device's clock for various features. However, users can change their device's time settings, either intentionally or unintentionally, therefore changing the time that your app gets. This can lead to problems such as: Data Inconsistency: Apps relying on chronological event ordering are vulnerable to data corruption if users manipulate device time. TrustedTime mitigates this risk by providing a trustworthy time source. Security Gaps: Time-based security measures, like one-time passwords or timed access controls require an unaltered time source to be effective. Unreliable Scheduling: Apps that depend on accurate scheduling, like calendar or reminder apps, can malfunction if the device clock (i.e. Unix timestamp) is incorrect. Inaccurate Time: The device's internal clock can drift due to various factors, such as temperature, doze mode, battery level, etc. This can lead to problems in applications that require more precision. The TrustedTime API also provides the estimated error with the timestamps, so that you can ensure your app's time-sensitive operations are performed correctly. Lack of Consistency Between Devices: Inconsistent time across devices can cause problems in multi-device scenarios, such as gaming or collaborative applications. The TrustedTime API helps ensure that all devices have a consistent view of time, improving the user experience. Unnecessary Power and Data Consumption: TrustedTime is designed to be more efficient than calling an NTP server every time an app needs the current time. It avoids the overhead of repeated network requests by periodically syncing its clock with time servers. This synced time is then used as a reference point, and the TrustedTime API calculates the current time based on the device's internal clock. This approach reduces network usage and improves performance for apps that need frequent time checks. TrustedTime Use Cases The TrustedTime API opens up a range of possibilities for enhancing the reliability and security of your apps, with use cases in areas such as: Financial Applications: Ensure the accuracy of transaction timestamps even when the device is offline, preventing fraud and disputes. Gaming: Implement fair play by preventing users from manipulating the game clock to gain an unfair advantage. 
Limited-Time Offers: Guarantee that promotions and offers expire at the correct time, regardless of the user's device settings. E-commerce: Accurately track order processing and delivery times. Content Licensing: Enforce time-based restrictions on digital content, like rentals or subscriptions. IoT Devices: Synchronize clocks across multiple devices for consistent data logging and control. Productivity apps: Accurately record the time of any changes made to cloud documents while offline.
Getting started with the TrustedTime API
The TrustedTime API is built on top of Google Play services, making integration seamless for most Android developers. The simplest way to integrate is to initialize the TrustedTimeClient early in your app lifecycle, such as in the onCreate() method of your Application class. The following example uses dependency injection with Hilt to make the time client available to components throughout the app.
[Optional] Set up dependency injection

// TrustedTimeClientAccessor.kt
import com.google.android.gms.tasks.Task
import com.google.android.gms.time.TrustedTimeClient

interface TrustedTimeClientAccessor {
  fun createClient(): Task<TrustedTimeClient>
}

// TrustedTimeModule.kt
@Module
@InstallIn(SingletonComponent::class)
class TrustedTimeModule {
  @Provides
  fun provideTrustedTimeClientAccessor(
    @ApplicationContext context: Context
  ): TrustedTimeClientAccessor {
    return object : TrustedTimeClientAccessor {
      override fun createClient(): Task<TrustedTimeClient> {
        return TrustedTime.createClient(context)
      }
    }
  }
}

Initialize early in your app's lifecycle

// TrustedTimeDemoApplication.kt
@HiltAndroidApp
class TrustedTimeDemoApplication : Application() {

  @Inject
  lateinit var trustedTimeClientAccessor: TrustedTimeClientAccessor

  var trustedTimeClient: TrustedTimeClient? = null
    private set

  override fun onCreate() {
    super.onCreate()
    trustedTimeClientAccessor.createClient().addOnCompleteListener { task ->
      if (task.isSuccessful) {
        // Stash the client
        trustedTimeClient = task.result
      } else {
        // Handle error, maybe retry later
        val exception = task.exception
      }
    }
    // To use Kotlin Coroutines, you can use the await() method,
    // see https://developers.google.com/android/guides/tasks#kotlin_coroutine for more info.
  }
}

Note: If you don't use dependency injection in your app, you can simply call `TrustedTime.createClient(context)` instead of using a TrustedTimeClientAccessor.

Use TrustedTimeClient anywhere in your app

// Retrieve the TrustedTimeClient from your application class
val myApp = applicationContext as TrustedTimeDemoApplication

// In this example, System.currentTimeMillis() is used as a fallback if the
// client is null (i.e. client creation task failed) or when there is no time
// signal available. You may not want to do this if using the system clock is
// not suitable for your use case.
val currentTimeMillis =
  myApp.trustedTimeClient?.computeCurrentUnixEpochMillis()
    ?: System.currentTimeMillis()

// trustedTimeClient.computeCurrentInstant() can be used if Instant is
// preferred to long for Unix epoch times and you are able to use the APIs.

Use in short-lived components like Activity

@AndroidEntryPoint
class MainActivity : AppCompatActivity() {

  @Inject
  lateinit var trustedTimeAccessor: TrustedTimeClientAccessor

  private var trustedTimeClient: TrustedTimeClient? = null

  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    ...
    trustedTimeAccessor.createClient().addOnCompleteListener { task ->
      if (task.isSuccessful) {
        // Stash the client
        trustedTimeClient = task.result
      } else {
        // Handle error, maybe retry later or use another time source.
        val exception = task.exception
      }
    }
  }

  private fun getCurrentTimeInMillis(): Long? {
    return trustedTimeClient?.computeCurrentUnixEpochMillis()
  }
}

TrustedTime API availability and limitations
The TrustedTime API is available on all devices running Google Play services on Android 5 (Lollipop) and above. You need to add the dependency com.google.android.gms:play-services-time:16.0.1 (or above) to access the new API. No additional permission is required to use this API. However, TrustedTime needs an internet connection after the device starts up to provide timestamps. If the device hasn't connected to the internet since booting, the TrustedTime APIs won't return timestamps. It's important to note that the device's internal clock can drift due to factors like temperature, doze mode, and battery level. TrustedTime doesn't prevent this drift, but its APIs provide an error estimate for each timestamp. Use this estimate to determine if the timestamp's accuracy meets your application's requirements. While TrustedTime makes it more difficult for users to manipulate the time accessed by your app, it does not guarantee complete safety. Advanced techniques can still be used to tamper with the device's time.
Next steps
To learn more about the TrustedTime API, check out the following resources: TrustedTime | Google Play Services Time API | Google for Developers
Posted by the Google I/O team Google I/O is back Google I/O returns May 20 – 21! Join us online as we share our vision for the future of technology, along with updates across Android, AI, web, cloud, and more. Tune in to learn how the latest AI models can help you build innovative apps and transform development workflows. We'll also share how we're making Android development even easier, and empowering you to build richer, more engaging web experiences. Register now and tune in live Head to the Google I/O website and register to receive updates. The livestreamed keynotes kick off on May 20th at 10 AM PT, and new this year, we’ll be streaming developer product keynotes live from Shoreline across both days! Stay tuned for details about I/O Connect events this summer, and test your skills at solving the #GoogleIO puzzle to unlock bonus worlds and earn badges.
Posted by Eiji Kitamura – Developer Advocate (@agektmr) In October 2024, we announced that Chrome 131 will allow third-party autofill services on Android (like password managers) to natively autofill forms on websites. Reflecting on feedback from autofill service developers, we've decided to shift the schedule and allow third-party autofill services starting with Chrome 135. Native Chrome support for third-party autofill services on Android means that users will be able to use their preferred password manager or autofill service directly in Chrome, without having to rely on workarounds or extensions. This change is expected to improve the user experience and security for Android users who use third-party autofill services. Based on developer feedback, we've fixed bugs and have been working to make the new setting easier to discover. To support those goals, we've added the following capabilities:
An ability to query Chrome settings and learn whether the user wishes to use a third-party autofill service
An ability to deep link to the Chrome settings page where users can enable third-party autofill services
Read Chrome settings
Any app can read whether Chrome uses the 3P autofill mode that allows it to use Android Autofill. Chrome uses Android's ContentProvider to communicate that information. Declare in your Android manifest which channels you want to read settings from. Then, use Android's ContentResolver to request that information by building the content URI as in this example code:

final String CHROME_CHANNEL_PACKAGE = "com.android.chrome"; // Chrome Stable.
final String CONTENT_PROVIDER_NAME = ".AutofillThirdPartyModeContentProvider";
final String THIRD_PARTY_MODE_COLUMN = "autofill_third_party_state";
final String THIRD_PARTY_MODE_ACTIONS_URI_PATH = "autofill_third_party_mode";

final Uri uri = new Uri.Builder()
    .scheme(ContentResolver.SCHEME_CONTENT)
    .authority(CHROME_CHANNEL_PACKAGE + CONTENT_PROVIDER_NAME)
    .path(THIRD_PARTY_MODE_ACTIONS_URI_PATH)
    .build();

final Cursor cursor = getContentResolver().query(
    uri,
    /*projection=*/ new String[] {THIRD_PARTY_MODE_COLUMN},
    /*selection=*/ null,
    /*selectionArgs=*/ null,
    /*sortOrder=*/ null);

cursor.moveToFirst(); // Retrieve the result.

int index = cursor.getColumnIndex(THIRD_PARTY_MODE_COLUMN);

if (0 == cursor.getInt(index)) {
  // 0 means that the third party mode is turned off. Chrome uses its built-in
  // password manager. This is the default for new users.
} else {
  // 1 means that the third party mode is turned on. Chrome forwards all
  // autofill requests to Android Autofill. Users have to opt in for this.
}

Deep-link to Chrome settings
To deep-link to the Chrome settings page where users can enable third-party autofill services, use an Android Intent. Make sure to configure the action and categories exactly as in this example code:

Intent autofillSettingsIntent = new Intent(Intent.ACTION_APPLICATION_PREFERENCES);
autofillSettingsIntent.addCategory(Intent.CATEGORY_DEFAULT);
autofillSettingsIntent.addCategory(Intent.CATEGORY_APP_BROWSER);
autofillSettingsIntent.addCategory(Intent.CATEGORY_PREFERENCE);

// Invoking the intent with a chooser allows users to select the channel they want to
// configure. If only one browser reacts to the intent, the chooser is skipped.
Intent chooser = Intent.createChooser(autofillSettingsIntent, "Pick Chrome Channel");
startActivity(chooser);

// If the caller knows which Chrome channel they want to configure,
// they can instead add a package hint to the intent, e.g.
autofillSettingsIntent.setPackage("com.android.chrome");
startActivity(autofillSettingsIntent);

Updated timeline
To reflect the feedback and to leave time for autofill service developers to make relevant changes, we are shifting the plan. Users must select Autofill using another service in Chrome settings to ensure their autofill experience is unaffected. The new setting will become available in Chrome 135. Autofill services should encourage their users to toggle the setting, to ensure they have the best autofill experience possible with their service and Chrome on Android. Chrome plans to stop supporting the compatibility mode in summer 2025.
March 5th, 2025: Chrome 135 beta is available
April 1st, 2025: Chrome 135 is in stable
Summer 2025: Compatibility mode will no longer be available on Chrome
Posted by Ashley Tschudin – Social Media Specialist, MTP at Google Dive into the world of Android Studio and meet the masterminds behind your favorite development tools! In our recurring blog series, "Meet the Android Studio Team," we'll introduce you to the brilliant engineers, designers, product managers, and more who are shaping the future of Android development. Join us each week to uncover the unique perspectives and stories of the people who make Android Studio the best it can be. Jamal Eason: Building better Android apps - insights on Gemini, Crashlytics, and App Quality Meet Jamal Eason, a Director of Product Management at Google, whose passion for empowering developers shines through in his work on Android Studio. His journey, from studying computer science at West Point to developing Android hardware at Intel (including contributions to the Motorola Razr i), showcases a deep understanding of the developer experience. From attending the very first Android Studio unveiling at Google I/O to now shaping its future, Jamal brings a unique perspective to the team. Jamal shares his insights on the evolution of Android Studio, the importance of a strong developer community, and the features he's most proud of. Can you tell us about your journey to becoming a part of the Android Studio team? What sparked your interest in Android development? What unique perspective or experience do you bring to the Android Studio team, and how does it influence your work? Technical Translation - In my prior roles, I worked with highly technical teams and learned how to take abstract technical concepts and present them to audiences of different technical skill levels. And in the reverse, I worked with many non-technical customers and colleagues and learned how to translate their pain points into product opportunities solved with technical solutions and innovation. User Empathy - Previously, I was a software developer, and I regularly like to code on small side projects and really enjoy spending time with developers who use Android Studio. From first-hand experience and user engagement, I regularly bring the voice of the user into the discussion, from the inception of a product idea to the final stages of the release process. UX Design Sense - In a previous career, I designed and created websites and user interfaces for software. I developed an eye for good UX design and flows, particularly in technical software products. These skills complement the dedicated UX design team in Android Studio and help avoid productivity pitfalls from poor product and UX flows. In your opinion, what is the most impactful feature or improvement the Android team has introduced in recent years, and why? Among the most impactful are the integration of Gemini and the integrations with Crashlytics and Play through App Quality Insights. The integration of Gemini into Android Studio is a real accelerator for app development. Our focus with AI is to make Android developers more productive, and make the harder tasks and toil easier. From AI-powered code completion and built-in Gemini chat for Android app development, to enhancing existing tools with AI, such as using Gemini to generate Jetpack Compose UI Previews, we are just at the beginning of leveraging AI to make Android app developers more productive. Lastly, with App Quality Insights, it is now much easier for app developers to address the performance and quality issues found with Firebase Crashlytics and Android Vitals from Google Play.
Surfacing these issues right next to source code and source control makes resolving them much faster and more intuitive. How does the Android Studio team ensure that products or features meet the ever-changing needs of developers? First, we track new Android OS and API changes so developers are ready to adopt new Android platform capabilities into their apps. Then, we constantly review and prioritize developer feedback received via our issue tracker or via our bi-annual developer survey we post on the Android Developers site. When we can, we engage with developers via various social media channels. And lastly, we regularly interview developers at various experience levels and in regions around the world in targeted user research studies. What advice would you give to aspiring Android developers who are just starting their journey? Start with a robust set of code labs and tutorials. Get inspired by the possibilities of Android and what you can build. Join the Android developer community:
Android Studio on X
Android Developers on LinkedIn
Android Developers on YouTube
Android Developers on Medium
Deploy with Confidence: Use App Quality Insights to improve your app's performance and address issues quickly. Stay tuned! Find Jamal Eason on LinkedIn and X.
Posted by Ashley Tschudin – Social Media Specialist, MTP at Google Welcome to "Meet the Android Studio Team," a short blog series where we pull back the curtain and introduce you to the passionate people who build your favorite Android development tools. Get to know the talented minds – engineers, designers, product managers, and more – who pour their hearts into crafting the best possible experience for Android developers. Join us each week to meet a new member of the team and explore their unique perspectives. Paris Hsu: Empowering Android developers with Compose tools Meet Paris Hsu, a Product Manager at Google passionate about empowering developers to build incredible Android apps. Her journey to the Android Studio team started with a serendipitous internship at Microsoft, where she discovered the power of developer tools. Now, as part of the UI Tools team, Paris champions intuitive solutions that streamline the development process, like the innovative Compose Tools suite. In this installment of "Meet the Android Studio Team," Paris shares insights into her work, the importance of developer feedback, and her dream Android feature (hint: it involves acing that forehand). Can you tell us about your journey to becoming a part of the Android Studio team? What sparked your interest in Android development? Honestly, I joined a bit by chance! The summer before my last year of grad school, I was in Microsoft's Garage incubator internship program. Our project, InkToCode, turned handwritten designs into code. It was my first experience building developer tools and made me realize how powerful developer tools can be, which led me to the Android Studio team. Now, after 6 years, I'm constantly amazed by what Android developers create – from innovative productivity apps to immersive games. It's incredibly rewarding to build tools that empower developers to create more. In your opinion, what is the most impactful feature or improvement the Android Studio team has introduced in recent years, and why? As part of the UI Tools team in Android Studio, I'm biased towards Compose Tools! Our team spent a lot of time rethinking how we can take a code-first approach for tools as we transition the community from XML to Compose. Features like the Compose Preview and its submodes (Interactive, Animation, Deploy preview) enable fast UI iteration, while features such as Layout Inspector or Compose UI Check help find and diagnose UI issues with ease. We are also exploring ways to apply multimodal AI in these tools to help developers write high-quality, adaptive, and inclusive Compose code more quickly. How does the Android Studio team ensure that products or features meet the ever-changing needs of developers? We are constantly engaging and listening to developer feedback to ensure we are meeting their needs! Some examples:
Direct feedback: UXR studies, annual developer surveys, and Buganizer reports provide valuable insights.
Early access: We release Early Access Programs (EAPs) for new features, allowing developers to test them and provide feedback before official launch.
Community engagement: We have advisory boards with experienced Android developers, gather feedback from Google Developer Experts (GDEs), and attend conferences to connect directly with the community.
How does the Studio team contribute to Google's broader vision for the Android platform? I think Android Studio contributes to Google's broader mission by providing Android developers with powerful and intuitive tools.
This way, developers are empowered to create amazing apps that bring the best of Google's services and information to our users. Whether it's accessing knowledge through Search, leveraging Gemini, staying connected with Maps, or enjoying entertainment on YouTube, Android Studio helps developers build the experiences that connect people to what matters most. If you could wave a magic wand and add one dream feature to the Android universe, what would it be and why? Anyone who knows me knows that I am recently super obsessed with tennis. I would love to see more coaching wearables (e.g. Pixel Watch, Pixel Racket?!). I would love real-time feedback on my serve and especially forehand stroke analysis. Learn more about Compose Tools Inspired by Paris’ passion for empowering developers to build incredible Android apps? To learn more about how Compose Tools can streamline your app development process, check out the Compose Tools documentation and get started with the Jetpack Compose Tutorial. Stay tuned Keep an eye out for the next installment in our “Meet the Android Studio Team” series, where we’ll shine the spotlight on another team member and delve into their unique insights. Find Paris Hsu on LinkedIn, X, and Medium.
Posted by Thomas Ezan – Sr. Developer Relation Engineer (@lethargicpanda) Gemini can help you build and launch new user features that will boost engagement and create personalized experiences for your users. The Vertex AI in Firebase SDK lets you access Google's Gemini Cloud models (like Gemini 1.5 Flash and Gemini 1.5 Pro) and add GenAI capabilities to your Android app. It became generally available last October, which means it's now ready for production, and it is already used by many apps in Google Play. Here are tips for a successful deployment to production.
Implement App Check to prevent API abuse
When using the Vertex AI in Firebase API, it is crucial to implement robust security measures to prevent unauthorized access and misuse. Firebase App Check helps protect backend resources (like Vertex AI in Firebase, Cloud Functions for Firebase, or even your own custom backend) from abuse. It does this by attesting that incoming traffic is coming from your authentic app running on an authentic and untampered Android device.
Firebase App Check ensures that only legitimate users access your backend resources
To get started, add Firebase to your Android project and enable the Play Integrity API for your app in the Google Play console. Back in the Firebase console, go to the App Check section of your Firebase project to register your app by providing its SHA-256 fingerprint. Then, update your Android project's Gradle dependencies with the App Check library for Android:

dependencies {
    // BoM for the Firebase platform
    implementation(platform("com.google.firebase:firebase-bom:33.7.0"))

    // Dependency for App Check
    implementation("com.google.firebase:firebase-appcheck-playintegrity")
}

Finally, in your Kotlin code, initialize App Check before using any other Firebase SDK:

Firebase.initialize(context)
Firebase.appCheck.installAppCheckProviderFactory(
    PlayIntegrityAppCheckProviderFactory.getInstance(),
)

To enhance the security of your generative AI feature, you should implement and enforce App Check before releasing your app to production. Additionally, if your app utilizes other Firebase services like Firebase Authentication, Firestore, or Cloud Functions, App Check provides an extra layer of protection for those resources as well. Once App Check is enforced, you'll be able to monitor your app's requests in the Firebase console.
App Check metrics page in the Firebase console
You can learn more about App Check on Android in the Firebase documentation.
Use Remote Config for server-controlled configuration
The generative AI landscape evolves quickly. Every few months, new Gemini model iterations become available and some models are removed. See the Vertex AI in Firebase Gemini models page for details. Because of this, instead of hardcoding the model name in your app, we recommend using a server-controlled variable using Firebase Remote Config. This allows you to dynamically update the model your app uses without having to deploy a new version of your app or require your users to pick up a new version. You define parameters that you want to control (like model name) using the Firebase console. Then, you add these parameters into your app, along with default "fallback" values for each parameter. Back in the Firebase console, you can change the value of these parameters at any time. Your app will automatically fetch the new value.
Here's how to implement Remote Config in your app:

// Initialize the remote configuration by defining the refresh time
val remoteConfig: FirebaseRemoteConfig = Firebase.remoteConfig
val configSettings = remoteConfigSettings {
    minimumFetchIntervalInSeconds = 3600
}
remoteConfig.setConfigSettingsAsync(configSettings)

// Set default values defined in your app resources
remoteConfig.setDefaultsAsync(R.xml.remote_config_defaults)

// Load the model name
val modelName = remoteConfig.getString("model_name")

Read more about using Remote Config with Vertex AI in Firebase. A sketch of passing the fetched model name into model creation follows at the end of this post.
Gather user feedback to evaluate impact
As you roll out your AI-enabled feature to production, it's critical to build feedback mechanisms into your product and allow users to easily signal whether the AI output was helpful, accurate, or relevant. For example, you can incorporate interactive elements such as thumb-up and thumb-down buttons and detailed feedback forms within the user interface. The Material Icons in Compose package provides ready-to-use icons to help you implement it. You can easily track the user interaction with these elements as custom analytics events by using the Google Analytics logEvent() function:

Row {
    Button(
        onClick = {
            firebaseAnalytics.logEvent("model_response_feedback") {
                param("feedback", "thumb_up")
            }
        }
    ) {
        Icon(Icons.Default.ThumbUp, contentDescription = "Thumb up")
    }
    Button(
        onClick = {
            firebaseAnalytics.logEvent("model_response_feedback") {
                param("feedback", "thumb_down")
            }
        }
    ) {
        Icon(Icons.Default.ThumbDown, contentDescription = "Thumb down")
    }
}

Learn more about Google Analytics and its event logging capabilities in the Firebase documentation.
User privacy and responsible AI
When you use Vertex AI in Firebase for inference, you have the guarantee that the data sent to Google won't be used by Google to train AI models (see Vertex AI documentation for details). It's also important to be transparent with your users when they're engaging with generative AI technology. You should highlight the possibility of unexpected model behavior. Finally, users should have control within your app over how their activity related to AI model interactions is stored and deleted. You can learn more about how Google is approaching Generative AI responsibly in the Google Cloud documentation.
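To connect the Remote Config value above with model creation, here is a minimal sketch. It assumes the com.google.firebase:firebase-vertexai dependency and a recent Firebase BoM, and that Remote Config defaults have been set and fetched as shown earlier; the generateSummary function and its prompt are illustrative names, not part of the SDK.

import com.google.firebase.Firebase
import com.google.firebase.remoteconfig.remoteConfig
import com.google.firebase.vertexai.vertexAI

// Minimal sketch: read the server-controlled model name and use it to
// instantiate a Gemini model through Vertex AI in Firebase.
suspend fun generateSummary(prompt: String): String? {
    // Falls back to the default defined in remote_config_defaults when no
    // fetched value is available.
    val modelName = Firebase.remoteConfig.getString("model_name")

    // Create the model with the server-controlled name.
    val model = Firebase.vertexAI.generativeModel(modelName)

    // Simple text-only request; returns the generated text, if any.
    return model.generateContent(prompt).text
}

Keeping the model name server-controlled means this call site does not need to change when a new Gemini model rolls out; you only update the parameter in the Firebase console.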
Posted by JJ Zou – Product Manager, and Scott Lin – Product Manager At Google Play, we're committed to empowering you with the tools and resources you need to build successful and secure apps that users can rely on. That's why we're introducing a new way to recognize VPN apps that go above and beyond to protect their users: a "Verified" badge for consumer-facing VPN apps. This new badge is designed to highlight apps that prioritize user privacy and safety, help users make more informed choices about the VPN apps they use, and build confidence in the apps they ultimately download. This badge complements existing features such as the Google Play Store banner for VPNs and Data Safety section declaration in the Play Store. Build user trust with more transparency Earning the VPN badge isn't just about checking a box— it's proof that your VPN app invests in app safety. This badge signifies that your app has gone above and beyond, adhering to the Play safety and security guidelines and successfully completed a Mobile Application Security Assessment (MASA) Level 2 validation. The VPN badge helps your app stand out in a crowded marketplace. Once awarded, the badge is prominently displayed on your app’s details page and in search results. Additionally, we have built new surfaces to showcase verified VPN applications. Demonstrating commitment to security and safety We're excited to share insights from some of our partners who have already earned the VPN badge and are leading the way in building a safe and trusted Google Play ecosystem. Learn how partners like NordVPN, hide.me, and Aloha are using the badge and implementing best practices for user security: NordVPN “We’re excited that the new ‘Verified’ badge will help users easily identify VPNs that meet high standards for security and privacy. In a market where trust is key, this badge not only provides reassurance to customers, but also highlights the integrity of developers committed to delivering secure and reliable products.” hide.me “Privacy and user safety are fundamental to our VPN's architecture. The MASA program has been valuable in validating our security practices and maintaining high standards. This accreditation provides independent verification of our commitment to protecting user privacy.” Aloha Browser “The certification process is well-organized and accessible to any company. If your product is developed with security as a core focus, passing the required audits should not pose any difficulty. We regularly conduct third-party audits and have been active participants in the MASA program since its inception. Additionally, it fosters discipline in your development practices, knowing that regular re-certification is required. Ultimately, it’s the end user who benefits the most—a secure and satisfied user is the ultimate goal for every app developer.” Getting your App Badge-Ready To take advantage of this opportunity to enhance your app's profile and attract more users, learn more about the specific criteria and start the validation process today. 
To be considered for the "Verified" badge, your VPN app needs to: Complete a Mobile Application Security Assessment (MASA) Level 2 validation Have an Organization developer account type Meet target API level requirements for Google Play apps Have at least 10,000 installs and 250 reviews Be published on Google Play for at least 90 days Submit a Data Safety section declaration, opting into: Independent security review, under ‘Additional badges’ Encryption in transit Note: This list is not exhaustive and doesn't fully represent all the criteria used to display the badge. While other factors contribute to the evaluation, fulfilling these requirements significantly increases your chances of seeing your VPN app “Verified.” Join us in our mission to create a safer and more transparent Google Play ecosystem. We're here to support you with the tools and resources you need to build trusted apps.
Posted by Tor Norbye – Engineering Director, Jamal Eason – Director of Product Management, and Xavier Ducrohet – Tech Lead | Android Studio Android Studio provides you with an integrated development environment (IDE) to develop, test, debug, and package Android apps that can reach billions of users across a diverse set of Android devices. Last month we reached a big milestone for the product: 10 years since the Android Studio 1.0 release reached the stable channel. You can hear a bit more about its history in the most recent episode of Android Developers Backstage, or watch some of the team's favorite moments: 🎉 When we set out to develop Android Studio, we started with these three principles: First, we wanted to build and release a complete IDE, not just a plugin. Before Android Studio, users had to go download a JDK, then download Eclipse, then configure it with an update center to point to Android, install the Eclipse plugin for Android, and then configure that plugin to point to an Android SDK install. Not only did we want everything to work out-of-the-box, but we also wanted to be able to configure and improve everything: from having an integrated dependency management system to offering code inspections that were relevant to Android app developers to having a single place to report bugs. Second, we wanted to build it on top of an actively maintained, open-sourced, and best-of-breed Java programming language IDE. Not too long before releasing Android Studio, we had all used IntelliJ and felt it was superior from a code editing perspective. And third, we wanted to not only provide a build system that was better suited for Android app development, but to also enable this build system to work consistently both from the command line and from inside the IDE. This was important because in the previous tool chain, we found that there were discrepancies in behavior and capability between the in-IDE builds with Eclipse and CI builds with Ant. This led to the release of Android Studio, including these highlights:
The initial announcement of Android Studio at I/O 2013
Announcement of Gradle as the integrated build system at I/O 2013
What's New in Android Developer tools talk from I/O 2014
Here are some nostalgic screenshots from that first version of Android Studio:
First-run setup wizard of Android Studio
Editing code within Android Studio
Editing and previewing layouts across different screen sizes
Android Studio has come a long way since those early days, but our mission of empowering Android developers with excellent tools continues to be our focus. Let's hear from some team members across Android, JetBrains, and Gradle as they reflect on this milestone and how far the ecosystem has come since then.
Android Studio team
"Inside the Android team, engineers who didn't work on apps had the choice between using Eclipse and using IntelliJ, and most of them chose IntelliJ. We knew that it was the gold standard for Java development (and still is, all these years later). So we asked ourselves: if this is what developers prefer when given a choice, wouldn't it be the right choice for our users as well? And the warm reception when we unveiled the alpha at I/O in 2013 made it clear that it was the right choice." - Tor Norbye, Engineering Director of Android Studio at Google
"We had a vision of creating a truly Integrated Development Environment for Android app development instead of a collection of related tools.
In our previous working model, we had contributions of Android tools from a range of frameworks and UX flows that did not 100% work well end-to-end. The move to the open-sourced JetBrains IntelliJ platform enabled the Google team to tie tools together in a thoughtful way with Android Studio, plus it allowed others to contribute in a more seamless way. Lastly, looking back at the last 10 years, I’m proud of the partnership with Jetbrains and Gradle, plus the community of contributors to bring the best suite of tools to Android app developers.” – Jamal Eason, Director of Product Management of Android Studio at Google JetBrains “Google choosing IntelliJ as the platform to build Android Studio was a very exciting moment for us at JetBrains. It allowed us to strengthen and build on the platform even further, and paved the way for further collaboration in other projects such as Kotlin.” – Hadi Hariri, VP of Program Management at JetBrains Gradle “Android Studio's 10th anniversary marks a decade of incredible progress for Android developers. We are proud that Gradle Build Tool has continued to be a foundational part of the Android toolchain, enabling millions of Android developers to build their apps faster, more elegantly, and at scale.” – Hans Dockter, creator of Gradle Build Tool and CEO/Founder of Gradle Inc. “Our long-standing strategic partnership with Google and our mutual commitment to improving the developer experience continues to impact millions of developers. We look forward to continuing that journey for many years to come.” – Piotr Jagielski, VP of Engineering, Gradle Build Tool Last but not least, we want to thank you for your feedback and support over the last decade. Android Studio wouldn’t be where it is today without the active community of developers who are using it to build Android apps for their communities and the world and providing input on how we can make it better each day. As we head into this new year, we’ll be bringing Gemini into more aspects of Android Studio to help you across the development lifecycle to build quality apps faster. We’ll strive to make it easier and more seamless to build, test, and deploy your apps with Jetpack Compose across the range of form factors. We are proud of what we launch, but we always have room to improve in the evolving mobile ecosystem. Therefore, quality and stability of the IDE is our top priority so that you can be as productive as possible. We look forward to continuing to empower you with great tools and improvements as we take Android Studio forward into the next decade. 🚀 We also welcome you to be a part of our developer community on LinkedIn, Medium, YouTube, or X.
Posted by Matthew McCullough – VP of Product Management, Android Developer The first beta of Android 16 is now available, which means it's time to open the experience up to both developers and early adopters. You can now enroll any supported Pixel device here to get this and future Android Beta updates over-the-air. This build includes support for the future of app adaptivity, Live Updates, the Advanced Professional Video format, and more. We're looking forward to hearing what you think, and thank you in advance for your continued help in making Android a platform that works for everyone.
Android adaptive apps
Users expect apps to work seamlessly on all their devices, regardless of display size and form factor. To that end, Android 16 is phasing out the ability for apps to restrict screen orientation and resizability on large screens. This is similar to features OEMs have added over the last several years to large screen devices to allow users to run apps at any window size and aspect ratio. On screens larger than 600dp wide, apps that target API level 36 will have app windows that resize; you should check your apps to ensure your existing UIs scale seamlessly, working well across portrait and landscape aspect ratios. We're providing frameworks, tooling, and libraries to help.
Key changes: Manifest attributes and APIs that restrict orientation and resizing will be ignored for apps — but not games — on large screens.
Timeline:
Android 16 (2025): Changes apply to large screens (600dp in width) for apps targeting API level 36 (developers can opt out)
Android release in 2026: Changes apply to large screens for apps targeting API level 37 (no opt-out)
It's a great time to make your app adaptive! You can test these overrides without targeting API level 36 by using the app compatibility framework and enabling the UNIVERSAL_RESIZABLE_BY_DEFAULT flag. Learn more about changes to orientation and resizability APIs in Android 16.
Live Updates
Live Updates are a new class of notifications that help users monitor and quickly access important ongoing activities. The new ProgressStyle notification template provides a consistent user experience for Live Updates, helping you build for these progress-centric user journeys: rideshare, delivery, and navigation. It includes support for custom icons for the start, end, and current progress tracking, segments and points, user journey states, milestones, and more. ProgressStyle notifications are suggested only for ride sharing, food delivery, and navigation use cases.

@Override
protected Notification getNotification() {
    return new Notification.Builder(mContext, CHANNEL_ID)
        .setSmallIcon(R.drawable.ic_app_icon)
        .setContentTitle("Ride requested")
        .setContentText("Looking for nearby drivers")
        .setStyle(
            new Notification.ProgressStyle()
                .addProgressSegment(
                    new Notification.ProgressStyle.Segment(100)
                        .setColor(COLOR_ORANGE)
                )
                .setProgressIndeterminate(true)
        )
        .build();
}

Camera and media updates
Android 16 advances support for the playback, creation, and editing of high-quality media, a critical use case for social and productivity apps.
Advanced Professional Video
Android 16 introduces support for the Advanced Professional Video (APV) codec, which is designed for professional-level, high-quality video recording and post-production.
The APV codec standard has the following features:
Perceptually lossless video quality (close to raw video quality)
Low complexity and high throughput intra-frame-only coding (without pixel domain prediction) to better support editing workflows
Support for high bit-rate range up to a few Gbps for 2K, 4K and 8K resolution content, enabled by a lightweight entropy coding scheme
Frame tiling for immersive content and for enabling parallel encoding and decoding
Support for various chroma sampling formats and bit-depths
Support for multiple decoding and re-encoding without severe visual quality degradation
Support for multi-view video and auxiliary video like depth, alpha, and preview
Support for HDR10/10+ and user-defined metadata
A reference implementation of APV is provided through the OpenAPV project. Android 16 will implement support for the APV 422-10 Profile that provides YUV 422 color sampling along with 10-bit encoding and for target bitrates of up to 2Gbps.
Camera night mode scene detection
To help your app know when to switch to and from a night mode camera session, Android 16 adds EXTENSION_NIGHT_MODE_INDICATOR. If supported, it's available in the CaptureResult within Camera2. This is the API we briefly mentioned as coming soon in the "How Instagram enabled users to take stunning low light photos" blog post. That post is a practical guide on how to implement night mode together with a case study that links higher-quality, in-app, night mode photos with an increase in the number of photos shared from the in-app camera.
Vertical Text
Android 16 adds low-level support for rendering and measuring text vertically to provide foundational vertical writing support for library developers. This is particularly useful for languages like Japanese that commonly use vertical writing systems. A new flag, VERTICAL_TEXT_FLAG, has been added to the Paint class. When this flag is set using Paint.setFlags, Paint's text measurement APIs will report vertical advances instead of horizontal advances, and Canvas will draw text vertically.
Note: Current high level text APIs, such as Text in Jetpack Compose, TextView, Layout classes and their subclasses do not support vertical writing systems, and do not support using the VERTICAL_TEXT_FLAG.

val text = "「春は、曙。」"
Box(Modifier
    .padding(innerPadding)
    .background(Color.White)
    .fillMaxSize()
    .drawWithContent {
        drawIntoCanvas { canvas ->
            val paint = Paint().apply {
                textSize = 64.sp.toPx()
            }
            // Draw text vertically
            paint.flags = paint.flags or VERTICAL_TEXT_FLAG
            val height = paint.measureText(text)
            canvas.nativeCanvas.drawText(
                text, 0, text.length,
                size.width / 2, (size.height - height) / 2, paint
            )
        }
    }
) {}

Accessibility
Android 16 adds new accessibility APIs to help you bring your app to every user.
Supplemental descriptions
When an accessibility service describes a ViewGroup, it combines content labels from its child views. If you provide a contentDescription for the ViewGroup, accessibility services assume you are also overriding the content of non-focusable child views. This can be problematic if you want to label things like a drop down (e.g. "Font Family") while preserving the current selection for accessibility (e.g. "Roboto"). Android 16 adds setSupplementalDescription so you can provide text that provides information about a ViewGroup without overriding information from its children.
Required form fields
Android 16 adds setFieldRequired to AccessibilityNodeInfo so apps can tell an accessibility service that input to a form field is required.
This is an important scenario for users filling out many types of forms, even things as simple as a required terms and conditions checkbox, helping users to consistently identify and quickly navigate between required fields.
Generic ranging APIs
Android 16 includes the new RangingManager, which provides ways to determine the distance and angle on supported hardware between the local device and a remote device. RangingManager supports a variety of ranging technologies such as BLE channel sounding, BLE RSSI-based ranging, Ultra-Wideband, and WiFi round trip time.
Behavior changes
With every Android release, we seek to make the platform more efficient and robust, balancing the needs of your apps against things like system performance and battery life. This can result in behavior changes that impact compatibility.
ART internal changes
Code that leverages internal structures of the Android Runtime (ART) may not work correctly on devices running Android 16 along with earlier Android versions that update the ART module through Google Play system updates. These structures are changing in ways that help improve the Android Runtime's (ART's) performance. Impacted apps will need to be updated. Relying on internal structures can always lead to compatibility problems, but it's particularly important to avoid relying on code (or libraries containing code) that leverages internal ART structures, since ART changes aren't tied to the platform version the device is running on; they go out to over a billion devices through Google Play system updates. For more information, see the Android 16 changes affecting all apps and the restrictions on non-SDK interfaces.
Migration or opt-out required for predictive back
For apps targeting Android 16 or higher and running on an Android 16 or higher device, the predictive back system animations (back-to-home, cross-task, and cross-activity) are enabled by default. Additionally, the deprecated onBackPressed is not called and KeyEvent.KEYCODE_BACK is no longer dispatched. If your app intercepts the back event and you haven't migrated to predictive back yet, update your app to use supported back navigation APIs or temporarily opt out by setting the android:enableOnBackInvokedCallback attribute to false in the <application> or <activity> tag of your app's AndroidManifest.xml file.
Predictive back support for 3-button navigation
Android 16 brings predictive back support to 3-button navigation for apps that have properly migrated to predictive back. Long-pressing the back button initiates a predictive back animation, giving users a preview of where the back button takes them. This behavior applies across all areas of the system that support predictive back animations, including the system animations (back-to-home, cross-task, and cross-activity).
Fixed rate work scheduling optimization
Prior to targeting Android 16, when scheduleAtFixedRate missed a task execution because the app was outside a valid process lifecycle, all missed executions would immediately execute when the app returned to a valid lifecycle. When targeting Android 16, at most one missed execution of scheduleAtFixedRate will be immediately executed when the app returns to a valid lifecycle. This behavior change is expected to improve app performance. Please test the behavior to ensure your application is not impacted. You can also test by using the app compatibility framework and enabling the STPE_SKIP_MULTIPLE_MISSED_PERIODIC_TASKS compat flag.
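As a concrete illustration of the kind of code this change affects, here is a minimal sketch of periodic work scheduled with ScheduledExecutorService.scheduleAtFixedRate; the task, delay, and period are illustrative and not from the original post.

import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// Illustrative periodic work scheduled at a fixed rate. If the app spends time
// outside a valid process lifecycle and misses several periods, apps targeting
// Android 16 will see at most one missed execution replayed on return to a
// valid lifecycle, instead of every missed execution running back-to-back.
val scheduler = Executors.newSingleThreadScheduledExecutor()

fun startPeriodicRefresh() {
    scheduler.scheduleAtFixedRate(
        { refreshCachedData() }, // hypothetical app-specific task
        /* initialDelay = */ 0L,
        /* period = */ 15L,
        TimeUnit.MINUTES
    )
}

fun refreshCachedData() {
    // App-specific periodic work goes here.
}

If your app depends on catching up on every missed tick, test with the compat flag above and consider having the task recompute from the time of its last successful run rather than relying on missed executions being replayed.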
Ordered broadcast priority scope no longer global In Android 16, broadcast delivery order using the android:priority attribute or IntentFilter#setPriority() across different processes will not be guaranteed. Broadcast priorities for ordered broadcasts will only be respected within the same application process rather than across all system processes. Additionally, broadcast priorities will be automatically confined to the range (SYSTEM_LOW_PRIORITY + 1, SYSTEM_HIGH_PRIORITY - 1). Your application may be impacted if it does either of the following: 1. Your application has declared multiple processes that have set broadcast receiver priorities for the same intent. 2. Your application process interacts with other processes and has expectations around receiving a broadcast intent in a certain order. If the processes need to coordinate with each other, they should communicate using other coordination channels. Gemini Extensions Samsung just launched new Gemini Extensions on the S25 series, demonstrating new ways Android apps can integrate with the power of Gemini. We're working to make this functionality available on even more form factors. Two Android API releases in 2025 This preview is for the next major release of Android with a planned launch in Q2 of 2025 and we plan to have another release with new developer APIs in Q4. The Q2 major release will be the only release in 2025 to include planned behavior changes that could affect apps. The Q4 minor release will pick up feature updates, optimizations, and bug fixes; it will not include any app-impacting behavior changes. We'll continue to have quarterly Android releases. The Q1 and Q3 updates, which will land in-between the Q2 and Q4 API releases, will provide incremental updates to ensure continuous quality. We’re putting additional energy into working with our device partners to bring the Q2 release to as many devices as possible. There’s no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, tied to the major API level. How to get ready In addition to performing compatibility testing on this next major release, make sure that you're compiling your apps against the new SDK, and use the compatibility framework to enable targetSdkVersion-gated behavior changes as they become available for early testing. App compatibility The Android 16 Preview program runs from November 2024 until the final public release in Q2 of 2025. At key development milestones, we'll deliver updates for your development and testing environments. Each update includes SDK tools, system images, emulators, API reference, and API diffs. We'll highlight critical APIs as they are ready to test in the preview program in blogs and on the Android 16 developer website. We’re targeting March of 2025 for our Platform Stability milestone. At this milestone, we’ll deliver final SDK/NDK APIs and also final internal APIs and app-facing system behaviors. From that time you’ll have several months before the final release to complete your testing. The release timeline details are here. Get started with Android 16 Now that we've entered the beta phase, you can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. 
If you are currently on Android 16 Developer Preview 2 or are already in the Android Beta program, you will be offered an over-the-air update to Beta 1. If you are in Android 25Q1 Beta and would like to take the final stable release of 25Q1 and exit Beta, you need to ignore the over-the-air update to 25Q2 Beta 1 and wait for the release of 25Q1. We're looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release. For the best development experience with Android 16, we recommend that you use the latest preview of Android Studio (Meerkat). Once you’re set up, here are some of the things you should do: Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page. Test your current app for compatibility, learn whether your app is affected by changes in Android 16, and install your app onto a device or emulator running Android 16 and extensively test it. We’ll update the preview/beta system images and SDK regularly throughout the Android 16 release cycle. Once you’ve installed a beta build, you’ll automatically get future updates over-the-air for all later previews and Betas. For complete information, visit the Android 16 developer site.
Posted by Maru Ahues Bouza – Director, Product Management With 3+ billion Android devices in use globally, the Android ecosystem is more vibrant than ever. Android mobile apps run on a diverse range of devices, from phones and foldables to tablets, Chromebooks, cars, and most recently XR. Users buy into an entire device ecosystem and expect their apps to work across all devices. To thrive in this multi-device environment, your apps need to adapt seamlessly to different screen sizes and form factors. Many Android apps rely on user interface approaches that work in a single orientation and/or restrict resizability. However, users want apps to make full use of their large screens, so Android device manufacturers added well-received features that override these app restrictions. With this in mind, Android 16 is removing the ability for apps to restrict orientation and resizability at the platform level, and shifting to a consistent model of adaptive apps that seamlessly adjust to different screen sizes and orientations. This change will reduce fragmentation with behavior that better meets user expectations, and improves accessibility by respecting the user's preferred orientation. We're building tools, libraries, and platform APIs to help you do this to provide a consistently excellent user experience across the entire Android ecosystem.
What's changing?
Starting with Android 16, we're phasing out manifest attributes and runtime APIs used to restrict an app's orientation and resizability, enabling better user experiences for many apps across devices. These changes will initially apply when the app is running on a large screen, where "large screen" means that the smaller dimension of the display is greater than or equal to 600dp. This includes:
Inner displays of large screen foldables
Tablets, including desktop windowing
Desktop environments, including Chromebooks
The following manifest attributes and APIs will be ignored for apps targeting Android 16 (SDK 36) on large screens:
screenOrientation: portrait, reversePortrait, sensorPortrait, userPortrait, landscape, reverseLandscape, sensorLandscape, userLandscape
setRequestedOrientation(): portrait, reversePortrait, sensorPortrait, userPortrait, landscape, reverseLandscape, sensorLandscape, userLandscape
resizeableActivity: all values
minAspectRatio: all values
maxAspectRatio: all values
There are some exceptions to these changes for controlling orientation, aspect ratio, and resizability:
As mentioned before, these changes won't apply for screens that are smaller than sw600dp (e.g. most phones, flippables, outer displays on large screen foldables)
Games will be excluded from these changes, based on the android:appCategory flag
Also, users have control. They can explicitly opt in to using the app's default behavior in the aspect ratio settings.
Apps targeting API level 36 that were previously letterboxed on large screen devices will fill the display in landscape orientation on Android 16.
Get ready for this change by making your app adaptive
Apps will need to support landscape and portrait layouts for window sizes in the full range of aspect ratios that users can choose to use apps in, as there will no longer be a way to restrict the aspect ratio and orientation to portrait or to landscape.
To test if your app will be impacted by these changes, use the Android 16 Beta 1 developer preview with the Pixel Tablet and Pixel Fold series emulators in Android Studio, and either set targetSdkPreview = “Baklava” or use the app compatibility framework by enabling the UNIVERSAL_RESIZABLE_BY_DEFAULT flag. For existing apps that restrict orientation and aspect ratio, these changes may result in problems like overlapping layouts. To solve these issues and meet user expectations, our vision is that apps are built to be adaptive, to provide an optimal experience whether someone is using the app on a phone, foldable, tablet, Chromebook, XR device, or in a car.

Resolving common problems:

Avoid stretched UI components: If layouts were designed and built with the assumption of phone screens, then app functionality may break for other aspect ratios. For example, if a layout was built assuming a portrait aspect ratio, then UI elements that fill the max width of the window will appear stretched in landscape-oriented windows. If layouts aren’t built to scroll, then users may not be able to click on buttons or other UI elements that are offscreen, resulting in confusing or broken behavior. Add a maximum width to components to avoid stretching, and add scrolling to ensure all content is reachable.

Ensure camera compatibility in both orientations: Camera viewfinder previews might assume a specific aspect ratio and orientation relative to the camera sensor, resulting in stretched or flipped previews when those assumptions are broken. Ensure viewfinders rotate properly and account for the UI aspect ratio differing from the sensor aspect ratio.

Preserve state when window sizes change: Removing orientation and aspect ratio restrictions also means that the window sizes of apps will change more frequently in response to how the user prefers to use an app, such as by rotating, folding, or resizing an app in multi-window or free-form windowing modes. Orientation changes and resizing will result in Activity recreation by default. To ensure a good user experience, it is critical that app state is preserved through these configuration changes so that users don’t lose their place in the app when changing posture or changing windowing modes.

To account for different window sizes and aspect ratios, use window size classes to drive layout behavior in a way that doesn’t require device-specific customizations. Apps should also be built with the assumption that window sizes will frequently change. It’s not necessary to build duplicate orientation-specific layouts - instead, ensure your existing UIs can re-layout well no matter what the window size is. If you have a landscape- or portrait-specific layout, those layouts will still be used.

Optimizing for window sizes by building adaptive: If you're already building adaptive layouts and supporting all orientations, you're set up for success: your app will be prepared for each of the device types and windowing modes your users want to use your app in, and these changes should have minimal impact. We've also got a range of testing resources to help you guarantee reliability. You can automate testing with tools like the Espresso testing framework and Jetpack Compose testing APIs. FlipaClip is a great example of why building for multiple form factors matters: they saw 54% growth in tablet users in the four months after they optimized their app to be adaptive.
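To make the window size class guidance above concrete, here is a minimal sketch, assuming the androidx.compose.material3 window-size-class artifact (the exact opt-in annotation can vary by library version, and SinglePaneLayout and TwoPaneLayout are hypothetical composables standing in for your own UI). It also uses rememberSaveable so simple UI state survives the Activity recreation that rotation and resizing trigger:

import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.compose.material3.windowsizeclass.ExperimentalMaterial3WindowSizeClassApi
import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
import androidx.compose.material3.windowsizeclass.calculateWindowSizeClass
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.saveable.rememberSaveable
import androidx.compose.runtime.setValue

class MainActivity : ComponentActivity() {
    @OptIn(ExperimentalMaterial3WindowSizeClassApi::class)
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            // Recomputed whenever the window size changes (rotate, fold, resize).
            val windowSizeClass = calculateWindowSizeClass(this)
            // Survives the Activity recreation those configuration changes trigger.
            var selectedItemId by rememberSaveable { mutableStateOf<String?>(null) }

            when (windowSizeClass.widthSizeClass) {
                WindowWidthSizeClass.Expanded ->
                    TwoPaneLayout(selectedItemId, onSelect = { selectedItemId = it })
                else ->
                    SinglePaneLayout(selectedItemId, onSelect = { selectedItemId = it })
            }
        }
    }
}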
Timeline: We understand that the changes are significant for apps that have traditionally only supported portrait orientation. UI issues like buttons going off screen, overlapping content, or screens with camera viewfinders may need adjustments. To help you plan ahead and make the necessary adjustments, here’s the planned timeline outlining when these changes will take effect:

Android 16 (2025): The changes described above will be the baseline experience for large screen devices (smallest screen width >= 600dp) for apps that target API level 36, with the option for developers to opt out.

Android release in 2026: The changes described above will be the baseline experience for large screen devices (smallest screen width >= 600dp) for apps that target API level 37. Developers will not have an option to opt out.

Target API level 36 (Android 16): large screen devices (smallest screen width >= 600dp); developer opt-out allowed.
Target API level 37 (anticipated): large screen devices (smallest screen width >= 600dp); developer opt-out not allowed.

The deadlines for targeting a specific API level are app store specific. For Google Play, the plan is that targeting API 36 will be required in August 2026 and targeting API 37 will be required in August 2027.

Preparing for Android 16: Refer to the Android 16 changes page for all changes impacting apps in Android 16, as well as additional resources for updating your apps if you are impacted. To test your app, download the Android 16 Beta 1 developer preview and update to targetSdkPreview = “Baklava” or use the app compatibility framework to enable specific changes. We're committed to helping developers embrace this new era of adaptive apps and unlock the full potential of their apps across the diverse Android ecosystem. Check out the do’s and don’ts for designing and building across multiple window sizes and form factors, as well as how to test across the variety of devices that your app will be used in. Stay tuned for more updates and resources as we approach the release of Android 16!
Posted by John Zoeller – Developer Relations Engineer, and Caroline Vander Wilt – Group Product Manager

New Wear OS features enable ‘standalone’ watches for kids, unlocking new possibilities for Wear OS app developers. In collaboration with Samsung, Wear OS is introducing Galaxy Watch for Kids, a new kids experience enabling kids to explore while staying connected with their families from their smartwatch, no phone necessary. This launch unlocks new opportunities for Wear OS developers to reach younger audiences. Galaxy Watch for Kids is rolling out to Galaxy Watch7 LTE models, with features including:

No phone ownership required: This experience enables the watch and its associated apps to operate on a fully standalone basis using LTE and, when available, Wi-Fi connectivity. This includes calling, texting, games, and more.

Selection of kid-friendly apps: From gaming to health, kids can browse and request installs of Teacher Approved apps and watch faces on Google Play. In addition to approving and blocking apps, parents can also monitor app usage from Google Family Link.

Stay in touch with parent-managed contacts: Parents can ensure safer communications by limiting text and calling to approved contacts.

Location sharing: Offers peace of mind with location sharing and geofencing notifications when kids leave or arrive at designated areas.

School time: Limits watch functionality during scheduled hours of the day, so kids can focus while in school or studying.

Building kids experiences with standalone functionality enables you to reach both standalone and tethered watches for kids. Apps like Math Tango have already created great Wear OS experiences for kids. Check out the video below to learn how they built a rich and engaging Wear OS app. Our new kids-focused design and content principles and developer guidance are also available today. Check out some of the highlights in the next section.

New principles and guidelines for development: We've created new design principles and guidelines to help developers take advantage of this opportunity to build and improve apps and watch faces for kids.

Design principle: Active and fun. Build engaging, healthy experiences for children by including activity-based features. A great example of this is the Odd Squad Time Unit app from PBS KIDS that encourages children to get up and be physically active. By using the on-device sensors and power-efficient platform APIs, the app is able to provide a fun experience all day and still maintain battery life of the watch from wakeup to bedtime. Note that while experiences should be catered to kids, they must also follow the Wear OS quality requirements related to the visual experience of your app, especially when crafting touch targets and font sizes.

Content principle: Thoughtfully crafted. Consider adjusting your content to make it not only appropriate, but also consumable and intuitive for younger kids (including those as young as 6). This includes both audio and visual app components. Tinkercast’s Two Whats?! And a Wow! app uses age-appropriate vocabulary and fun characters to aid in their teaching. It’s a great example of how a developer should account for reading comprehension.

Development guidelines: New Wear OS kids apps must adhere to the Wear OS app quality guidelines, the guidelines for standalone apps, and the new Kids development guide.

Minimize impact on device battery: Minimize events that affect battery life over the course of one session.
Kids use watches that provide important safety features for their parents or guardians, which depend on the device having enough battery life. Below are best practices for reducing battery impact. ✅ DO design for offline use cases so that kids can play without incurring network-related battery costs ✅ DO minimize tasks that require an internet or GPS connection ✅ DO use power efficient APIs for all day activity tracking as well as tracking exercises 🚫 DO NOT use direct sensor tracking as this will significantly reduce the battery life 🚫 DO NOT include long-running animations Choose a development environment To develop kid-friendly apps and games you can use Compose for Wear OS, our recommended approach for building UI for Wear OS, as well as Unity for Android. We recommend Unity for developing games on Wear OS if you’re familiar and comfortable with its workflows and capabilities. However, for games with only a few animations, Compose Animation should be sufficient and is better supported within the Android environment. Be sure to consider that some Wear OS quality requirements may require custom Unity implementations, such as support for Rotary Input. Originator’s MathTango showcases the flexibility and richness of developing with Unity: Creating Watch Faces Developing watch faces for kids requires the use of Watch Face Format. Watch faces should adhere to our content and design principles mentioned above, as well as our quality standards, including our ambient mode requirement. The following examples demonstrate our Content Principle: Appealing. The content is relevant, engaging, and fun for kids, sparking their interest and imagination. The Crayola Pets Watch Face comes with a great variety of customization options, and demonstrates an informative and pleasant watch face: The Marvel Watch Faces (Captain America shown) provide a fun and useful step tracking feature: Kids experience publishing requirements Developers looking to get started on a new kids experience will need to keep a few things in mind when publishing on the Play Store. Age and Content Rating: Kids apps should be configured in the Play Store to meet the age and content requirements appropriate to their functionality Standalone Functionality: Apps must have 'standalone' defined in their manifest and meet all associated requirements, which will apply when the watch is set up with a child account Using Watch Face Format: Only watch faces which are built with Watch Face Format will be made available for kids Expand your reach with Wear OS Get ready to reach a new generation of Wear OS users! We've created all-new guidelines to help you build engaging experiences for kids. Here’s a quick recap: Continue to use the baseline set of Wear OS development resources, including Get started with Wear OS and Wear OS app quality Design & Content Guidance Focus on enrichment and age-tailoring Development Guidance Make sure it works with Standalone, and keep an eye on the battery With the Wear for Kids experience, developers can reach an entirely new audience of users and be part of the next generation of learning and enrichment on Wear OS. Check out all of the new experiences on the Play Store!
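As a small illustration of the Compose Animation guidance above, here is a hedged sketch of a short, one-shot effect (the composable name and strings are illustrative). The point is that the animation runs briefly in response to a tap and then stops, rather than looping indefinitely and draining the battery:

import androidx.compose.animation.core.animateFloatAsState
import androidx.compose.animation.core.tween
import androidx.compose.foundation.clickable
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.graphicsLayer
import androidx.wear.compose.material.Text

@Composable
fun TapToCelebrate() {
    var celebrated by remember { mutableStateOf(false) }
    // One-shot, ~250 ms animation: short enough to feel fun without
    // keeping the display and GPU busy for long.
    val scale by animateFloatAsState(
        targetValue = if (celebrated) 1.3f else 1f,
        animationSpec = tween(durationMillis = 250)
    )
    Text(
        text = "Great job!",
        modifier = Modifier
            .graphicsLayer { scaleX = scale; scaleY = scale }
            .clickable { celebrated = !celebrated }
    )
}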
Posted by Caren Chang – Developer Relations Engineer The Jetpack Media3 library enables Android apps to build high quality media apps. As part of the Media3 library, the Transformer module aims to provide easy to use, reliable, and performant APIs for transcoding and editing media. For example, apps can use Transformer to apply editing operations such as trimming a long piece of media file, or applying effects to video tracks. Transformer can also be used to convert media files from one format to another, such as adjusting the resolution or encoding of the media file. Developing Transformer APIs As part of the process to introduce new APIs, our engineering team works closely with Google apps such as Google Photos to test and experiment the new APIs. Experimental flags are first introduced to enable performance improvements. Once the results are successful and conclusive, these experimental features are then built into the default API implementations or promoted to public APIs for all apps to use. This approach allows Transformer APIs to be tested on a wide variety of devices. Transformer Adoption in apps Apps that have been using Transformer in production observed in-app performance improvements, less code to maintain, and better developer experience. Let’s take a closer look at how Transformer has helped apps for their media-editing use cases. One of users’ favorite features in Google Photos is memory sharing, where snippets of your life story that are curated and presented as Google Photos memories can now be shared as videos to social media and chat apps. However, the process of combining media items to create a video on device is resource intensive and subject to significant latency, especially on low-end devices. To reduce this latency and enable the feature on a wider range of devices, Photos adopted Transformer in their media creation pipeline. Along with other improvements made, the team found that Transformer played a part in reducing the median user latency for creating memory videos by 41% on high-end devices and 27% on mid-range devices. The Photos app also enables users to perform media edits such as trimming or rotating a video. By adopting Transformer APIs for rotating videos, median save latency was reduced by 79% for applicable videos. The app also adopted Transformer’s API for optimizing video trimming, and observed video save latency decrease by 64%. 1 Second Everyday is a personal video journal that helps you create captivating montages and timelapses. One of the app’s main user journeys is sequentially combining short videos to create a meaningful movie. After adopting Transformer for this use case, the app observed that video encoding performance was up to 5x faster, allowing them to explore enabling 4k and HDR support. The Transformer adoption also helped decrease relevant code by 30%, making it easier for the developers to maintain the code base. BandLab is the next-generation music creation platform used by millions around the world to make and share their music. The app originally used MediaCodecs for their video creation use cases, but found that the low level implementation resulted in native crashes that were difficult to debug. After researching more on Transformer, the team made the decision to migrate from MediaCodecs to Transformer. Overall, it only took the team 12 working days for the migration, and this resulted in a simpler codebase and more maintainable pipeline for their media creation use cases. 
In addition, the app observed that the native crashes it had previously seen were no longer occurring.

What’s next for Transformer? We’re excited to see Transformer’s adoption in the developer community, and will continue adding new features to support more media-editing use cases for the Android ecosystem, including:
Better support for previewing media edits
Improving the performance and developer experience for video frame extraction
Easier integration with AI effects
and much more

Keep an eye on what we’re working on in the Media3 GitHub, and file feature requests to help shape the future of Transformer!
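For readers who haven't used the module yet, here is a minimal, hedged sketch of the trimming use case described at the top of this post. The file locations and listener bodies are placeholders, Transformer should be driven from a thread with a Looper (typically the main thread), and most Transformer classes currently require opting in to Media3's unstable-API annotation, as shown:

import android.content.Context
import android.net.Uri
import androidx.media3.common.MediaItem
import androidx.media3.common.util.UnstableApi
import androidx.media3.transformer.Composition
import androidx.media3.transformer.EditedMediaItem
import androidx.media3.transformer.ExportException
import androidx.media3.transformer.ExportResult
import androidx.media3.transformer.Transformer

// Sketch: export only the first 10 seconds of the input video to outputPath.
// Call from the main thread.
@androidx.annotation.OptIn(UnstableApi::class)
fun trimFirstTenSeconds(context: Context, inputUri: Uri, outputPath: String) {
    val clippedItem = MediaItem.Builder()
        .setUri(inputUri)
        .setClippingConfiguration(
            MediaItem.ClippingConfiguration.Builder()
                .setStartPositionMs(0)
                .setEndPositionMs(10_000)
                .build()
        )
        .build()

    val transformer = Transformer.Builder(context)
        .addListener(object : Transformer.Listener {
            override fun onCompleted(composition: Composition, exportResult: ExportResult) {
                // The trimmed file is now available at outputPath.
            }

            override fun onError(
                composition: Composition,
                exportResult: ExportResult,
                exportException: ExportException
            ) {
                // Inspect exportException to understand what went wrong.
            }
        })
        .build()

    transformer.start(EditedMediaItem.Builder(clippedItem).build(), outputPath)
}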
Posted by Steven Jenkins – Product Manager, Android Studio Today, we are thrilled to announce the stable release of Android Studio Ladybug 🐞 Feature Drop (2024.2.2)! Accelerate your productivity with Gemini in Android Studio, Animation Preview support for Wear Tiles, App Links Assistant and much more. All of these new features are designed to help you build high-quality Android apps faster. Read on to learn more about all the updates, quality improvements, and new features across your key workflows in Android Studio Ladybug Feature Drop, and download the latest stable version today to try them out! Android Studio Ladybug Feature Drop Gemini in Android Studio Gemini Code Transforms Gemini Code Transforms can help you modify, optimize, or add code to your app with AI assistance. Simply right-click in your code editor and select "Gemini > Generate code" or highlight code and select "Gemini > Transform selected code." You can also use the keyboard shortcut Ctrl+\ (⌘+\ on macOS) to bring up the Gemini prompt. Describe the changes you want to make to your code, and Gemini will suggest a code diff, allowing you to easily review and accept only the suggestions you want. With Gemini Code Transforms, you can simplify complex code, perform specific code transformations, or even generate new functions. You can also refine the suggested code to iterate on the code suggestions with Gemini. It's an AI coding assistant right in your editor, helping you write better code more efficiently. Gemini Code Transform Rename Gemini in Android Studio enhances your workflow with intelligent assistance for common tasks. When renaming a single variable, class, or method from the code editor, the "Refactor > Rename" action uses Gemini to suggest contextually appropriate names, making it smoother and more efficient to refactor names as you’re coding in the editor. Rename Rethink For larger renaming refactors, Gemini can "Rethink variable names" across your whole file. This feature analyzes your code and suggests more intuitive and descriptive names for variables and methods, improving readability and maintainability. Rethink Commit Message Gemini now assists with commit messages. When committing changes to version control, it analyzes your code modifications and suggests a detailed commit message. Commit Message Generate Documentation Gemini in Android Studio makes documenting your code easier than ever. To generate clear and concise documentation, select a code snippet, right-click in the editor and choose "Gemini > Document Function" (or "Document Class" or "Document Property", depending on the context). Gemini will generate a draft that you can then refine and perfect before accepting the changes. This streamlined process helps you create informative documentation quickly and efficiently. Generate Documentation Debug Animation Preview support for Wear OS Tiles Animation Preview support for Wear OS Tiles helps you visualize and debug tile animations with ease. It provides a real-time view of your animations, allowing you to preview them, control playback with options like play, pause, and speed adjustment, and inspect key properties such as initial/end states and animation curves. You can even dynamically modify animation code and instantly observe the results within the inspector, streamlining the debugging and refinement process. 
Animation Preview support for Wear OS Tiles Wear Health Services The Wear Health Services feature in Android Studio simplifies the process of testing health and fitness apps by enabling Wear Health Services within the emulator. You can now easily customize various parameters for a given exercise such as heart rate, distance, and speed without needing a physical device or performing the activity itself. This streamlines the development and testing workflow, allowing for faster iteration and more efficient debugging of health-related features. Wear Health Services Optimize App Links Assistant App Links Assistant simplifies the process of implementing app links by serving valid JSON syntax that resolves broken deep links for your app. You can review the JSON file and then upload it to your website, resolving issues quickly. This eliminates the manual creation of the JSON file, saving you time and effort. The tool also allows you to compare existing JSON files with newly generated ones to easily identify any discrepancies. App Links Assistant Google Play SDK Insights Integration Android Studio now provides enhanced lint warnings for public SDKs from the Google Play SDK Index and the Google Play SDK Console, helping you identify and address potential issues. These warnings alert you if an SDK is outdated, violates Google Play policies, or has known security vulnerabilities. Furthermore, Android Studio provides helpful quick fixes and recommended version ranges whenever possible, making it easier to update your dependencies and keeping your app more secure and compliant. Google Play SDK Insights Integration Quality improvements Beyond new features, we also continued to improve the overall quality and stability of Android Studio. In fact, the Android Studio team addressed over 770 bugs during the Ladybug Feature Drop development cycle. IntelliJ platform update Android Studio Ladybug Feature Drop (2024.2.2) includes the IntelliJ 2024.2 platform release, which has many new features such as more intuitive full line code completion suggestions, a preview in the Search Everywhere dialog and improved log management for the Java** and Kotlin programming languages. See the full IntelliJ 2024.2 release notes. Summary To recap, Android Studio Ladybug Feature Drop includes the following enhancements and features: Gemini in Android Studio Gemini Code Transforms Rename Rethink Commit Message Generate Documentation Debug Animation Preview support for Wear OS Tiles Wear Health Services Optimize App Links Assistant Google Play SDK Insights Integration Quality Improvements 770+ bugs addressed IntelliJ Platform Update More intuitive full line code completion suggestions Preview in the Search Everywhere dialog Improved log management for Java and Kotlin programming languages Getting Started Ready for next-level Android development? Download Android Studio Ladybug Feature Drop and unlock these cutting-edge features today. As always, your feedback is important to us – check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let's build the future of Android apps together! **Java is a trademark or registered trademark of Oracle and/or its affiliates.
Posted by Nevin Mital - Developer Relations Engineer, Android Media The Android ecosystem features a diverse range of devices, and it can be difficult to build experiences that take advantage of new or premium hardware features while still working well for users on all devices. With Android 12, we introduced the Media Performance Class (MPC) standard to help developers better understand a device’s capabilities and identify high-performing devices. For a refresher on what MPC is, please see our last blog post, Using performance class to optimize your user experience, or check out the Performance Class documentation. Earlier this year, we published the first stable release of the Jetpack Core Performance library as the recommended solution for more reliably obtaining a device’s MPC level. In particular, this library introduces the PlayServicesDevicePerformance class, an API that queries Google Play Services to get the most up-to-date MPC level for the current device and build. I’ll get into the technical details further down, but let’s start by taking a look at how Google Maps was able to tailor a feature launch to best fit each device with MPC. Performance Class unblocks premium experience launch for Google Maps Google Maps recently took advantage of the expanded device coverage enabled by the Play Services module to unblock a feature launch. Google Maps wanted to update their UI by increasing the transparency of some layers. Consequently, this meant they would need to render more of the map, and found they had to stop the rollout due to latency increases on many devices, especially towards the low-end. To resolve this, the Maps team started by slicing an existing key metric, “seconds to UI item visibility”, by MPC level, which revealed that while all devices had a small increase in this latency, devices without an MPC level had the largest increase. With these results in hand, Google Maps started their rollout again, but this time only launching the feature on devices that report an MPC level. As devices continue to get updated and meet the bar for MPC, the updated Google Maps UI will be available to them as well. The new Play Services module MPC level requirements are defined in the Android Compatibility Definition Document (CDD), then devices and Android builds are validated against these requirements by the Android Compatibility Test Suite (CTS). The Play Services module of the Jetpack Core Performance library leverages these test results to continually update a device’s reported MPC level without any additional effort on your end. This also means that you’ll immediately have access to the MPC level for new device launches without needing to acquire and test each device yourself, since it already passed CTS. If the MPC level is not available from Google Play Services, the library will fall back to the MPC level declared by the OEM as a build constant. As of writing, more than 190M in-market devices covering over 500 models across 40+ brands report an MPC level. This coverage will continue to grow over time, as older devices update to newer builds, from Android 11 and up. Using the Core Performance library To use Jetpack Core Performance, start by adding a dependency for the relevant modules in your Gradle configuration, and create an instance of DevicePerformance. Initializing a DevicePerformance should only happen once in your app, as early as possible - for example, in the onCreate() lifecycle event of your Application. 
In this example, we’ll use the Google Play services implementation of DevicePerformance.

// Implementation of Jetpack Core library.
implementation("androidx.core:core-ktx:1.12.0")
// Enable APIs to query for device-reported performance class.
implementation("androidx.core:core-performance:1.0.0")
// Enable APIs to query Google Play Services for performance class.
implementation("androidx.core:core-performance-play-services:1.0.0")

import androidx.core.performance.play.services.PlayServicesDevicePerformance

class MyApplication : Application() {
    lateinit var devicePerformance: DevicePerformance

    override fun onCreate() {
        super.onCreate()
        // Use a class derived from the DevicePerformance interface
        devicePerformance = PlayServicesDevicePerformance(applicationContext)
    }
}

Then, later in your app when you want to retrieve the device’s MPC level, you can call getMediaPerformanceClass():

class MyActivity : Activity() {
    private lateinit var devicePerformance: DevicePerformance

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Note: Good app architecture is to use a dependency framework. See
        // https://developer.android.com/training/dependency-injection for more
        // information.
        devicePerformance = (application as MyApplication).devicePerformance
    }

    override fun onResume() {
        super.onResume()
        when {
            devicePerformance.mediaPerformanceClass >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE -> {
                // MPC level 34 and later.
                // Provide the most premium experience for the highest performing devices.
            }
            devicePerformance.mediaPerformanceClass == Build.VERSION_CODES.TIRAMISU -> {
                // MPC level 33.
                // Provide a high quality experience.
            }
            else -> {
                // MPC level 31, 30, or undefined.
                // Remove extras to keep experience functional.
            }
        }
    }
}

Strategies for using Performance Class: MPC is intended to identify high-end devices, so you can expect to see MPC levels for the top devices from each year, which are the devices you’re likely to want to be able to support for the longest time. For example, the Pixel 9 Pro released with Android 14 and reports an MPC level of 34, the latest level definition when it launched. You should use MPC as a complement to any existing Device Clustering solutions you already use, such as querying a device’s static specs or manually blocklisting problematic devices. An area where MPC can be a particularly helpful tool is for new device launches. New devices should be included at launch, so you can use MPC to gauge new devices’ capabilities right from the start, without needing to acquire the hardware yourself or manually test each device.

A great first step to get involved is to include MPC levels in your telemetry. This can help you identify patterns in error reports or generally get a better sense of the devices your user base uses if you segment key metrics by MPC level. From there, you might consider using MPC as a dimension in your experimentation pipeline, for example by setting up A/B testing groups based on MPC level, or by starting a feature rollout with the highest MPC level and working your way down. As discussed previously, this is the approach that Google Maps took. You could further use MPC to tune a user-facing feature, for example by adjusting the number of concurrent video playbacks your app attempts based on the MPC level’s concurrent codec guarantees. However, make sure to still query a device’s runtime capabilities when using this approach, as they may differ depending on the environment and state the device is in.
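One way to act on the tuning advice above is to map the reported level onto an app-specific budget. The thresholds and counts below are purely illustrative assumptions; validate them against the concurrent codec guarantees for each MPC level and the device's runtime capabilities:

import android.os.Build
import androidx.core.performance.DevicePerformance

// Illustrative sketch: choose how many simultaneous video playbacks to attempt.
fun maxConcurrentPlaybacks(devicePerformance: DevicePerformance): Int =
    when {
        devicePerformance.mediaPerformanceClass >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE -> 4 // MPC 34+
        devicePerformance.mediaPerformanceClass >= Build.VERSION_CODES.TIRAMISU -> 2         // MPC 33
        else -> 1 // MPC 31, 30, or undefined
    }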
Get in touch! If MPC sounds like it could be useful for your app, please give it a try! You can get started by taking a look at our sample code or documentation. We welcome you to share any questions or feedback you have in this short form. This blog post is a part of Camera and Media Spotlight Week. We're providing resources – blog posts, videos, sample code, and more – all designed to help you uplevel the media experiences in your app. To learn more about what Spotlight Week has to offer and how it can benefit you, be sure to read our overview blog post.
Posted by Caren Chang- Android Developer Relations Engineer Android offers Camera and Media APIs to help you build apps that can capture, edit, share, and play media. To help you enhance Android Camera and Media experiences to be even more delightful for your users, this week we will be kicking off the Camera and Media Spotlight week. This Spotlight Week will provide resources—blog posts, videos, sample code, and more—all designed to help you uplevel the media experiences in your app. Check out highlights from the latest releases in Camera and Media APIs, including better Jetpack Compose support in CameraX, motion photo support in Media3 Transformer, simpler ExoPlayer setup, and much more! We’ll also bring in developers from the community to talk about their experiences building Android camera and media apps. Here’s what we’re covering during Camera and Media Spotlight week: What’s new in camera and media Tuesday, January 7 Check out what’s new in the latest CameraX and Media3 releases, including how to get started with building Camera apps with Compose. Creating delightful and premium experiences Wednesday, January 8 Building delightful and premium experiences for your users is what can help your app really stand out. Learn about different ways to achieve this such as utilizing the Media Performance Class or enabling HDR video capture in your app. Learn from developers, such as how Google Drive enabled Ultra HDR images in their Android app, and Instagram improved the in-app image capture experience by implementing Night Mode. Adaptive for camera and media, for large screens and now XR! Thursday, January 9 Thinking adaptive is important, so your app works just as well on phones as it does large screens, like foldables, tablets, ChromeOS, cars, and the new Android XR platform! On Thursday, we’ll be diving into the media experience on large screen devices, and how you can build in a smooth tabletop mode for your camera applications. Prepare your apps for XR devices by considering Spatial Audio and Video. Media creation Friday, January 10 Capturing, editing, and processing media content are fundamental features of the Android ecosystem. Learn about how Media3’s Transformer module can help your app’s media processing use cases, and see case studies of apps that are using Transformer in production. Listen in to how the 1 Second Everyday Android app approaches media use cases, and check out a new API that allows apps to capture concurrent camera streams.Learn from Android Google Developer Tom Colvin on how he experimented with building an AI-powered Camera app. These are just some of the things to think about when building camera and media experiences in your app. Keep checking this blog post for updates; we’ll be adding links and more throughout the week.
Posted by Kristina Simakova – Engineering Manager

This article is cross-published on Medium. Media3 1.5.0 is now available! Transformer now supports motion photos and faster image encoding. We’ve also simplified the setup for DefaultPreloadManager and ExoPlayer, making it easier to use. But that’s not all! We’ve included a new IAMF decoder, a Kotlin listener extension, and easier Player optimization through delegation. To learn more about all new APIs and bug fixes, check out the full release notes.

Transformer improvements

Motion photo support: Transformer now supports exporting motion photos. The motion photo’s image is exported if the corresponding MediaItem’s image duration is set (see MediaItem.Builder().setImageDurationMs()). Otherwise, the motion photo’s video is exported. Note that the EditedMediaItem’s duration should not be set in either case, as it will automatically be set to the corresponding MediaItem’s image duration.

Faster image encoding: This release accelerates image-to-video encoding, thanks to optimizations in DefaultVideoFrameProcessor.queueInputBitmap(). DefaultVideoFrameProcessor now treats the Bitmap given to queueInputBitmap() as immutable. The GL pipeline will resample and color-convert the input Bitmap only once. As a result, Transformer operations that take large (e.g. 12 megapixel) images as input execute faster.

AudioEncoderSettings: Similar to VideoEncoderSettings, Transformer now supports AudioEncoderSettings, which can be used to set the desired encoding profile and bitrate.

Edit list support: Transformer now shifts the first video frame to start from 0. This fixes A/V sync issues in some files where an edit list is present.

Unsupported track type logging: This release includes improved logging for unsupported track types, providing more detailed information for troubleshooting and debugging.

Media3 muxer: In one of the previous releases we added a new muxer library which can be used to create MP4 container files. The media3 muxer offers support for a wide range of audio and video codecs, enabling seamless handling of diverse media formats. This new library also brings advanced features including B-frame support, fragmented MP4 output, and edit list support. The muxer library can be included as a gradle dependency:

implementation ("androidx.media3:media3-muxer:1.5.0")

Media3 muxer with Transformer: To use the media3 muxer with Transformer, set an InAppMuxer.Factory (which internally wraps media3 muxer) as the muxer factory when creating a Transformer:

val transformer = Transformer.Builder(context)
    .setMuxerFactory(InAppMuxer.Factory.Builder().build())
    .build()

Simpler setup for DefaultPreloadManager and ExoPlayer: With Media3 1.5.0, we added DefaultPreloadManager.Builder, which makes it much easier to build the preload components and the player. Previously we asked you to instantiate several required components (RenderersFactory, TrackSelectorFactory, LoadControl, BandwidthMeter and preload / playback Looper) first, and be super cautious about correctly sharing those components when injecting them into the DefaultPreloadManager constructor and the ExoPlayer.Builder. With the new DefaultPreloadManager.Builder this becomes a lot simpler:

Build DefaultPreloadManager and ExoPlayer instances with all default components.

val preloadManagerBuilder = DefaultPreloadManager.Builder()
val preloadManager = preloadManagerBuilder.build()
val player = preloadManagerBuilder.buildExoPlayer()

Build DefaultPreloadManager and ExoPlayer instances with custom sharing components.
val preloadManagerBuilder = DefaultPreloadManager.Builder().setRenderersFactory(customRenderersFactory)
// The resulting preloadManager uses customRenderersFactory
val preloadManager = preloadManagerBuilder.build()
// The resulting player uses customRenderersFactory
val player = preloadManagerBuilder.buildExoPlayer()

Build DefaultPreloadManager and ExoPlayer instances, while setting custom playback-only configurations on the ExoPlayer.

val preloadManagerBuilder = DefaultPreloadManager.Builder()
val preloadManager = preloadManagerBuilder.build()

// Tune the playback-only configurations
val playerBuilder = ExoPlayer.Builder().setFooEnabled()
// The resulting player will have playback feature "Foo" enabled
val player = preloadManagerBuilder.buildExoPlayer(playerBuilder)

Preloading the next playlist item: We’ve added the ability to preload the next item in the playlist of ExoPlayer. By default, playlist preloading is disabled but can be enabled by setting the duration which should be preloaded to memory:

player.preloadConfiguration = PreloadConfiguration(/* targetPreloadDurationUs= */ 5_000_000L)

With the PreloadConfiguration above, the player tries to preload five seconds of media for the next item in the playlist. Preloading is only started when no media is being loaded that is required for the ongoing playback. This way preloading doesn’t compete for bandwidth with the primary playback.

When enabled, preloading can help minimize join latency when a user skips to the next item before the playback buffer reaches the next item. The first period of the next window is prepared, and video, audio and text samples are preloaded into its sample queues. The preloaded period is later queued into the player with preloaded samples immediately available and ready to be fed to the codec for rendering.

Once opted in, playlist preloading can be turned off again by using PreloadConfiguration.DEFAULT to disable playlist preloading:

player.preloadConfiguration = PreloadConfiguration.DEFAULT

New IAMF decoder and Kotlin listener extension: The 1.5.0 release includes a new media3-decoder-iamf module, which allows playback of IAMF immersive audio tracks in MP4 files. Apps wanting to try this out will need to build the libiamf decoder locally. See the media3 README for full instructions.

implementation ("androidx.media3:media3-decoder-iamf:1.5.0")

This release also includes a new media3-common-ktx module, a home for Kotlin-specific functionality. The first version of this module contains a suspend function that lets the caller listen to Player.Listener.onEvents. This is a building block that’s used by the upcoming media3-ui-compose module (launching with media3 1.6.0) to power a Jetpack Compose playback UI.

implementation ("androidx.media3:media3-common-ktx:1.5.0")

Easier Player customization via delegation: Media3 has provided a ForwardingPlayer implementation since version 1.0.0, and we have previously suggested that apps should use it when they want to customize the way certain Player operations work, by using the decorator pattern. One very common use case is to allow or disallow certain player commands (in order to show or hide certain buttons in a UI). Unfortunately, doing this correctly with ForwardingPlayer is surprisingly hard and error-prone, because you have to consistently override multiple methods, and handle the listener as well. The example code demonstrating how fiddly this is would be too long for this blog, so we’ve put it in a gist instead.
In order to make these sorts of customizations easier, 1.5.0 includes a new ForwardingSimpleBasePlayer, which builds on the consistency guarantees provided by SimpleBasePlayer to make it easier to create consistent Player implementations following the decorator pattern. The same command-modifying Player is now much simpler to implement:

class PlayerWithoutSeekToNext(player: Player) : ForwardingSimpleBasePlayer(player) {
    override fun getState(): State {
        val state = super.getState()
        return state
            .buildUpon()
            .setAvailableCommands(
                state.availableCommands.buildUpon().remove(COMMAND_SEEK_TO_NEXT).build()
            )
            .build()
    }

    // We don't need to override handleSeek, because it is guaranteed not to be called for
    // COMMAND_SEEK_TO_NEXT since we've marked that command unavailable.
}

MediaSession: Command button for media items. Command buttons for media items allow a session app to declare commands supported by certain media items that can then be conveniently displayed and executed by a MediaController or MediaBrowser:

Screenshot: Command buttons for media items in the Media Center of Android Automotive OS.

You'll find the detailed documentation on developer.android.com. This is the Media3 equivalent of the legacy “custom browse actions” API, with which Media3 is fully interoperable. Unlike the legacy API, command buttons for media items do not require a MediaLibraryService but are a feature of the Media3 MediaSession instead. Hence they are available to MediaController and MediaBrowser in the same way.

If you encounter any issues, have feature requests, or want to share feedback, please let us know using the Media3 issue tracker on GitHub. We look forward to hearing from you! This blog post is a part of Camera and Media Spotlight Week. We're providing resources – blog posts, videos, sample code, and more – all designed to help you uplevel the media experiences in your app. To learn more about what Spotlight Week has to offer and how it can benefit you, be sure to read our overview blog post.
Posted by Robbie McLachlan – Developer Marketing This year #WeArePlay took us on a journey across the globe, spotlighting 300 people behind apps and games on Google Play. From a founder whose app uses AI to assist visually impaired people to a game where nimble-fingered players slice flying fruits and use special combos to beat their own high score, we met founders transforming ideas into thriving businesses. Let’s start by taking a look back at the people featured in our global film series. From a mother and son duo preserving African languages, to a founder whose app helps kids become published authors - check out the full playlist. We also continued our global tour around the world with: 153 new stories from the United States like Ashley’s Get Mom Strong, which gives access to rehabilitation and fitness plans to help moms heal and get strong after childbirth 49 new stories from Japan like Toshiya’s Mirairo ID, an app that empowers the disabled community by digitizing disability certificates 50 new stories from Australia, including apps like Tristan’s Bushfire.io, which supports communities during natural disasters And we released global collections of 36 stories, each with a theme reflecting the diversity of the app and game community on Google Play, including: LGBTQ+ founders creating safe spaces and fostering representation Women founders breaking barriers and building impactful businesses Creators turning personal passions—such as fitness, mental health, or creativity—into inspiring apps Founders building sports apps and games that bring players, fans, and communities together To the global community of app and game founders, thank you for sharing your inspiring journey. As we enter 2025, we look forward to discovering even more stories of the people behind games and apps businesses on Google Play.
Posted by Matthew McCullough – VP of Product Management, Android Developer

The second developer preview of Android 16 is now available to test with your apps. This build includes changes designed to enhance the app experience, improve battery life, and boost performance while minimizing incompatibilities, and your feedback is critical in helping us understand the full impact of this work.

System triggered profiling: ProfilingManager was added in Android 15, giving apps the ability to request profiling data collection using Perfetto on public devices in the field. To help capture challenging trace scenarios such as startups or ANRs, ProfilingManager now includes System Triggered Profiling. Apps can use ProfilingManager#addProfilingTriggers() to register interest in receiving information about these flows. Flows covered in this release include onFullyDrawn for activity based cold starts, and ANRs.

val anrTrigger = ProfilingTrigger.Builder(
    ProfilingTrigger.TRIGGER_TYPE_ANR
)
    .setRateLimitingPeriodHours(1)
    .build()

val startupTrigger: ProfilingTrigger = //...

mProfilingManager.addProfilingTriggers(listOf(anrTrigger, startupTrigger))

Start component in ApplicationStartInfo: ApplicationStartInfo was added in Android 15, allowing an app to see reasons for process start, start type, start times, throttling, and other useful diagnostic data. Android 16 adds getStartComponent() to distinguish what component type triggered the start, which can be helpful for optimizing the startup flow of your app.

Richer Haptics: Android has exposed limited control over the haptic actuator since its inception. Android 11 added support for more complex haptic effects that more advanced actuators can support through VibrationEffect.Compositions of device-defined semantic primitives. Android 16 adds haptic APIs that let apps define the amplitude and frequency curves of a haptic effect while abstracting away differences between device capabilities.

Better job introspection: Android 16 introduces JobScheduler#getPendingJobReasons(int jobId) which can return multiple reasons why a job is pending, due to both explicit constraints set by the developer and implicit constraints set by the system. We're also introducing JobScheduler#getPendingJobReasonsHistory(int jobId), which returns a list of the most recent constraint changes. The API can help you debug why your jobs may not be executing, especially if you're seeing reduced success rates with certain tasks or latency issues with job completion as well. This can also better help you understand if certain jobs are not completing due to system defined constraints versus explicitly set constraints.

Adaptive refresh rate: Adaptive refresh rate (ARR), introduced in Android 15, enables the display refresh rate on supported hardware to adapt to the content frame rate using discrete VSync steps. This reduces power consumption while eliminating the need for potentially jank-inducing mode-switching. Android 16 DP2 introduces hasArrSupport() and getSuggestedFrameRate(int) while restoring getSupportedRefreshRates() to make it easier for your apps to take advantage of ARR. RecyclerView 1.4 internally supports ARR when it is settling from a fling or smooth scroll, and we're continuing our work to add ARR support into more Jetpack libraries. This frame rate article covers many of the APIs you can use to set the frame rate so that your app can directly leverage ARR.
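Returning to the job introspection APIs above, here is a hedged sketch of how they might be used for diagnostics. These are Android 16 preview APIs, so the exact return types and reason constants should be checked against the current API reference; the values are simply logged here:

import android.app.job.JobScheduler
import android.content.Context
import android.util.Log

fun logPendingJobDiagnostics(context: Context, jobId: Int) {
    val jobScheduler = context.getSystemService(JobScheduler::class.java)
    // Why is the job still pending right now? (May include several reasons.)
    jobScheduler.getPendingJobReasons(jobId).forEach { reason ->
        Log.d("JobDiagnostics", "Job $jobId pending reason: $reason")
    }
    // Which constraint changes led up to this point?
    jobScheduler.getPendingJobReasonsHistory(jobId).forEach { change ->
        Log.d("JobDiagnostics", "Job $jobId constraint change: $change")
    }
}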
Job execution optimizations Starting in Android 16, we're adjusting regular and expedited job execution runtime quota based on the following factors: Which app standby bucket the application is in; active standby buckets will be given a generous runtime quota. Jobs started while the app is visible to the user and continues after the app becomes invisible will adhere to the job runtime quota. Jobs that are executing concurrently with a foreground service will adhere to the job runtime quota. If you need to perform a data transfer that may take a long time consider using a user initiated data transfer. Note: To understand how to further debug and test the behavior change, read more about JobScheduler quota optimizations. Fully deprecating JobInfo#setImportantWhileForeground The JobInfo.Builder#setImportantWhileForeground(boolean) method indicates the importance of a job while the scheduling app is in the foreground or when temporarily exempted from background restrictions. This method has been deprecated since Android 12 (API 31). Starting in Android 16, it will no longer function effectively and calling this method will be ignored. This removal of functionality also applies to JobInfo#isImportantWhileForeground(). Starting in Android 16, if the method is called, the method will return false. Deprecated Disruptive Accessibility Announcements Android 16 DP2 deprecates accessibility announcements, characterized by the use of announceForAccessibility or the dispatch of TYPE_ANNOUNCEMENT AccessibilityEvents. They can create inconsistent user experiences for users of TalkBack and Android's screen reader, and alternatives better serve a broader range of user needs across a variety of Android's assistive technologies. Examples of alternatives: For significant UI changes like window changes, use Activity.setTitle(CharSequence) and setAccessibilityPaneTitle(java.lang.CharSequence). In Compose use Modifier.semantics { paneTitle = "paneTitle" } To inform the user of changes to critical UI, use setAccessibilityLiveRegion(int). In Compose use Modifier.semantics { liveRegion = LiveRegionMode.[Polite|Assertive] }. These should be used sparingly as they may generate announcements every time a View or composable is updated. To notify users about errors, send an AccessibilityEvent of type AccessibilityEvent#CONTENT_CHANGE_TYPE_ERROR and set AccessibilityNodeInfo#setError(CharSequence), or use TextView#setError(CharSequence). The deprecated announceForAccessibility API includes more detail on suggested alternatives. Cloud search in photo picker The photo picker provides a safe, built-in way for users to grant your app access to selected images and videos from both local and cloud storage, instead of their entire media library. Using a combination of Modular System Components through Google System Updates and Google Play services, it's supported back to Android 4.4 (API level 19). Integration requires just a few lines of code with the associated Android Jetpack library. The developer preview includes new APIs to enable searching from the cloud media provider for the Android photo picker. Search functionality in the photo picker is coming soon. Ranging with enhanced security Android 16 adds support for robust security features in WiFi location on supported devices with WiFi 6's 802.11az, allowing apps to combine the higher accuracy, greater scalability, and dynamic scheduling of the protocol with security enhancements including AES-256-based encryption and protection against MITM attacks. 
This allows it to be used more safely in proximity use cases, such as unlocking a laptop or a vehicle door. 802.11az is integrated with the Wi-Fi 6 standard, leveraging its infrastructure and capabilities for wider adoption and easier deployment. Health Connect updates Health Connect in the developer preview adds ACTIVITY_INTENSITY, a new datatype defined according to World Health Organization guidelines around moderate and vigorous activity. Each record requires the start time, the end time and whether the activity intensity is moderate or vigorous. Health Connect also contains updated APIs supporting health records. This allows apps to read and write medical records in FHIR format with explicit user consent. This API is currently in an early access program. Sign up if you'd like to be part of our early access program. Predictive back additions Android 16 adds new APIs to help you enable predictive back system animations in gesture navigation such as the back-to-home animation. Registering the onBackInvokedCallback with the new PRIORITY_SYSTEM_NAVIGATION_OBSERVER allows your app to receive the regular onBackInvoked call whenever the system handles a back navigation without impacting the normal back navigation flow. Android 16 additionally adds the finishAndRemoveTaskCallback() and moveTaskToBackCallback(). By registering these callbacks with the OnBackInvokedDispatcher, the system can trigger specific behaviors and play corresponding ahead-of-time animations when the back gesture is invoked. Two Android API releases in 2025 This preview is for the next major release of Android with a planned launch in Q2 of 2025 and we plan to have another release with new developer APIs in Q4. The Q2 major release will be the only release in 2025 to include planned behavior changes that could affect apps. The Q4 minor release will pick up feature updates, optimizations, and bug fixes; it will not include any app-impacting behavior changes. We'll continue to have quarterly Android releases. The Q1 and Q3 updates in-between the API releases will provide incremental updates to help ensure continuous quality. We’re actively working with our device partners to bring the Q2 release to as many devices as possible. There’s no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, and that will be tied to the major API level. How to get ready In addition to performing compatibility testing on the next major release, make sure that you're compiling your apps against the new SDK, and use the compatibility framework to enable targetSdkVersion-gated behavior changes as they become available for early testing. App compatibility The Android 16 Preview program runs from November 2024 until the final public release next year. At key development milestones, we'll deliver updates for your development and testing environments. Each update includes SDK tools, system images, emulators, API reference, and API diffs. We'll highlight critical APIs as they are ready to test in the preview program in blogs and on the Android 16 developer website. We’re targeting Late Q1 of 2025 for our Platform Stability milestone. At this milestone, we’ll deliver final SDK/NDK APIs and also final internal APIs and app-facing system behaviors. We’re expecting to reach Platform Stability in March 2025, and from that time you’ll have several months before the official release to do your final testing. Learn more in the release timeline details. 
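Going back to the predictive back additions above, here is a small sketch of registering an observer-priority callback from an Activity on Android 16; the logging body is just a placeholder:

import android.app.Activity
import android.util.Log
import android.window.OnBackInvokedCallback
import android.window.OnBackInvokedDispatcher

fun Activity.observeSystemBackNavigation() {
    val observer = OnBackInvokedCallback {
        // Invoked whenever the system handles a back navigation,
        // without changing the app's normal back behavior.
        Log.d("BackObserver", "System handled a back navigation")
    }
    onBackInvokedDispatcher.registerOnBackInvokedCallback(
        OnBackInvokedDispatcher.PRIORITY_SYSTEM_NAVIGATION_OBSERVER, // new in Android 16
        observer
    )
}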
Get started with Android 16 You can get started today with Developer Preview 2 by flashing a system image and updating the tools. If you are currently on Developer Preview 1, you will automatically get an over-the-air update to Developer Preview 2. We're looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in the final release. For the best development experience with Android 16, we recommend that you use the latest preview of the Android Studio Ladybug feature drop. Once you’re set up, here are some of the things you should do: Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page. Test your current app for compatibility, learn whether your app is affected by changes in Android 16, and install your app onto a device or emulator running Android 16 and extensively test it. We’ll update the preview system images and SDK regularly throughout the Android 16 release cycle. This preview release is for developers only and not intended for daily consumer use. We're making it available by manual download. Once you’ve manually installed a preview build, you’ll automatically get future updates over-the-air for all later previews and Betas. If you've already installed Android 15 QPR Beta 2 and would like to flash Android 16 Developer Preview 2, you can do so without first having to wipe your device. As we reach our Beta releases, we'll be inviting consumers to try Android 16 as well, and we'll open up enrollment for Android 16 in the Android Beta program at that time. For complete information, visit the Android 16 developer site.
Posted by Donovan McMurray – Developer Relations Engineer Instagram, the popular photo and video sharing social networking service, is constantly delighting users with a best-in-class camera experience. Recently, Instagram launched another improvement on Android with their Night Mode implementation. As devices and their cameras become more and more capable, users expect better quality images in a wider variety of settings. Whether it’s a night out with friends or the calmness right after you get your baby to fall asleep, the special moments users want to capture often don’t have ideal lighting conditions. Now, when Instagram users on Android take a photo in low light environments, they’ll see a moon icon that allows them to activate Night Mode for better image quality. This feature is currently available to users with any Pixel device from the 6 series and up, a Samsung Galaxy S24Ultra, or a Samsung Flip6 or Fold6, with more devices to follow. Leveraging Device-specific Camera Technologies Android enables apps to take advantage of device-specific camera features through the Camera Extensions API. The Extensions framework currently provides functionality like Night Mode for low-light image captures, Bokeh for applying portrait-style background blur, and Face Retouch for beauty filters. All of these features are implemented by the Original Equipment Manufacturers (OEMs) in order to maximize the quality of each feature on the hardware it's running on. Furthermore, exposing this OEM-specific functionality through the Extensions API allows developers to use a consistent implementation across all of these devices, getting the best of both worlds: implementations that are tuned to a wide-range of devices with a unified API surface. According to Nilesh Patel, a Software Engineer at Instagram, “for Meta’s billions of users, having to write custom code for each new device is simply not scalable. It would also add unnecessary app size when Meta users download the app. Hence our guideline is ‘write once to scale to billions’, favoring platform APIs.” More and more OEMs are supporting Extensions, too! There are already over 120 different devices that support the Camera Extensions, representing over 75 million monthly active users. There’s never been a better time to integrate Extensions into your Android app to give your users the best possible camera experience. Impact on Instagram The results of adding Night Mode to Instagram have been very positive for Instagram users. Jin Cui, a Partner Engineer on Instagram, said “Night Mode has increased the number of photos captured and shared with the Instagram camera, since the quality of the photos are now visibly better in low-light scenes.” Compare the following photos to see just how big of a difference Night Mode makes. The first photo is taken in Instagram with Night Mode off, the second photo is taken in Instagram with Night Mode on, and the third photo is taken with the native camera app with the device’s own low-light processing enabled. Ensuring Quality through Image Test Suite (ITS) The Android Camera Image Test Suite (ITS) is a framework for testing images from Android cameras. ITS tests configure the camera and capture shots to verify expected image data. These tests are functional and ensure advertised camera features work as expected. A tablet mounted on one side of the ITS box displays the test chart. The device under test is mounted on the opposite side of the ITS box. 
Devices must pass the ITS tests for any feature that the device claims to support for apps to use, including the tests we have for the Night Mode Camera Extension. Regular field-of-view (RFoV) ITS box Rev1b showing the device mounting brackets The Android Camera team faced the challenge of ensuring the Night Mode Camera Extension feature functioned consistently across all devices in a scalable way. This required creating a testing environment with very low light and a wide dynamic range. This configuration was necessary to simulate real-world lighting scenarios, such as a city at night with varying levels of brightness and shadow, or the atmospheric lighting of a restaurant. The first step to designing the test was to define the specific lighting conditions to simulate. Field testing with a light meter in various locations and lighting conditions was conducted to determine the target lux level. The goal was to ensure the camera could capture clear images in low-light conditions, which led to the establishment of 3 lux as the target lux level. The figure below shows various lighting conditions and their respective lux value. Evaluation of scenes of varying lighting conditions measured with a Light Meter The next step was to develop a test chart to accurately measure a wide dynamic range in a low light environment. The team developed and iterated on several test charts and arrived at the following test chart shown below. This chart arranges a grid of squares in varying shades of grey. A red outline defines the test area for cropping. This enables excluding darker external regions. The grid follows a Hilbert curve pattern to minimize abrupt light or dark transitions. The design allows for both quantitative measurements and simulation of a broad range of light conditions. Low Light test chart displayed on tablet in ITS box The test chart captures an image using the Night Mode Camera Extension in low light conditions. The image is used to evaluate the improvement in the shadows and midtones while ensuring the highlights aren’t saturated. This evaluation involves two criteria: ensure the average luma value of the six darkest boxes is at least 85, and ensure the average luma contrast between these boxes is at least 17. The figure below shows the test capture and chart results. Night Mode Camera Extension capture and test chart result By leveraging the existing ITS infrastructure, the Android Camera team was able to provide consistent, high quality Night Mode Camera Extension captures. This gives application developers the confidence to integrate and enable Night Mode captures for their users. It also allows OEMs to validate their implementations and ensure users get the best quality capture. How to Implement Night Mode with Camera Extensions Camera Extensions are available to apps built with Camera2 or CameraX. In this section, we’ll walk through each of the features Instagram implemented. The code examples will use CameraX, but you’ll find links to the Camera2 documentation at each step. Enabling Night Mode Extension Night Mode involves combining multiple exposures into a single still photo for better quality shots in low-light environments. So first, you’ll need to check for Night Mode availability, and tell the camera system to start a Camera Extension session. With CameraX, this is done with an ExtensionsManager instead of the standard CameraManager. private suspend fun setUpCamera() { // Obtain an instance of a process camera provider. 
The camera provider // provides access to the set of cameras associated with the device. // The camera obtained from the provider will be bound to the activity lifecycle. val cameraProvider = ProcessCameraProvider.getInstance(application).await() // Obtain an instance of the extensions manager. The extensions manager // enables a camera to use extension capabilities available on the device. val extensionsManager = ExtensionsManager.getInstanceAsync( application, cameraProvider).await() // Select the camera. val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA // Query if extension is available. Not all devices will support // extensions or might only support a subset of extensions. if (extensionsManager.isExtensionAvailable(cameraSelector, ExtensionMode.NIGHT)) { // Unbind all use cases before enabling different extension modes. try { cameraProvider.unbindAll() // Retrieve a night extension enabled camera selector val nightCameraSelector = extensionsManager.getExtensionEnabledCameraSelector( cameraSelector, ExtensionMode.NIGHT ) // Bind image capture and preview use cases with the extension enabled camera // selector. val imageCapture = ImageCapture.Builder().build() val preview = Preview.Builder().build() // Connect the preview to receive the surface the camera outputs the frames // to. This will allow displaying the camera frames in either a TextureView // or SurfaceView. The SurfaceProvider can be obtained from the PreviewView. preview.setSurfaceProvider(surfaceProvider) // Returns an instance of the camera bound to the lifecycle // Use this camera object to control various operations with the camera // Example: flash, zoom, focus metering etc. val camera = cameraProvider.bindToLifecycle( lifecycleOwner, nightCameraSelector, imageCapture, preview ) } catch (e: Exception) { Log.e(TAG, "Use case binding failed", e) } } else { // In the case where the extension isn't available, you should set up // CameraX normally with non-extension-enabled CameraSelector. } } To do this in Camera2, see the Create a CameraExtensionSession with the Camera2 Extensions API guide. Implementing the Progress Bar and PostView Image For an even more elevated user experience, you can provide feedback while the Night Mode capture is processing. In Android 14, we added callbacks for the progress and for post view, which is a temporary image capture before the Night Mode processing is complete. The below code shows how to use these callbacks in the takePicture() method. The actual implementation to update the UI is very app-dependent, so we’ll leave the actual UI updating code to you. // When setting up the ImageCapture.Builder, set postviewEnabled and // posviewResolutionSelector in order to get a PostView bitmap in the // onPostviewBitmapAvailable callback when takePicture() is called. 
val cameraInfo = cameraProvider.getCameraInfo(cameraSelector) val isPostviewSupported = ImageCapture.getImageCaptureCapabilities(cameraInfo).isPostviewSupported val postviewResolutionSelector = ResolutionSelector.Builder() .setAspectRatioStrategy(AspectRatioStrategy( AspectRatioStrategy.RATIO_16_9_FALLBACK_AUTO_STRATEGY, AspectRatioStrategy.FALLBACK_RULE_AUTO)) .setResolutionStrategy(ResolutionStrategy( previewSize, ResolutionStrategy.FALLBACK_RULE_CLOSEST_LOWER_THEN_HIGHER )) .build() imageCapture = ImageCapture.Builder() .setTargetAspectRatio(AspectRatio.RATIO_16_9) .setPostviewEnabled(isPostviewSupported) .setPostviewResolutionSelector(postviewResolutionSelector) .build() // When the Night Mode photo is being taken, define these additional callbacks // to implement PostView and a progress indicator in your app. imageCapture.takePicture( outputFileOptions, Dispatchers.Default.asExecutor(), object : ImageCapture.OnImageSavedCallback { override fun onPostviewBitmapAvailable(bitmap: Bitmap) { // Add the Bitmap to your UI as a placeholder while the final result is processed } override fun onCaptureProcessProgressed(progress: Int) { // Use the progress value to update your UI; values go from 0 to 100. } } ) To accomplish this in Camera2, see the CameraFragment.kt file in the Camera2Extensions sample app. Implementing the Moon Icon Indicator Another user-focused design touch is showing the moon icon to let the user know that a Night Mode capture will happen. It’s also a good idea to let the user tap the moon icon to disable Night Mode capture. There’s an upcoming API in Android 16 next year to let you know when the device is in a low-light environment. Here are the possible values for the Night Mode Indicator API: UNKNOWN The camera is unable to reliably detect the lighting conditions of the current scene to determine if a photo will benefit from a Night Mode Camera Extension capture. OFF The camera has detected lighting conditions that are sufficiently bright. Night Mode Camera Extension is available but may not be able to optimize the camera settings to take a higher quality photo. ON The camera has detected low-light conditions. It is recommended to use Night Mode Camera Extension to optimize the camera settings to take a high-quality photo in the dark. Next Steps Read more about Android’s camera APIs in the Camera2 guides and the CameraX guides. Once you’ve got the basics down, check out the Android Camera and Media Dev Center to take your camera app development to the next level. For more details on upcoming Android features, like the Night Mode Indicator API, get started with the Android 16 Preview program.
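Until the Night Mode Indicator API ships in Android 16, the exact key and constant names may still change, so the following is only a hedged sketch of how an app might map the three documented states to the moon icon behavior described above; the NightModeIndicator enum is a placeholder, not the final platform API:

import android.view.View
import android.widget.ImageView

// Placeholder for the UNKNOWN / OFF / ON values documented above; swap in the
// real constants once the Android 16 API is final.
enum class NightModeIndicator { UNKNOWN, OFF, ON }

fun updateMoonIcon(indicator: NightModeIndicator, moonIcon: ImageView) {
    when (indicator) {
        // Low light detected: surface the moon icon so the user can toggle
        // the Night Mode capture on or off.
        NightModeIndicator.ON -> moonIcon.visibility = View.VISIBLE
        // Bright scene or no reliable signal: keep the toggle out of the way.
        NightModeIndicator.OFF, NightModeIndicator.UNKNOWN -> moonIcon.visibility = View.GONE
    }
}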
Posted by Scott Nien – Software Engineer (scottnien@) Get ready to level up your Android camera apps! CameraX 1.4.0 just dropped with a load of awesome new features and improvements. We're talking expanded HDR capabilities, preview stabilization and the versatile effect framework, and a whole lot of cool stuff to explore. We will also explore how to seamlessly integrate CameraX with Jetpack Compose! Let's dive in and see how these enhancements can take your camera app to the next level. HDR preview and Ultra HDR High Dynamic Range (HDR) is a game-changer for photography, capturing a wider range of light and detail to create stunningly realistic images. With CameraX 1.3.0, we brought you HDR video recording capabilities, and now in 1.4.0, we're taking it even further! Get ready for HDR Preview and Ultra HDR. These exciting additions empower you to deliver an even richer visual experience to your users. HDR Preview This new feature allows you to enable HDR on Preview without needing to bind a VideoCapture use case. This is especially useful for apps that use a single preview stream for both showing preview on display and video recording with an OpenGL pipeline. To fully enable the HDR, you need to ensure your OpenGL pipeline is capable of processing the specific dynamic range format and then check the camera capability. See following code snippet as an example to enable HLG10 which is the baseline HDR standard that device makers must support on cameras with 10-bit output. // Declare your OpenGL pipeline supported dynamic range format. val openGLPipelineSupportedDynamicRange = setOf( DynamicRange.SDR, DynamicRange.HLG_10_BIT ) // Check camera dynamic range capabilities. val isHlg10Supported = cameraProvider.getCameraInfo(cameraSelector) .querySupportedDynamicRanges(openGLPipelineSupportedDynamicRange) .contains(DynamicRange.HLG_10_BIT) val preview = Preview.Builder().apply { if (isHlg10Supported) { setDynamicRange(DynamicRange.HLG_10_BIT) } } Ultra HDR Introducing Ultra HDR, a new format in Android 14 that lets users capture stunningly realistic photos with incredible dynamic range. And the best part? CameraX 1.4.0 makes it incredibly easy to add Ultra HDR capture to your app with just a few lines of code: val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA val cameraInfo = cameraProvider.getCameraInfo(cameraSelector) val isUltraHdrSupported = ImageCapture.getImageCaptureCapabilities(cameraInfo) .supportedOutputFormats .contains(ImageCapture.OUTPUT_FORMAT_JPEG_ULTRA_HDR) val imageCapture = ImageCapture.Builder().apply { if (isUltraHdrSupported) { setOutputFormat(ImageCapture.OUTPUT_FORMAT_JPEG_ULTRA_HDR) } }.build() Jetpack Compose support While this post focuses on 1.4.0, we're excited to announce the Jetpack Compose support in CameraX 1.5.0 alpha. We’re adding support for a Composable Viewfinder built on top of AndroidExternalSurface and AndroidEmbeddedExternalSurface. The CameraXViewfinder Composable hooks up a display surface to a CameraX Preview use case, handling the complexities of rotation, scaling and Surface lifecycle so you don’t need to. 
// in build.gradle
implementation ("androidx.camera:camera-compose:1.5.0-alpha03")

class PreviewViewModel : ViewModel() {
    private val _surfaceRequests = MutableStateFlow<SurfaceRequest?>(null)

    val surfaceRequests: StateFlow<SurfaceRequest?>
        get() = _surfaceRequests.asStateFlow()

    private fun produceSurfaceRequests(previewUseCase: Preview) {
        // Always publish new SurfaceRequests from Preview
        previewUseCase.setSurfaceProvider { newSurfaceRequest ->
            _surfaceRequests.value = newSurfaceRequest
        }
    }

    // ...
}

@Composable
fun MyCameraViewfinder(
    viewModel: PreviewViewModel,
    modifier: Modifier = Modifier
) {
    val currentSurfaceRequest: SurfaceRequest? by viewModel.surfaceRequests.collectAsState()

    currentSurfaceRequest?.let { surfaceRequest ->
        CameraXViewfinder(
            surfaceRequest = surfaceRequest,
            implementationMode = ImplementationMode.EXTERNAL, // Or EMBEDDED
            modifier = modifier
        )
    }
}

To learn more about unlocking the power of CameraX in Jetpack Compose, read Part 1 of the Getting Started with CameraX in Jetpack Compose blog series.

Kotlin-friendly APIs
CameraX is getting even more Kotlin-friendly! In 1.4.0, we've introduced two new suspend functions to streamline camera initialization and image capture.

// CameraX initialization
val cameraProvider = ProcessCameraProvider.awaitInstance()

val imageProxy = imageCapture.takePicture()
// Processing imageProxy
imageProxy.close()

Preview Stabilization and Mirror mode
Preview Stabilization
Preview stabilization mode was added in Android 13 to enable the stabilization on all non-RAW streams, including previews and MediaCodec input surfaces. Compared to the previous video stabilization mode, which may have inconsistent FoV (Field of View) between the preview and recorded video, this new preview stabilization mode ensures consistency and thus provides a better user experience. For apps that record the preview directly for video recording, this mode is also the only way to enable stabilization. Follow the code below to enable preview stabilization. Please note that once preview stabilization is turned on, it is not only applied to the Preview but also to the VideoCapture if it is bound as well.

val isPreviewStabilizationSupported =
    Preview.getPreviewCapabilities(cameraProvider.getCameraInfo(cameraSelector))
        .isStabilizationSupported

val preview = Preview.Builder().apply {
    if (isPreviewStabilizationSupported) {
        setPreviewStabilizationEnabled(true)
    }
}.build()

MirrorMode
While CameraX 1.3.0 introduced mirror mode for VideoCapture, we've now brought this handy feature to Preview in 1.4.0. This is especially useful for devices with outer displays, allowing you to create a more natural selfie experience when using the rear camera. To enable the mirror mode, simply call the Preview.Builder.setMirrorMode APIs (a short sketch appears at the end of this post). This feature is supported for Android 13 and above.

Real-time Effect
CameraX 1.3.0 introduced the CameraEffect framework, giving you the power to customize your camera output with OpenGL. Now, in 1.4.0, we're taking it a step further. In addition to applying your own custom effects, you can now leverage a set of pre-built effects provided by CameraX and Media3, making it easier than ever to enhance your app's camera features.

Overlay Effect
The new camera-effects artifact aims to provide ready-to-use effect implementations, starting with the OverlayEffect. This effect lets you draw overlays on top of camera frames using the familiar Canvas API. The following sample code shows how to detect the QR code and draw the shape of the QR code once it is detected.
By default, drawing is performed in surface frame coordinates. But what if you need to use camera sensor coordinates? No problem! OverlayEffect provides the Frame#getSensorToBufferTransform function, allowing you to apply the necessary transformation matrix to your overlayCanvas. In this example, we use CameraX's MLKit Vision APIs (MlKitAnalyzer) and specify COORDINATE_SYSTEM_SENSOR to obtain QR code corner points in sensor coordinates. This ensures accurate overlay placement regardless of device orientation or screen aspect ratio.

// in build.gradle
implementation ("androidx.camera:camera-effects:1.4.1")
implementation ("androidx.camera:camera-mlkit-vision:1.4.1")

var qrcodePoints: Array<Point>? = null
var qrcodeTimestamp = 0L

val qrcodeBoxEffect = OverlayEffect(
    PREVIEW /* applied on the preview only */,
    5 /* hold multiple frames in the queue so we can match analysis result with preview frame */,
    Handler(Looper.getMainLooper()),
    {}
)

fun initCamera() {
    qrcodeBoxEffect.setOnDrawListener { frame ->
        if (frame.timestamp != qrcodeTimestamp) {
            // Do not change the drawing if the frame doesn’t match the analysis
            // result.
            return@setOnDrawListener true
        }
        frame.overlayCanvas.drawColor(Color.TRANSPARENT, PorterDuff.Mode.CLEAR)
        qrcodePoints?.let {
            // Using sensor coordinates to draw.
            frame.overlayCanvas.setMatrix(frame.sensorToBufferTransform)
            val path = android.graphics.Path().apply {
                it.forEachIndexed { index, point ->
                    if (index == 0) {
                        moveTo(point.x.toFloat(), point.y.toFloat())
                    } else {
                        lineTo(point.x.toFloat(), point.y.toFloat())
                    }
                }
                lineTo(it[0].x.toFloat(), it[0].y.toFloat())
            }
            frame.overlayCanvas.drawPath(path, paint)
        }
        true
    }

    val imageAnalysis = ImageAnalysis.Builder()
        .build()
        .apply {
            setAnalyzer(executor, MlKitAnalyzer(
                listOf(barcodeScanner!!),
                COORDINATE_SYSTEM_SENSOR,
                executor
            ) { result ->
                val barcodes = result.getValue(barcodeScanner!!)
                qrcodePoints = barcodes?.takeIf { it.size > 0 }?.get(0)?.cornerPoints
                // track the timestamp of the analysis result and release the
                // preview frame.
                qrcodeTimestamp = result.timestamp
                qrcodeBoxEffect.drawFrameAsync(qrcodeTimestamp)
            })
        }

    val useCaseGroup = UseCaseGroup.Builder()
        .addUseCase(preview)
        .addUseCase(imageAnalysis)
        .addEffect(qrcodeBoxEffect)
        .build()

    cameraProvider.bindToLifecycle(
        lifecycleOwner, cameraSelector, useCaseGroup)
}

Media3 Effect
Want to add stunning camera effects to your CameraX app? Now you can tap into the power of Media3's rich effects framework! This exciting integration allows you to apply Media3 effects to your CameraX output, including Preview, VideoCapture, and ImageCapture. This means you can easily enhance your app with a wide range of professional-grade effects, from blurs and color filters to transitions and more. To get started, simply use the new androidx.camera.media3:media3-effect artifact. Here's a quick example of how to apply a grayscale filter to your camera output:

// in build.gradle
implementation ("androidx.camera.media3:media3-effect:1.0.0-alpha01")
implementation ("androidx.media3:media3-effect:1.5.0")

import androidx.camera.media3.effect.Media3Effect

val media3Effect = Media3Effect(
    requireContext(),
    PREVIEW or VIDEO_CAPTURE or IMAGE_CAPTURE,
    mainThreadExecutor(),
    {}
)
// use grayscale effect
media3Effect.setEffects(listOf(RgbFilter.createGrayscaleFilter()))
cameraController.setEffects(setOf(media3Effect)) // or using UseCaseGroup API

Here is what the effect looks like:

Screen Flash
Taking selfies in low light just got easier with CameraX 1.4.0!
This release introduces a powerful new feature: screen flash. Instead of relying on a traditional LED flash which most selfie cameras don’t have, screen flash cleverly utilizes your phone's display. By momentarily turning the screen bright white, it provides a burst of illumination that helps capture clear and vibrant selfies even in challenging lighting conditions. Integrating screen flash into your CameraX app is flexible and straightforward. You have two main options: 1. Implement the ScreenFlash interface: This gives you full control over the screen flash behavior. You can customize the color, intensity, duration, and any other aspect of the flash. This is ideal if you need a highly tailored solution. 2. Use the built-in implementation: For a quick and easy solution, leverage the pre-built screen flash functionality in ScreenFlashView or PreviewView. This implementation handles all the heavy lifting for you. If you're already using PreviewView in your app, enabling screen flash is incredibly simple. Just enable it directly on the PreviewView instance. If you need more control or aren't using PreviewView, you can use ScreenFlashView directly. Here's a code example demonstrating how to enable screen flash: // case 1: PreviewView + CameraX core API. previewView.setScreenFlashWindow(activity.getWindow()); imageCapture.screenFlash = previewView.screenFlash imageCapture.setFlashMode(ImageCapture.FLASH_MODE_SCREEN) // case 2: PreviewView + CameraController previewView.setScreenFlashWindow(activity.getWindow()); cameraController.setImageCaptureFlashMode(ImageCapture.FLASH_MODE_SCREEN); // case 3 : use ScreenFlashView screenFlashView.setScreenFlashWindow(activity.getWindow()); imageCapture.setScreenFlash(screenFlashView.getScreenFlash()); imageCapture.setFlashMode(ImageCapture.FLASH_MODE_SCREEN); Camera Extensions new features Camera Extensions APIs aim to help apps to access the cutting-edge capabilities previously available only on built-in camera apps. And the ecosystem is growing rapidly! In 2024, we've seen major players like Pixel, Samsung, Xiaomi, Oppo, OnePlus, Vivo, and Honor all embrace Camera Extensions, particularly for Night Mode and Bokeh Mode. CameraX 1.4.0 takes this even further by adding support for brand-new Android 15 Camera Extensions features, including: Postview: Provides a preview of the captured image almost instantly before the long-exposure shots are completed Capture Process Progress: Displays a progress indicator so users know how long capturing and processing will take, improving the experience for features like Night Mode Extensions Strength: Allows users to fine-tune the intensity of the applied effect Below is an example of the improved UX that uses postview and capture process progress features on Samsung S24 Ultra. Interested to know how this can be implemented? See the sample code below: val extensionsCameraSelector = extensionsManager .getExtensionEnabledCameraSelector(DEFAULT_BACK_CAMERA, extensionMode) val isPostviewSupported = ImageCapture.getImageCaptureCapabilities( cameraProvider.getCameraInfo(extensionsCameraSelector) ).isPostviewSupported val imageCapture = ImageCapture.Builder().apply { setPostviewEnabled(isPostviewSupported) }.build() imageCapture.takePicture(outputfileOptions, executor, object : OnImageSavedCallback { override fun onImageSaved(outputFileResults: OutputFileResults) { // final image saved. } override fun onPostviewBitmapAvailable(bitmap: Bitmap) { // Postview bitmap is available. 
} override fun onCaptureProcessProgressed(progress: Int) { // capture process progress update } } ) Important: If your app ran into the CameraX Extensions issue on Pixel 9 series devices, please use CameraX 1.4.1 instead. This release fixes a critical issue that prevented Night Mode from working correctly with takePicture. What's Next We hope you enjoy this new release. Our mission is to make camera development a joy, removing the friction and pain points so you can focus on innovation. With CameraX, you can easily harness the power of Android's camera capabilities and build truly amazing app experiences. Have questions or want to connect with the CameraX team? Join the CameraX developers discussion group or file a bug. We can’t wait to see what you create!
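As promised in the mirror mode section above, here is a minimal sketch of enabling it on Preview; the MirrorMode constant comes from androidx.camera.core, and MIRROR_MODE_ON is assumed to be the right choice for the outer-display, rear-camera selfie case, so adjust it for your own UX:

import androidx.camera.core.MirrorMode
import androidx.camera.core.Preview

// Mirror the preview stream so a selfie framed with the rear camera on an
// outer display looks the way users expect a selfie to look.
val mirroredPreview = Preview.Builder()
    .setMirrorMode(MirrorMode.MIRROR_MODE_ON)
    .build()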
Posted by Yacine Rezgui – Developer Relations Engineer, Steven Moreland – Staff Software Engineer Android is evolving to deliver even faster, more performant experiences. One key improvement is the adoption of a 16 KB memory page size. This change enables the operating system to manage memory more efficiently, leading to noticeable performance gains (5-10%) in both apps and games. We provided an in-depth technical explanation and highlighted the performance improvements in Adding 16 KB Page Size to Android. To help you test your app on 16 KB devices, this functionality is available as a developer option on Google Pixel 8 and 9 devices, and Samsung devices will soon offer similar support, as well as Xiaomi, vivo, and other Android OEMs. To ensure compatibility with 16 KB devices, apps that utilize native code, either directly or through libraries or SDKs, might require rebuilding. However, the transition is significantly easier than the previous shift from 32-bit to 64-bit architecture. This article will guide you through the necessary steps to prepare your apps for the upcoming devices. The next generation of devices is on its way, with the first models supporting 16 KB page sizes expected to arrive in a couple of years. Getting ready for 16 KB: SDK developers If you develop your own SDKs and libraries, we encourage you to update them to be 16 KB page size compatible and test them on 16 KB devices as soon as possible. This will give app developers ample time to incorporate the necessary changes. Registering with Play SDK Console is a great way to ensure you receive advanced notices like these in the future and in a timely manner. Getting ready for 16 KB: app developers with no native code Apps written in and with dependencies entirely in Kotlin or the Java programming languages will work as-is! Getting ready for 16 KB: app developers with native code To check if your app has native code, you can utilize tools like APK Analyzer in Android Studio. However, the only way to ensure app compatibility is to test. Rebuild your app To ensure your app works on devices with a 16 KB page size, follow these steps: 1. Upgrade your tools: Start by upgrading to Android Gradle Plugin (AGP) 8.5.1 or higher. These updated tools incorporate the necessary 16 KB page size configuration for your App Bundle and the APKs generated from it using bundletool. 2. Align your native code: If your app includes native code, use NDK version r28 or higher, or rebuild it with 16 KB page size alignment. You should also ensure that your native code does not rely on or hardcode the value of PAGE_SIZE. 3. Update SDKs and libraries: Confirm that all SDKs and libraries used in your app are compatible with 16 KB page size. If necessary, contact the SDK or library developers for updated versions. Test your app in 16 KB mode To make sure your application does not assume the page size to be 4 KB anywhere, test it with a 16 KB page size emulator or virtual device in addition to how you have been testing (with a 4 KB page size). This helps identify and resolve any compatibility issues from the move to 16 KB page sizes. You can also test on physical devices with the developer option available on Pixel 8, 8a, and 8 Pro starting with the Android 15 QPR1 and Pixel 9, 9 Pro, 9 Pro XL in the Android 15 QPR2 Beta 2, with more devices on the way. The Future is Faster and More Efficient The move to 16 KB page size benefits the Android ecosystem. 
It unlocks performance improvements, paves the way for future innovations, and provides users with smoother and richer app experiences. We'll continue to provide updates and resources to help you through this transition. Start preparing your apps today to ensure you're ready for the future of Android!
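While auditing for hardcoded page-size assumptions, it can help to log the actual value at runtime; here is a minimal Kotlin sketch using android.system.Os (available since API 21):

import android.system.Os
import android.system.OsConstants
import android.util.Log

// Query the page size from the OS instead of assuming 4096 bytes; on a
// 16 KB device or emulator this returns 16384.
fun logPageSize() {
    val pageSize = Os.sysconf(OsConstants._SC_PAGESIZE)
    Log.d("PageSize", "System page size: $pageSize bytes")
}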
Posted by Robbie McLachlan – Developer Marketing In a year filled with iconic sports moments—from the Olympic and Paralympic Games in Paris to the UEFA Euro Cup in Germany—our celebration of app and game businesses continues with nine new #WeArePlay stories. These founders are building sports apps and games that unite players, fans, and communities—from immersive sports simulations to apps that motivate runners with rewards like vouchers and free gifts. Let’s take a look at some of my favourites. Immerse yourself into your favourite sport with Hao, Yukun, and Mingming's simulator games Hao, Yukun and Mingming, co-founders of Feamber GamesChengdu, China Hao always dreamed of creating video games. After studying computer science, he joined a gaming company where he met Yukun and Mingming. Their shared passion for game design and long conversations about graphics, movie scenes, and nostalgic childhood games inspired them to start Feamber Games. Specializing in realistic 3D sports simulations like pool and archery, they’ve added competitive elements to enhance the experience. Recently, they’ve expanded into immersive games that let players build business empires and manage hotels. Now, the trio is focused on growing their global audience. Anna’s boxing fitness app is a knockout, with tailored training and on-demand classes Anna, founder of BoxxLondon, UK Anna discovered her love for boxing at 11, staying dedicated to non-contact training throughout adulthood. After a career in accounting and becoming a mother, she struggled to attend classes, inspiring her to create Boxx – an app that brings boxing training to any location. Collaborating with fitness instructors, she developed personalized sessions, hybrid workouts, expert-led on-demand classes, and progress tracking. With hands-free guided audio and community features coming soon, Anna is regularly reviewing feedback to find innovative approaches to improve boxers’ experiences. Get active and track your progress with Yi Hern, Dana, and Pearl's running app Yi Hern, co-founder of JomRunCyberjaya, Malaysia After creating a successful augmented reality game, childhood friends Yi Hern, Dana, and Pearl decided to inspire people to stay active. Combining Yi Hern's engineering skills, Dana's visual arts expertise, and Pearl's scientific background, they developed JomRun – Let’s Run. The app allows runners to track their progress, earn rewards like vouchers and free gifts, and easily join marathons. With teams in Malaysia and Singapore, and plans to introduce new features, the trio is gearing up to expand across Southeast Asia. Ohjun and Jaeho’s volleyball game get high scores from players worldwide Ohjun and Jaeho, co-founders of SUNCYANSeoul, South Korea Ohjun and Jaeho, childhood friends from an online game development community, combined their love for game building and volleyball to create The Spike - Volleyball Story. After a successful test release on Google Play, the game gained popularity in South Korea, inspiring them to improve it and reach a global audience. They added new features like story and tournament modes, plus a complete UX overhaul, all to recreate the excitement of real-life volleyball. Now, they’re focused on creating even more thrilling sports games. How useful did you find this blog post? ★ ★ ★ ★ ★
Posted by Ben Weiss – Developer Relations Engineer, and Lauren Darcey – Senior Engineering Manager, Reddit Reddit is one of the world’s largest internet forums, bringing together countless communities looking for entertainment, answers to everyday questions, and so much more. Recently, the team optimized its Android app to reduce startup times and improve rendering performance using Baseline Profiles. But the team didn’t stop there. Reddit app developers also enabled Android’s R8 compiler in full mode to maximize bytecode optimization and used Jetpack Compose to rewrite legacy UI, improving both the user and developer experience. Maximizing optimization using Baseline Profiles and R8 full mode The Reddit Android app has undergone countless performance upgrades over the years. Reddit developers have long since cleared the list of quick and easy tasks for optimization, but the team still wants to improve the app, bringing its performance to the next level and ensuring it runs well on every Android device. “Reddit is looking for any strategic improvement to its app performance so we can make the app experience better for new and existing users,” said Rob McWhinnie, a staff engineer at Reddit. “Baseline Profiles fit this use case well since they are based on critical user journeys.” Reddit’s platform engineering team used screen-specific performance metrics and observability to help its feature teams improve key metrics like time to interactive and scroll performance. Baseline Profiles were a natural fit to help improve these metrics and the user experience behind them, so the team integrated them to make tracking and optimizing easier, using insights from geodata and device classes. The team has built Baseline Profiles for five critical user journeys so far: scrolling the home feed, logging in, launching the full-screen video player, navigating between subreddits and scrolling their feeds, and using the chat feature. Simplifying Baseline Profile management in their continuous integration processes enabled Reddit to remove the need for manual maintenance and streamline optimization. Now, Baseline Profiles are automatically regenerated for each release. Enabling Android’s R8 optimization compiler in full mode was another area Reddit engineers worked on. The team had already used R8 in compatibility mode, but some of Reddit’s legacy code would’ve made implementing R8’s more aggressive features difficult. The team worked through the app’s existing technical debt first, making it easier to integrate R8's full mode capabilities and maximize Android app optimization. Improvements with Baseline Profiles and R8 full mode Reddit's Baseline Profiles and R8 full mode optimization led to multiple performance improvements across the app, with early benchmarks of the first Baseline Profile for feeds showing a 51% median startup time improvement. While responses from Redditors initially confirmed large startup improvements, Baseline Profile optimizations for less frequent journeys, like logging in, saw fewer user reports. Baseline Profiles for the home feed delivered a 36% reduction in the 95th percentile of frozen frames. Baseline Profiles for the community feed also delivered strong screen load and scroll performance improvements. At the 90th percentile, screen Time To Interactive improved by 12% and time to first draw decreased by 22%. Reddit’s scrolling performance also saw a 12% reduction in P90 slow frames. The upgrade to R8 full mode led to an increase in Google Play average ratings.
The proportion of global positive ratings (fours and fives) increased by four percent, with a notable decrease in negative reports. R8 full mode also reduced total application-not-responding errors by almost 30%. Overall, the app saw cold start improvements of 20%, scroll performance improvements of 15%, and widespread enhancements in lower-end devices and emerging markets. Google Play vitals saw improvements in slow cold starts, a 10% reduction in excessive frozen frames, and a 30% reduction in excessive slow frames. Nearly 75% of screens, refactored using Jetpack Compose, experienced performance gains. Further optimizations using Jetpack Compose Reddit adopted Jetpack Compose years ago and has since rebuilt much of its UI with the toolkit, benefitting both the app and its design system. According to the Reddit team, Google’s ongoing support for Compose’s stability and performance made it a strong fit as Reddit scaled its app, allowing for more efficient feature development and better performance. One major example is Reddit’s feed rewrite using Compose, which resulted in more maintainable code and an improved developer experience. Compose enabled teams to focus on future work instead of being bogged down by legacy code, allowing them to fix bugs quickly and improve overall app stability. “The R8 and Compose upgrades were important to deploy in relative isolation and stabilize,” said Drew Heavner, a staff engineer at Reddit. “We feel like we got great outcomes from this work for all teams adopting our modern tech stack and Compose.” After upgrading to the September 2024 release of Compose, the latest iteration, Reddit saw significant performance gains across the board. Cold start times improved by 13%, excessive slow frames decreased by 25%, and frozen frames dropped by 10%. Low- and mid-tier devices saw even greater improvements where app start times improved by up to 40%, especially in markets with lower-performing devices. Screens using Reddit’s modern Compose-powered design stack showed substantial improvements in both slow and frozen frame rates. For example, the home feed saw a 23% reduction in frozen frames, and scrolling performance visibly improved according to internal reviews. These updates were well received among users and reflected a 17% increase in the app’s Google Play average rating. Up-leveling UX through optimization Adding value to an app isn’t just about introducing new features—it's about refining and optimizing the ones users already love. Investing in performance improvements made Reddit’s key features faster and more reliable, enhancing the overall user experience. These optimizations not only improved app startup and runtime performance but also simplified development workflows, increasing both developer satisfaction and app stability. The focus on high-traffic features, such as feeds, has demonstrated the power of performance tuning, with substantial gains in user engagement and satisfaction. As the app has become more efficient, both users and developers have benefitted from a cleaner codebase and faster performance. Looking ahead, Reddit plans to extend the usage of Baseline Profiles to other critical user journeys, including Reddit’s post and comment experiences, ensuring even more users benefit from these ongoing performance improvements. Reddit’s platform engineers also want to continue collaborating with feature teams to integrate performance improvements across the app. 
These efforts will ensure that as the app evolves, it remains a smooth, fast, and engaging experience for all Redditors. “Adding new features isn’t the only way to add value to an experience for users,” said Lauren Darcey, a senior engineering manager at Reddit. “When you find a feature that users love and engage with, taking the time to refine and optimize it can be the difference between a good and a great experience for your users.” Get started Improve your app performance using Baseline Profiles, R8 full mode, and Jetpack Compose.
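If you want to try the same approach, a Baseline Profile for a critical user journey is typically generated from an instrumented test; the sketch below assumes the androidx.benchmark BaselineProfileRule API and a hypothetical package name, so treat it as a starting point rather than Reddit's actual setup:

import androidx.benchmark.macro.junit4.BaselineProfileRule
import org.junit.Rule
import org.junit.Test

class FeedBaselineProfile {
    @get:Rule
    val baselineProfileRule = BaselineProfileRule()

    @Test
    fun generateFeedProfile() {
        // "com.example.app" is a placeholder package name.
        baselineProfileRule.collect(packageName = "com.example.app") {
            // Record the critical user journey: cold start, then scroll the feed.
            pressHome()
            startActivityAndWait()
            device.swipe(
                device.displayWidth / 2, device.displayHeight * 3 / 4,
                device.displayWidth / 2, device.displayHeight / 4, 10
            )
        }
    }
}

R8 full mode, by contrast, is the default in recent Android Gradle Plugin versions; older projects can enable it explicitly in gradle.properties with android.enableR8.fullMode=true.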
Posted by Matthew McCullough – VP of Product Management, Android Developer Today, we're launching the developer preview of the Android XR SDK - a comprehensive development kit for Android XR. It's the newest platform in the Android family built for extended reality (XR) headsets (and glasses in the future!). You’ll have endless opportunities to create and develop experiences that blend digital and physical worlds, using familiar Android APIs, tools and open standards created for XR. All of this means: if you build for Android, you're already building for XR! Read on to get started with development for headsets. With the Android XR SDK you can: Break free of traditional screens by spatializing your app with rich 3D elements, spatial panels, and spatial audio that bring a natural sense of depth, scale, and tangible realism Transport your users to a fantastical virtual space, or engage with them in their own homes or workplaces Take advantage of natural, multimodal interaction capabilities such as hands and eyes "We believe Android XR is a game-changer for storytelling. It allows us to merge narrative depth with advanced interactive features, creating an immersive world where audiences can engage with characters and stories like never before." - Jed Weintrob, Partner at 30 Ninjas Your apps on Android XR The Android XR SDK is built on the existing foundations of Android app development. We're also bringing the Play Store to Android XR, where most Android apps will automatically be made available without any additional development effort. Users will be able to discover and use your existing apps in a whole new dimension. To differentiate your existing Compose app, you may opt-in, to automatically spatialize Material Design (M3) components and Compose for adaptive layouts in XR. Apps optimized for large screens take advantage of sizing capabilities in Android XR The Android XR SDK has something for every developer: Building with Kotlin and Android Studio? You'll feel right at home with the Jetpack XR SDK, a suite of familiar libraries and tools to simplify development and accelerate productivity. Using Unity’s real-time 3D engine? The Android XR Extensions for Unity provides the packages you need to build or port powerful, immersive experiences. Developing on the web? Use WebXR to add immersive experiences supported on Chrome. Working with native languages like C/C++? Android XR supports the OpenXR 1.1 standard. Creating with Jetpack XR SDK The Jetpack XR SDK includes new Jetpack libraries purpose-built for XR. The highlights include: Jetpack Compose for XR - enables you to declaratively create spatial UI layouts and spatialize your existing 2D UI built with Compose or Views Material Design for XR - includes components and layouts that automatically adapt for XR Jetpack SceneCore - provides the foundation for building custom 3D experiences ARCore for Jetpack XR - brings powerful perception capabilities for your app to understand the real world “With Android XR, we can bring Calm directly into your world, capturing the senses and allowing you to experience it in a deeper and more transformative way. 
By collaborating closely with the Android XR team on this cutting-edge technology, we’ve reimagined how to create a sense of depth and space, resulting in a level of immersion that instantly helps you feel more present, focused, and relaxed.” - Dan Szeto, Vice President at Calm Studios Kickstart your Jetpack XR SDK journey with the Hello XR Sample, a straightforward introduction to the essential features of Jetpack Compose for XR. Learn more about developing with the Jetpack XR SDK. The JetNews sample app is an Android large-screen app adapted for Android XR We're also introducing new tools and capabilities to the latest preview of Android Studio Meerkat to boost productivity and simplify your creation process for Android XR. Use the new Android XR Emulator to create a virtualized XR device for deploying and testing apps built with the Jetpack XR SDK. The emulator includes XR-specific controls for using a keyboard and mouse to navigate an emulated virtual space. Use the Android XR template to get a jump-start on creating an app with Jetpack Compose for XR. Use the updated Layout Inspector to inspect and debug spatialized UI components created with Jetpack Compose for XR. Learn more about the XR enabled tools in Android Studio and the Android XR Emulator. The Android XR Emulator in Android Studio has new controls to explore 3D space within the emulator Creating with Unity We've partnered with Unity to natively integrate their real-time 3D engine with Android XR starting with Unity 6. Unity is introducing the Unity OpenXR: Android XR package for bringing your multi-platform XR experiences to Android XR. Unity is adding Android XR support to these popular XR packages: OpenXR AR Foundation XR Interaction Toolkit XR Hands XR Composition Layers We're also rolling out the Android XR Extensions for Unity with samples and innovative features such as mouse interaction profile, environment blend mode, personalized hand mesh, object tracking, and more. "Having already brought Demeo to most commercially available platforms, it's safe to say we were impressed with the process of adapting the game to run on Android XR." – Johan Gastrin, CTO at Resolution Games Check out our getting started guide for unity and Unity’s blog post to learn more. Vacation Simulator has been updated to Unity 6 and supports Android XR Creating for the Web Chrome on Android XR supports the WebXR standard. If you're building for the web, you can enhance existing sites with 3D content or build new immersive experiences. You can also use full-featured frameworks like three.js, A-Frame, or PlayCanvas to create virtual worlds, or you can use a simpler API like model-viewer so your users can visualize products in an e-commerce site. And because WebXR is an open standard, the same experiences you build for mobile AR devices or dedicated VR hardware seamlessly work on Android XR. Learn more about developing with WebXR. Chrome on Android XR supports WebXR features including depth maps allowing virtual objects to interact with real world surfaces Built on Open Standards We’re continuing the Android tradition of building with open standards. At the heart of the Android perception stack is OpenXR - a high-performance, cross-platform API focused on portability. 
Android XR is compliant with OpenXR 1.1, and we’re also expanding the OpenXR standard with leading-edge vendor extensions to introduce powerful world-sensing capabilities such as:
AI-powered hand mesh, designed to adapt to the shape and size of hands to better represent the diversity of your users
Detailed depth textures that allow real world objects to occlude virtual content
Sophisticated light estimation, for lighting your digital content to match real-world lighting conditions
New trackables that let you bring real world objects like laptops, phones, keyboards, and mice into a virtual environment
The Android XR SDK also supports open standard formats such as glTF 2.0 for 3D models and OpenEXR for high-dynamic-range environments. Building the future together We couldn't be more proud or excited to be announcing the Developer Preview of the Android XR SDK. We’re releasing this developer preview because we want to build the future of XR together with you. We welcome your feedback and can’t wait to work with you and build your ideas and suggestions into the platform. Your passion, expertise, and bold ideas are absolutely essential as we continue to build Android XR. We look forward to interacting with your apps, reimagined to take advantage of the unique spatial capabilities of Android XR, using familiar tools like Android Studio and Jetpack Compose. We’re eager to visit the amazing 3D worlds you build using powerful tools and open standards like Unity and OpenXR. Most of all, we can’t wait to go on this journey with all of you who make up the amazing community of Android and Unity developers. To get started creating and developing for Android XR, check out developer.android.com/develop/xr where you will find all of the tools, libraries, and resources you need to create with the Android XR SDK! If you are interested in getting access to prerelease hardware and collaborating with the Android XR team, express your interest to participate in an Android XR Developer Bootcamp in 2025 by filling out this form.
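To make the spatial UI ideas above concrete, here is a small Jetpack Compose for XR sketch; the artifact and package names are taken from the androidx.xr.compose developer preview and may shift as the SDK evolves, so treat the exact imports, modifiers, and parameters as assumptions to verify against the current docs:

import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.unit.dp
import androidx.xr.compose.spatial.Subspace
import androidx.xr.compose.subspace.SpatialPanel
import androidx.xr.compose.subspace.layout.SubspaceModifier
import androidx.xr.compose.subspace.layout.height
import androidx.xr.compose.subspace.layout.movable
import androidx.xr.compose.subspace.layout.resizable
import androidx.xr.compose.subspace.layout.width

@Composable
fun SpatializedGreeting() {
    // Subspace opens a 3D volume; SpatialPanel hosts ordinary 2D Compose UI
    // inside it as a floating panel the user can move and resize.
    Subspace {
        SpatialPanel(
            modifier = SubspaceModifier
                .width(1024.dp)
                .height(640.dp)
                .movable()
                .resizable()
        ) {
            Text("Hello, Android XR")
        }
    }
}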
Posted by Sam Bright – VP & GM, Google Play + Developer Ecosystem Hello everyone, Thank you for making this year another incredible one! Your innovative experiences continue to inspire us and bring joy to billions. We recently celebrated some of your amazing work in our Best of 2024 awards, showcasing moments of delight across phones, large-screen devices, watches, and PCs. This year, we shared our vision for the next phase of Play where Play leans into being more than a store and becomes a dynamic platform that connects people with your content, when and where they need it most. To help people discover all you have to offer, truly engage with your experiences, and keep them coming back for more, we’re making Play: A destination for discovery: Helping people find their new favorite apps and games and the content within The best place for gaming: So people can play more of the games they love across more surfaces, with exclusive rewards available only through Play Points, and Go beyond the store: Where people can get relevant content from installed apps directly on their home screen through our new Collections experience Check out the video above, or keep reading for some of the key features we've launched this year to help you succeed at every stage of your app’s lifecycle. New tools and features built in 2024 Launch with confidence Launching a new app or update is a critical moment and we want to make this process as smooth and successful as possible. Pre-review checks help you catch policy and compatibility issues before launch. The new quality panel gives you a centralized view of your app's quality so you can proactively find and address issues like crashes and ANRs, and see recommendations related to user experience. And with SDK Console, we’re connecting you with SDK owners who can alert you in Android Studio and Play Console when new versions may address quality issues or help your app or game comply with Play policies. Features like quality panel help you proactively find and address issues before you launch, helping you have a smooth and successful experience Accelerate your growth and deepen your engagement with users We've made Google Play even more content-forward with a visually engaging design that helps people discover the best of what you have to offer, wherever they are. We integrated Gemini models to make it easier for everyone to find what they're looking for with AI-generated app review summaries, FAQs, and app highlights, providing key information at a glance. Seamless app discovery helps users enjoy amazing experiences across their devices. Now, when people search for apps on their phone, they'll easily discover and install relevant apps for their TV, watch, and more. Enhanced custom store listings give you even more ways to tailor your content. And now, with the ability to segment by search keyword, you can connect with users who are actively searching for the specific benefits your app offers. Play Console will even give you keyword suggestions. Deep links help you create seamless web-to-app journeys to take users directly to the content they want, right inside your app. And now, we’ve made it even easier for you to manage and experiment with these deep links in Play Console, where you can make quick changes without waiting to publish a new app release. App highlights is one of our latest AI-powered features making it easier for users to discover their next favorite apps. 
Optimize revenue with Google Play Commerce We're continuing to make it easier and more convenient for over 2.5 billion users in over 190 markets to have seamless and secure purchase experiences. This year, we've helped over half a billion people be ready to make purchases by proactively encouraging them to set up payment and authentication methods in advance. With new secure biometric authentication options like fingerprint and facial recognition, checkout is now faster and more secure. Our extensive payment method library, which includes over 300 local forms of payment in more than 65 markets, continues to grow. This year, we added CashApp (US), Blik Banking (Poland), Pix (Brazil), and MoMo (Vietnam). Expanded payment options give more ways for users to pay for content. Parents with Google Family setup can now approve their child's in-app purchases from any OS, not just on Android devices. And new subscription platform improvements, like flexible payment plans for long-term subscriptions, give users more options throughout the purchase experience, which helps drive higher conversions and new subscribers. Flexible payment plans give users more options throughout the purchase experience, helping drive higher conversions and new subscribers for your app Reinforcing trust, safety, and security We continue to invest in more ways to protect users, your business, and the ecosystem. This includes actively combating bad actors who try to deceive users or spread malware, and giving you tools to combat abuse. Google Play Protect scans 200 billion apps daily. When it finds a potentially harmful app, we let people know and may even disable particularly dangerous apps. Easier automatic app updates help ensure users have the latest features and improved security. Users with limited Wi-Fi access have the option to get their app updates over mobile data, and within their data budgets. We also launched a new tool that empowers you to prompt users for timely updates. Play Integrity API helps you detect suspicious activity so you can decide how to respond to abuse, like fraud, cheating, or data theft. Now, Play integrity verdicts are faster, more resilient, and more privacy-friendly. These are just the highlights. To see how we're continuously improving the experience, check out our quarterly roundup of programs and launches on The Latest. Investing in our app and game community We’re continuing to help app and game businesses of all sizes reach their full potential. This year, we’ve doubled the size of our global Indie Games Accelerator program and selected 60 game studios from around the world to participate in a 10-week program of masterclasses, workshops, and access to industry experts. Ten studios from across Latin America were selected to receive a share of $2 million in equity-free funding and hands-on guidance from the Google Play team as part of our Indie Games Fund. 500 aspiring developers in Indonesia participated in our Google Play x Unity Game Developer Training Program to build top-notch skills in game design, development, and monetization to kick-start their game development careers. And the ChangGoo initiative in Korea has nurtured a thriving startup ecosystem, supporting over 500 startups and attracting over KRW 147.6 billion in investments. And with another year of #WeArePlay, we shared and celebrated the stories of 300 app and game businesses from all over the world. Take a look back at just a few of the inspiring founders we’ve featured. 
Looking ahead I’m excited about the future of Google Play as a dynamic platform that connects users with your amazing content, wherever they are. Next year, we're going to continue helping you maximize your investments on Play by: Leaning into content-rich and interactive experiences for apps both within and beyond the Play store, Building on our gaming destination to make it even more personalized, engaging, and part of daily routines, and, Simplifying the payment and checkout experience for your apps and content. Thanks again for your continued partnership and the innovation you’ve put into your apps and games. From our team to yours, happy holidays and best wishes for an amazing 2025! Sam Bright VP & GM, Google Play + Developer Ecosystem
Posted by Mike Taylor (Privacy Sandbox) and Mihai Cîrlănaru (Web on Android) The User-Agent string has been reduced in Chrome on Desktop and Chrome on Android platforms since Chrome 107. Beginning in Android 16, the default User-Agent string in Android WebView will be similarly reduced. Updated User-Agent string The default, reduced WebView User-Agent string is as follows: Mozilla/5.0 (Linux; Android 10; K; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/125.0.0.0 Mobile Safari/537.36 In the reduced string, the OS, CPU, and Build information are reduced to the static "Linux; Android 10; K" string, and the minor/build/patch version information is reduced to "0.0.0". The rest of the default User-Agent remains unchanged (and is unchanging). How can I detect WebView via the User-Agent string? Sites can continue to look for the wv token in the User-Agent string, unless an application has decided to override the User-Agent string. Does WebView support User-Agent Client Hints? Android WebView has supported User-Agent Client Hints since version 116, but only for applications that send the default User-Agent string. Will a custom WebView User-Agent string be affected? The ability to set a custom User-Agent via setUserAgentString() won’t be affected, and applications that choose to do so won’t send the reduced User-Agent string.
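For apps that want to inspect or override the string, here is a minimal Kotlin sketch; the custom suffix shown is purely illustrative:

import android.content.Context
import android.util.Log
import android.webkit.WebSettings
import android.webkit.WebView

fun configureUserAgent(context: Context, webView: WebView) {
    // The default (reduced) User-Agent still contains the "wv" token.
    val defaultUa = WebSettings.getDefaultUserAgent(context)
    Log.d("WebViewUA", "Default User-Agent: $defaultUa")

    // Overriding opts the app out of the reduced default entirely; the
    // "MyApp/1.2" suffix is just a hypothetical example.
    webView.settings.userAgentString = "$defaultUa MyApp/1.2"
}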
Posted by Mindy Brooks – Senior Director, Android Platform App developers play a vital role in shaping how people of all ages interact with technology. Whether your app content is specifically designed for kids or simply attracts their attention, there is an added responsibility to ensure a safe and trusted experience. Google is here to support you in that work. Today, we’re sharing some important reminders and updates on how we empower developers to build high-quality, engaging, and age-appropriate apps across the Android ecosystem. Help Determine Android User Age with Digital IDs Understanding a user's age range can be critical for providing minors with safer and more appropriate app experiences, as well as complying with local age-related regulations. Android’s new Credential Manager API, now in Beta for Digital IDs, addresses this challenge by helping developers verify a user’s age with a digital ID saved to any digital wallet application. Importantly, Android’s Credential Manager was built with both safety and privacy at its core – it minimizes data exposure by only sharing information necessary with developers and asks the user for explicit permission to share an age signal. We encourage you to try out the Beta API for yourself and look forward to hearing your feedback. While digital IDs are still in their early days, we’re continuing to work with governments on further adoption to strengthen this solution. Android is also exploring how the API can support a range of age assurance methods, helping developers to safely confirm the age of their users, especially for users that can't or don't want to use a digital ID. Please keep in mind that ID-based solutions are just one tool that developers can use to determine age and the best approach will depend on your app. Shield Young Users from Inappropriate Content on Google Play As part of our continued commitment to creating a safe and positive environment for children across the Play Store, we recently launched the Restrict Declared Minors (RDM) setting within the Google Play Console that allows developers to designate their app as inappropriate for minors. When enabled, Google Play users with declared ages below 18 will not be able to download or purchase the app nor will they be able to continue subscriptions or make new purchases if the app is already installed. Beyond Play’s broader kids safety policies, this new setting gives developers an additional tool to proactively prevent minors from accessing content that may be unsuitable for them. It also empowers developers to take a more proactive role in ensuring their apps reach the appropriate audience. As a reminder, this feature is simply one tool of many to keep your apps safe and we are continuing to improve it based on early feedback. Developers remain solely responsible for compliance with relevant laws and regulations. You can learn more about opting in to RDM here. Develop Teacher Approved Apps and Games on Google Play Great content for kids can take many forms, whether that’s sparking curiosity, helping kids learn, or just plain fun. Google Play’s Teacher Approved program highlights high-quality apps that are reviewed and rated by teachers and child development specialists. Our team of teachers and experts across the world review and rate apps on factors like age-appropriateness, quality of experience, enrichment, and delight. 
For added transparency, we include information in the app listing about why the app was rated highly to help parents determine if the app is right for their child. Apps in the program also must meet strict privacy and security requirements. Building a teacher-approved app not only helps raise app quality for kids – it can also increase your reach and engagement. All apps in this program are eligible to appear and be featured on Google Play's Kids tab where families go to easily discover quality apps and games. Please visit Google Play Academy for more information about how to design high-quality apps for kids. Stay Updated With Google Play’s Families Policies Google Play policies provide additional protections for children and families. Our Families policies require that apps and games targeted to children have appropriate content, show ads suitable for children, and meet other requirements including ones related to personally identifiable information. We frequently update and strengthen these policies to ensure that Google Play remains a place where families can find safe and high-quality content for their children. This includes our new Child Safety Standards Policy for social and dating apps that goes into effect in January. Developers can showcase compliance with Play’s Families policies with a special badge on the Google Play Data safety section. This is another great way that you can better help families find apps that meet their needs, while supporting Play’s commitment to provide users more transparency and control over their data. To display the badge, please visit the "Security practices" section of your Data Safety form in your Google Play Developer Console. Additional Resources Protecting kids online is a responsibility we all share and we hope these reminders are helpful as you prepare for 2025. We’re grateful for your partnership in making Android and Google Play fantastic platforms for delightful, high-quality content for kids and families. For more resources: Learn more about Android’s Credential Manager API. Watch our interactive Play Academy courses on complying with Play’s Families policies, including SDK requirements, selecting your target age and content settings, and more. Review the updated Child Safety Standards Policy ahead of the January deadline.
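As a starting point for the Credential Manager age-signal flow described above, here is a hedged Kotlin sketch; it assumes the androidx.credentials Digital Credential API surface that is currently in Beta (the option class may require an opt-in annotation), and requestJson is a placeholder for the age query your verifier builds:

import android.app.Activity
import androidx.credentials.CredentialManager
import androidx.credentials.DigitalCredential
import androidx.credentials.GetCredentialRequest
import androidx.credentials.GetDigitalCredentialOption

suspend fun requestAgeSignal(activity: Activity, requestJson: String): String? {
    val credentialManager = CredentialManager.create(activity)

    // requestJson is a placeholder for the digital-ID query (for example, an
    // age-over-18 claim) built per the Digital Credential documentation.
    val request = GetCredentialRequest(
        credentialOptions = listOf(GetDigitalCredentialOption(requestJson))
    )

    // The system UI asks the user for explicit permission before any age
    // signal is shared with the app.
    val response = credentialManager.getCredential(activity, request)
    return (response.credential as? DigitalCredential)?.credentialJson
}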