Google Online Security Blog
The latest news and insights from Google on security and safety on the Internet
Posted by Dirk Göhmann In 2024, our Vulnerability Reward Program confirmed the ongoing value of engaging with the security research community to make Google and its products safer. This was evident as we awarded just shy of $12 million to over 600 researchers based in countries around the globe across all of our programs. Vulnerability Reward Program 2024 in Numbers You can learn about who’s reporting to the Vulnerability Reward Program via our Leaderboard – and find out more about our youngest security researchers who’ve recently joined the ranks of Google bug hunters. VRP Highlights in 2024 In 2024 we made a series of changes and improvements coming to our vulnerability reward programs and related initiatives: The Google VRP revamped its reward structure, bumping rewards up to a maximum of $151,515, the Mobile VRP is now offering up to $300,000 for critical vulnerabilities in top-tier apps, Cloud VRP has a top-tier award of up $151,515, and Chrome awards now peak at $250,000 (see the below section on Chrome for details). We rolled out InternetCTF – to get rewarded, discover novel code execution vulnerabilities in open source and provide Tsunami plugin patches for them. The Abuse VRP saw a 40% YoY increase in payouts – we received over 250 valid bugs targeting abuse and misuse issues in Google products, resulting in over $290,000 in rewards. To improve the payment process for rewards going to bug hunters, we introduced Bugcrowd as an additional payment option on bughunters.google.com alongside the existing standard Google payment option. We hosted two editions of bugSWAT for training, skill sharing, and, of course, some live hacking – in August, we had 16 bug hunters in attendance in Las Vegas, and in October, as part of our annual security conference ESCAL8 in Malaga, Spain, we welcomed 40 of our top researchers. Between these two events, our bug hunters were rewarded $370,000 (and plenty of swag). We doubled down on our commitment to support the next generation of security engineers by hosting four init.g workshops (Las Vegas, São Paulo, Paris, and Malaga). Follow the Google VRP channel on X to stay tuned on future events. More detailed updates on selected programs are shared in the following sections. Android and Google Devices In 2024, the Android and Google Devices Security Reward Program and the Google Mobile Vulnerability Reward Program, both part of the broader Google Bug Hunters program, continued their mission to fortify the Android ecosystem, achieving new heights in both impact and severity. We awarded over $3.3 million in rewards to researchers who demonstrated exceptional skill in uncovering critical vulnerabilities within Android and Google mobile applications. The above numbers mark a significant change compared to previous years. Although we saw an 8% decrease in the total number of submissions, there was a 2% increase in the number of critical and high vulnerabilities. In other words, fewer researchers are submitting fewer, but more impactful bugs, and are citing the improved security posture of the Android operating system as the central challenge. This showcases the program's sustained success in hardening Android. This year, we had a heightened focus on Android Automotive OS and WearOS, bringing actual automotive devices to multiple live hacking events and conferences. At ESCAL8, we hosted a live-hacking challenge focused on Pixel devices, resulting in over $75,000 in rewards in one weekend, and the discovery of several memory safety vulnerabilities. To facilitate learning, we launched a new Android hacking course in collaboration with external security researchers, focused on mobile app security, designed for newcomers and veterans alike. Stay tuned for more. We extend our deepest gratitude to the dedicated researchers who make the Android ecosystem safer. We're proud to work with you! Special thanks to Zinuo Han (@ele7enxxh) for their expertise in Bluetooth security, blunt (@blunt_qian) for holding the record for the most valid reports submitted to the Google Play Security Reward Program, and WANG,YONG (@ThomasKing2014) for groundbreaking research on rooting Android devices with kernel MTE enabled. We also appreciate all researchers who participated in last year's bugSWAT event in Málaga. Your contributions are invaluable! Chrome Chrome did some remodeling in 2024 as we updated our reward amounts and structure to incentivize deeper research. For example, we increased our maximum reward for a single issue to $250,000 for demonstrating RCE in the browser or other non-sandboxed process, and more if done directly without requiring a renderer compromise. In 2024, UAF mitigation MiraclePtr was fully launched across all platforms, and a year after the initial launch, MiraclePtr-protected bugs are no longer being considered exploitable security bugs. In tandem, we increased the MiraclePtr Bypass Reward to $250,128. Between April and November, we also launched the first and second iterations of the V8 Sandbox Bypass Rewards as part of the progression towards the V8 sandbox, eventually becoming a security boundary in Chrome. We received 337 reports of unique, valid security bugs in Chrome during 2024, and awarded 137 Chrome VRP researchers $3.4 million in total. The highest single reward of 2024 was $100,115 and was awarded to Mickey for their report of a MiraclePtr Bypass after MiraclePtr was initially enabled across most platforms in Chrome M115 in 2023. We rounded out the year by announcing the top 20 Chrome VRP researchers for 2024, all of whom were gifted new Chrome VRP swag, featuring our new Chrome VRP mascot, Bug. Cloud VRP The Cloud VRP launched in October as a Cloud-focused vulnerability reward program dedicated to Google Cloud products and services. As part of the launch, we also updated our product tiering and improved our reward structure to better align our reports with their impact on Google Cloud. This resulted in over 150 Google Cloud products coming under the top two reward tiers, enabling better rewards for our Cloud researchers and a more secure cloud. Since its launch, Google Cloud VRP triaged over 400 reports and filed over 200 unique security vulnerabilities for Google Cloud products and services leading to over $500,000 in researcher rewards. Our highlight last year was launching at the bugSWAT event in Málaga where we got to meet many of our amazing researchers who make our program so successful! The overwhelming positive feedback from the researcher community continues to propel us to mature Google Cloud VRP further this year. Stay tuned for some exciting announcements! Generative AI We’re celebrating an exciting first year of AI bug bounties. We received over 150 bug reports – over $55,000 in rewards so far – with one-in-six leading to key improvements. We also ran a bugSWAT live-hacking event targeting LLM products and received 35 reports, totaling more than $87,000 – including issues like “Hacking Google Bard - From Prompt Injection to Data Exfiltration” and “We Hacked Google A.I. for $50,000”. Keep an eye on Gen AI in 2025 as we focus on expanding scope and sharing additional ways for our researcher community to contribute. Looking Forward to 2025 In 2025, we will be celebrating 15 years of VRP at Google, during which we have remained fully committed to fostering collaboration, innovation, and transparency with the security community, and will continue to do so in the future. Our goal remains to stay ahead of emerging threats, adapt to evolving technologies, and continue to strengthen the security posture of Google’s products and services. We want to send a huge thank you to our bug hunter community for helping us make Google products and platforms more safe and secure for our users around the world – and invite researchers not yet engaged with the Vulnerability Reward Program to join us in our mission to keep Google safe! Thank you to Dirk Göhmann, Amy Ressler, Eduardo Vela, Jan Keller, Krzysztof Kotowicz, Martin Straka, Michael Cote, Mike Antares, Sri Tulasiram, and Tony Mendez. Tip: Want to be informed of new developments and events around our Vulnerability Reward Program? Follow the Google VRP channel on X to stay in the loop and be sure to check out the Security Engineering blog, which covers topics ranging from VRP updates to security practices and vulnerability descriptions (30 posts in 2024)!
Posted by Lyubov Farafonova, Product Manager, Phone by Google; Alberto Pastor Nieto, Sr. Product Manager Google Messages and RCS Spam and Abuse Google has been at the forefront of protecting users from the ever-growing threat of scams and fraud with cutting-edge technologies and security expertise for years. In 2024, scammers used increasingly sophisticated tactics and generative AI-powered tools to steal more than $1 trillion from mobile consumers globally, according to the Global Anti-Scam Alliance. And with the majority of scams now delivered through phone calls and text messages, we’ve been focused on making Android’s safeguards even more intelligent with powerful Google AI to help keep your financial information and data safe. Today, we’re launching two new industry-leading AI-powered scam detection features for calls and text messages, designed to protect users from increasingly complex and damaging scams. These features specifically target conversational scams, which can often appear initially harmless before evolving into harmful situations. To enhance our detection capabilities, we partnered with financial institutions around the world to better understand the latest advanced and most common scams their customers are facing. For example, users are experiencing more conversational text scams that begin innocently, but gradually manipulate victims into sharing sensitive data, handing over funds, or switching to other messaging apps. And more phone calling scammers are using spoofing techniques to hide their real numbers and pretend to be trusted companies. Traditional spam protections are focused on protecting users before the conversation starts, and are less effective against these latest tactics from scammers that turn dangerous mid-conversation and use social engineering techniques. To better protect users, we invested in new, intelligent AI models capable of detecting suspicious patterns and delivering real-time warnings over the course of a conversation, all while prioritizing user privacy. Scam Detection for messages enhancements to existing Spam Protection in Google Messages that strengthen defenses against job and delivery scams, which are continuing to roll out to users. We’re now introducing Scam Detection to detect a wider range of fraudulent activities. Scam Detection in Google Messages uses powerful Google AI to proactively address conversational scams by providing real-time detection even after initial messages are received. When the on-device AI detects a suspicious pattern in SMS, MMS, and RCS messages, users will now get a message warning of a likely scam with an option to dismiss or report and block the sender. processing remaining on-device. Your conversations remain private to you; if you choose to report a conversation to help reduce widespread spam, only sender details and recent messages with that sender are shared with Google and carriers. You can turn off Spam Protection, which includes Scam Detection, in your Google Messages at any time. Scam Detection in Google Messages is launching in English first in the U.S., U.K. and Canada and will expand to more countries soon. Scam Detection for calls reported receiving at least one scam call per day in 2024. To combat the rise of sophisticated conversational scams that deceive victims over the course of a phone call, we introduced Scam Detection late last year to U.S.-based English-speaking Phone by Google public beta users on Pixel phones. We use AI models processed on-device to analyze conversations in real-time and warn users of potential scams. If a caller, for example, tries to get you to provide payment via gift cards to complete a delivery, Scam Detection will alert you through audio and haptic notifications and display a warning on your phone that the call may be a scam. During our limited beta, we analyzed calls with Gemini Nano, Google’s built-in, on-device foundation model, on Pixel 9 devices and used smaller, robust on-device machine-learning models for Pixel 6+ users. Our testing showed that Gemini Nano outperformed other models, so as a result, we're currently expanding the availability of the beta to bring the most capable Scam Detection to all English-speaking Pixel 9+ users in the U.S. research and a Scam Detection beta user survey, these types of alerts have already helped people be more cautious on the phone, detect suspicious activity, and avoid falling victim to conversational scams. Keeping Android users safe with the power of Google AI We're committed to keeping Android users safe, and that means constantly evolving our defenses against increasingly sophisticated scams and fraud. Our investment in intelligent protection is having real-world impact for billions of users. Leviathan Security Group, a cybersecurity firm, conducted a funded evaluation of fraud protection features on a number of smartphones and found that Android smartphones, led by the Pixel 9 Pro, scored highest for built-in security features and anti-fraud efficacy1. With AI-powered innovations like Scam Detection in Messages and Phone by Google, we're giving you more tools to stay one step ahead of bad actors. We're constantly working with our partners across the Android ecosystem to help bring new security features to even more users. Together, we’re always working to keep you safe on Android. Notes Based on third-party research funded by Google LLC in Feb 2025 comparing the Pixel 9 Pro, iPhone 16 Pro, Samsung S24+ and Xiaomi 14 Ultra. Evaluation based on no-cost smartphone features enabled by default. Some features may not be available in all countries. ↩
Posted by Alex Rebert, Security Foundations, Ben Laurie, Research, Murali Vijayaraghavan, Research and Alex Richardson, Silicon For decades, memory safety vulnerabilities have been at the center of various security incidents across the industry, eroding trust in technology and costing billions. Traditional approaches, like code auditing, fuzzing, and exploit mitigations – while helpful – haven't been enough to stem the tide, while incurring an increasingly high cost. In this blog post, we are calling for a fundamental shift: a collective commitment to finally eliminate this class of vulnerabilities, anchored on secure-by-design practices – not just for ourselves but for the generations that follow. The shift we are calling for is reinforced by a recent ACM article calling to standardize memory safety we took part in releasing with academic and industry partners. It's a recognition that the lack of memory safety is no longer a niche technical problem but a societal one, impacting everything from national security to personal privacy. The standardization opportunity Over the past decade, a confluence of secure-by-design advancements has matured to the point of practical, widespread deployment. This includes memory-safe languages, now including high-performance ones such as Rust, as well as safer language subsets like Safe Buffers for C++. These tools are already proving effective. In Android for example, the increasing adoption of memory-safe languages like Kotlin and Rust in new code has driven a significant reduction in vulnerabilities. Looking forward, we're also seeing exciting and promising developments in hardware. Technologies like ARM's Memory Tagging Extension (MTE) and the Capability Hardware Enhanced RISC Instructions (CHERI) architecture offer a complementary defense, particularly for existing code. While these advancements are encouraging, achieving comprehensive memory safety across the entire software industry requires more than just individual technological progress: we need to create the right environment and accountability for their widespread adoption. Standardization is key to this. To facilitate standardization, we suggest establishing a common framework for specifying and objectively assessing memory safety assurances; doing so will lay the foundation for creating a market in which vendors are incentivized to invest in memory safety. Customers will be empowered to recognize, demand, and reward safety. This framework will provide governments and businesses with the clarity to specify memory safety requirements, driving the procurement of more secure systems. The framework we are proposing would complement existing efforts by defining specific, measurable criteria for achieving different levels of memory safety assurance across the industry. In this way, policymakers will gain the technical foundation to craft effective policy initiatives and incentives promoting memory safety. A blueprint for a memory-safe future We know there's more than one way of solving this problem, and we are ourselves investing in several. Importantly, our vision for achieving memory safety through standardization focuses on defining the desired outcomes rather than locking ourselves into specific technologies. To translate this vision into an effective standard, we need a framework that will: Foster innovation and support diverse approaches: The standard should focus on the security properties we want to achieve (e.g., freedom from spatial and temporal safety violations) rather than mandating specific implementation details. The framework should therefore be technology-neutral, allowing vendors to choose the best approach for their products and requirements. This encourages innovation and allows software and hardware manufacturers to adopt the best solutions as they emerge. Tailor memory safety requirements based on need: The framework should establish different levels of safety assurance, akin to SLSA levels, recognizing that different applications have different security needs and cost constraints. Similarly, we likely need distinct guidance for developing new systems and improving existing codebases. For instance, we probably do not need every single piece of code to be formally proven. This allows for tailored security, ensuring appropriate levels of memory safety for various contexts. Enable objective assessment: The framework should define clear criteria and potentially metrics for assessing memory safety and compliance with a given level of assurance. The goal would be to objectively compare the memory safety assurance of different software components or systems, much like we assess energy efficiency today. This will move us beyond subjective claims and towards objective and comparable security properties across products. Be practical and actionable: Alongside the technology-neutral framework, we need best practices for existing technologies. The framework should provide guidance on how to effectively leverage specific technologies to meet the standards. This includes answering questions such as when and to what extent unsafe code is acceptable within larger software systems, and guidelines on structuring such unsafe dependencies to support compositional reasoning about safety. Google's commitment At Google, we're not just advocating for standardization and a memory-safe future, we're actively working to build it. We are collaborating with industry and academic partners to develop potential standards, and our joint authorship of the recent CACM call-to-action marks an important first step in this process. In addition, as outlined in our Secure by Design whitepaper and in our memory safety strategy, we are deeply committed to building security into the foundation of our products and services. This commitment is also reflected in our internal efforts. We are prioritizing memory-safe languages, and have already seen significant reductions in vulnerabilities by adopting languages like Rust in combination with existing, wide-spread usage of Java, Kotlin, and Go where performance constraints permit. We recognize that a complete transition to those languages will take time. That's why we're also investing in techniques to improve the safety of our existing C++ codebase by design, such as deploying hardened libc++. Let's build a memory-safe future together This effort isn't about picking winners or dictating solutions. It's about creating a level playing field, empowering informed decision-making, and driving a virtuous cycle of security improvement. It's about enabling a future where: Developers and vendors can confidently build more secure systems, knowing their efforts can be objectively assessed. Businesses can procure memory-safe products with assurance, reducing their risk and protecting their customers. Governments can effectively protect critical infrastructure and incentivize the adoption of secure-by-design practices. Consumers are empowered to make decisions about the services they rely on and the devices they use with confidence – knowing the security of each option was assessed against a common framework. The journey towards memory safety requires a collective commitment to standardization. We need to build a future where memory safety is not an afterthought but a foundational principle, a future where the next generation inherits a digital world that is secure by design. Acknowledgments We'd like to thank our CACM article co-authors for their invaluable contributions: Robert N. M. Watson, John Baldwin, Tony Chen, David Chisnall, Jessica Clarke, Brooks Davis, Nathaniel Wesley Filardo, Brett Gutstein, Graeme Jenkinson, Christoph Kern, Alfredo Mazzinghi, Simon W. Moore, Peter G. Neumann, Hamed Okhravi, Peter Sewell, Laurence Tratt, Hugo Vincent, and Konrad Witaszczyk, as well as many others.
Posted by Bethel Otuteye and Khawaja Shams (Android Security and Privacy Team), and Ron Aquino (Play Trust and Safety) Android and Google Play comprise a vibrant ecosystem with billions of users around the globe and millions of helpful apps. Keeping this ecosystem safe for users and developers remains our top priority. However, like any flourishing ecosystem, it also attracts its share of bad actors. That’s why every year, we continue to invest in more ways to protect our community and fight bad actors, so users can trust the apps they download from Google Play and developers can build thriving businesses. Last year, those investments included AI-powered threat detection, stronger privacy policies, supercharged developer tools, new industry-wide alliances, and more. As a result, we prevented 2.36 million policy-violating apps from being published on Google Play and banned more than 158,000 bad developer accounts that attempted to publish harmful apps. Google’s advanced AI: helping make Google Play a safer place To keep out bad actors, we have always used a combination of human security experts and the latest threat-detection technology. In 2024, we used Google’s advanced AI to improve our systems’ ability to proactively identify malware, enabling us to detect and block bad apps more effectively. It also helps us streamline review processes for developers with a proven track record of policy compliance. Today, over 92% of our human reviews for harmful apps are AI-assisted, allowing us to take quicker and more accurate action to help prevent harmful apps from becoming available on Google Play. That’s enabled us to stop more bad apps than ever from reaching users through the Play Store, protecting users from harmful or malicious apps before they can cause any damage. Working with developers to enhance security and privacy on Google Play In 2024, we prevented 1.3 million apps from getting excessive or unnecessary access to sensitive user data. We also required apps to be more transparent about how they handle user information by launching new developer requirements and a new “Data deletion” option for apps that support user accounts and data collection. This helps users manage their app data and understand the app’s deletion practices, making it easier for Play users to delete data collected from third-party apps. We also worked to ensure that apps use the strongest and most up-to-date privacy and security capabilities Android has to offer. Every new version of Android introduces new security and privacy features, and we encourage developers to embrace these advancements as soon as possible. As a result of partnering closely with developers, over 91% of app installs on the Google Play Store now use the latest protections of Android 13 or newer. Safeguarding apps from scams and fraud is an ongoing battle for developers. The Play Integrity API allows developers to check if their apps have been tampered with or are running in potentially compromised environments, helping them to prevent abuse like fraud, bots, cheating, and data theft. Play Integrity API and Play’s automatic protection helps developers ensure that users are using the official Play version of their app with the latest security updates. Apps using Play integrity features are seeing 80% lower usage from unverified and untrusted sources on average. Google Play SDK Index. This tool offers insights and data to help developers make more informed decisions about the safety of an SDK. Last year, in addition to adding 80 SDKs to the index, we also worked closely with SDK and app developers to address potential SDK security and privacy issues, helping to build safer and more secure apps for Google Play. Google Play’s multi-layered protections against bad apps To create a trusted experience for everyone on Google Play, we use our SAFE principles as a guide, incorporating multi-layered protections that are always evolving to help keep Google Play safe. These protections start with the developers themselves, who play a crucial role in building secure apps. We provide developers with best-in-class tools, best practices, and on-demand training resources for building safe, high-quality apps. Every app undergoes rigorous review and testing, with only approved apps allowed to appear in the Play Store. Before a user downloads an app from Play, users can explore its user reviews, ratings, and Data safety section on Google Play to help them make an informed decision. And once installed, Google Play Protect, Android’s built-in security protection, helps to shield their Android device by continuously scanning for malicious app behavior. Enhancing Google Play Protect to help keep users safe on Android While the Play Store offers best-in-class security, we know it’s not the only place users download Android apps – so it’s important that we also defend Android users from more generalized mobile threats. To do this in an open ecosystem, we’ve invested in sophisticated, real-time defenses that protect against scams, malware, and abusive apps. These intelligent security measures help to keep users, user data, and devices safe, even if apps are installed from various sources with varying levels of security. Google Play Protect automatically scans every app on Android devices with Google Play Services, no matter the download source. This built-in protection, enabled by default, provides crucial security against malware and unwanted software. Google Play Protect scans more than 200 billion apps daily and performs real-time scanning at the code-level on novel apps to combat emerging and hidden threats, like polymorphic malware. In 2024, Google Play Protect’s real-time scanning identified more than 13 million new malicious apps from outside Google Play1. Google Play Protect is always evolving to combat new threats and protect users from harmful apps that can lead to scams and fraud. Here are some of the new improvements that are now available globally on Android devices with Google Play Services: Reminder notifications in Chrome on Android to re-enable Google Play Protect: According to our research, more than 95 percent of app installations from major malware families that exploit sensitive permissions highly correlated to financial fraud came from Internet-sideloading sources like web browsers, messaging apps, or file managers. To help users stay protected when browsing the web, Chrome will now display a reminder notification to re-enable Google Play Protect if it has been turned off. Additional protection against social engineering attacks: Scammers may manipulate users into disabling Play Protect during calls to download malicious Internet-sideloaded apps. To prevent this, the Play Protect app scanning toggle is now temporarily disabled during phone or video calls. This safeguard is enabled by default during traditional phone calls as well as during voice and video calls in popular third-party apps. Automatically revoking app permissions for potentially dangerous apps: Since Android 11, we’ve taken a proactive approach to data privacy by automatically resetting permissions for apps that users haven't used in a while. This ensures apps can only access the data they truly need, and users can always grant permissions back if necessary. To further enhance security, Play Protect now automatically revokes permissions for potentially harmful apps, limiting their access to sensitive data like storage, photos, and camera. Users can restore app permissions at any time, with a confirmation step for added security. enhanced fraud protection pilot analyzes and automatically blocks the installation of apps that may use sensitive permissions frequently abused for financial fraud when the user attempts to install the app from an Internet-sideloading source (web browsers, messaging apps, or file managers). Building on the success of our initial pilot in partnership with the Cyber Security Agency of Singapore (CSA), additional enhanced fraud protection pilots are now active in nine regions – Brazil, Hong Kong, India, Kenya, Nigeria, Philippines, South Africa, Thailand, and Vietnam. In 2024, Google Play Protect’s enhanced fraud protection pilots have shielded 10 million devices from over 36 million risky installation attempts, encompassing over 200,000 unique apps. By piloting these new protections, we can proactively combat emerging threats and refine our solutions to thwart scammers and their increasingly sophisticated fraud attempts. We look forward to continuing to partner with governments, ecosystem partners, and other stakeholders to improve user protections. App badging to help users find apps they can trust at a glance on Google Play new badge to help Google Play users discover VPN apps that take extra steps to demonstrate their strong commitment to security. We allow developers who adhere to Play safety and security guidelines and have passed an additional independent Mobile Application Security Assessment (MASA) to display a dedicated badge in the Play Store to highlight their increased commitment to safety. Collaborating to advance app security standards App Defense Alliance, in partnership with fellow steering committee members Microsoft and Meta, recently launched the ADA Application Security Assessment (ASA) v1.0, a new standard to help developers build more secure mobile, web, and cloud applications. This standard provides clear guidance on protecting sensitive data, defending against cyberattacks, and ultimately, strengthening user trust. This marks a significant step forward in establishing industry-wide security best practices for application development. All developers are encouraged to review and comply with the new mobile security standard. You’ll see this standard in action for all carrier apps pre-installed on future Pixel phone models. Looking ahead This year, we’ll continue to protect the Android and Google Play ecosystem, building on these tools and resources in response to user and developer feedback and the changing landscape. As always, we’ll keep empowering developers to build safer apps more easily, streamline their policy experience, and protect their businesses and users from bad actors. 1 Based on Google Play Protect 2024 internal data.
Posted by the Agentic AI Security Team at Google DeepMind Modern AI systems, like Gemini, are more capable than ever, helping retrieve data and perform actions on behalf of users. However, data from external sources present new security challenges if untrusted sources are available to execute instructions on AI systems. Attackers can take advantage of this by hiding malicious instructions in data that are likely to be retrieved by the AI system, to manipulate its behavior. This type of attack is commonly referred to as an "indirect prompt injection," a term first coined by Kai Greshake and the NVIDIA team. To mitigate the risk posed by this class of attacks, we are actively deploying defenses within our AI systems along with measurement and monitoring tools. One of these tools is a robust evaluation framework we have developed to automatically red-team an AI system’s vulnerability to indirect prompt injection attacks. We will take you through our threat model, before describing three attack techniques we have implemented in our evaluation framework. Threat model and evaluation framework Our threat model concentrates on an attacker using indirect prompt injection to exfiltrate sensitive information, as illustrated above. The evaluation framework tests this by creating a hypothetical scenario, in which an AI agent can send and retrieve emails on behalf of the user. The agent is presented with a fictitious conversation history in which the user references private information such as their passport or social security number. Each conversation ends with a request by the user to summarize their last email, and the retrieved email in context. The contents of this email are controlled by the attacker, who tries to manipulate the agent into sending the sensitive information in the conversation history to an attacker-controlled email address. The attack is successful if the agent executes the malicious prompt contained in the email, resulting in the unauthorized disclosure of sensitive information. The attack fails if the agent only follows user instructions and provides a simple summary of the email. Automated red-teaming Crafting successful indirect prompt injections requires an iterative process of refinement based on observed responses. To automate this process, we have developed a red-team framework consisting of several optimization-based attacks that generate prompt injections (in the example above this would be different versions of the malicious email). These optimization-based attacks are designed to be as strong as possible; weak attacks do little to inform us of the susceptibility of an AI system to indirect prompt injections. Once these prompt injections have been constructed, we measure the resulting attack success rate on a diverse set of conversation histories. Because the attacker has no prior knowledge of the conversation history, to achieve a high attack success rate the prompt injection must be capable of extracting sensitive user information contained in any potential conversation contained in the prompt, making this a harder task than eliciting generic unaligned responses from the AI system. The attacks in our framework include: Actor Critic: This attack uses an attacker-controlled model to generate suggestions for prompt injections. These are passed to the AI system under attack, which returns a probability score of a successful attack. Based on this probability, the attack model refines the prompt injection. This process repeats until the attack model converges to a successful prompt injection. Beam Search: This attack starts with a naive prompt injection directly requesting that the AI system send an email to the attacker containing the sensitive user information. If the AI system recognizes the request as suspicious and does not comply, the attack adds random tokens to the end of the prompt injection and measures the new probability of the attack succeeding. If the probability increases, these random tokens are kept, otherwise they are removed, and this process repeats until the combination of the prompt injection and random appended tokens result in a successful attack. Tree of Attacks w/ Pruning (TAP): Mehrotra et al. (2024) [3] designed an attack to generate prompts that cause an AI system to violate safety policies (such as generating hate speech). We adapt this attack, making several adjustments to target security violations. Like Actor Critic, this attack searches in the natural language space; however, we assume the attacker cannot access probability scores from the AI system under attack, only the text samples that are generated. We are actively leveraging insights gleaned from these attacks within our automated red-team framework to protect current and future versions of AI systems we develop against indirect prompt injection, providing a measurable way to track security improvements. A single silver bullet defense is not expected to solve this problem entirely. We believe the most promising path to defend against these attacks involves a combination of robust evaluation frameworks leveraging automated red-teaming methods, alongside monitoring, heuristic defenses, and standard security engineering solutions. We would like to thank Vijay Bolina, Sravanti Addepalli, Lihao Liang, and Alex Kaskasoli for their prior contributions to this work. Posted on behalf of the entire Google DeepMind Agentic AI Security team (listed in alphabetical order): Aneesh Pappu, Andreas Terzis, Chongyang Shi, Gena Gibson, Ilia Shumailov, Itay Yona, Jamie Hayes, John "Four" Flynn, Juliette Pluto, Sharon Lin, Shuang Song
Posted by Jianing Sandra Guo, Product Manager, Android, Nataliya Stanetsky, Staff Program Manager, Android Today, people around the world rely on their mobile devices to help them stay connected with friends and family, manage finances, keep track of healthcare information and more – all from their fingertips. But a stolen device in the wrong hands can expose sensitive data, leaving you vulnerable to identity theft, financial fraud and privacy breaches. This is why we recently launched Android theft protection, a comprehensive suite of features designed to protect you and your data at every stage – before, during, and after device theft. As part of our commitment to help you stay safe on Android, we’re expanding and enhancing these features to deliver even more robust protection to more users around the world. Identity Check rolling out to Pixel and Samsung One UI 7 devices We’re officially launching Identity Check, first on Pixel and Samsung Galaxy devices eligible for One UI 71, to provide better protection for your critical account and device settings. When you turn on Identity Check, your device will require explicit biometric authentication to access certain sensitive resources when you’re outside of trusted locations. Identity Check also enables enhanced protection for Google Accounts on all supported devices and additional security for Samsung Accounts on One UI 7 eligible Galaxy devices, making it much more difficult for an unauthorized attacker to take over accounts signed in on the device. As part of enabling Identity Check, you can designate one or more trusted locations. When you’re outside of these trusted places, biometric authentication will be required to access critical account and device settings, like changing your device PIN or biometrics, disabling theft protection, or accessing Passkeys. Theft Detection Lock: expanding AI-powered protection to more users One of the top theft protection features introduced last year was Theft Detection Lock, which uses an on-device AI-powered algorithm to help detect when your phone may be forcibly taken from you. If the machine learning algorithm detects a potential theft attempt on your unlocked device, it locks your screen to keep thieves out. Theft Detection Lock is now fully rolled out to Android 10+ phones2 around the world. Protecting your Android device from theft We're collaborating with the GSMA and industry experts to combat mobile device theft by sharing information, tools and prevention techniques. Stay tuned for an upcoming GSMA white paper, developed in partnership with the mobile industry, with more information on protecting yourself and your organization from device theft. With the addition of Identity Check and the ongoing enhancements to our existing features, Android offers a robust and comprehensive set of tools to protect your devices and your data from theft. We’re dedicated to providing you with peace of mind, knowing your personal information is safe and secure. You can turn on the new Android theft features by clicking here on a supported Android device. Learn more about our theft protection features by visiting our help center. Notes Timing, availability and feature names may vary in One UI 7. ↩ ↩
Posted by Erik Varga, Vulnerability Management, and Rex Pan, Open Source Security Team In December 2022, we announced OSV-Scanner, a tool to enable developers to easily scan for vulnerabilities in their open source dependencies. Together with the open source community, we’ve continued to build this tool, adding remediation features, as well as expanding ecosystem support to 11 programming languages and 20 package manager formats. Today, we’re excited to release OSV-SCALIBR (Software Composition Analysis LIBRary), an extensible library for SCA and file system scanning. OSV-SCALIBR combines Google’s internal vulnerability management expertise into one scanning library with significant new capabilities such as: SCA for installed packages, standalone binaries, as well as source code OSes package scanning on Linux (COS, Debian, Ubuntu, RHEL, and much more), Windows, and Mac Artifact and lockfile scanning in major language ecosystems (Go, Java, Javascript, Python, Ruby, and much more) Vulnerability scanning tools such as weak credential detectors for Linux, Windows, and Mac SBOM generation in SPDX and CycloneDX, the two most popular document formats Optimization for on-host scanning of resource constrained environments where performance and low resource consumption is critical OSV-SCALIBR is now the primary SCA engine used within Google for live hosts, code repos, and containers. It’s been used and tested extensively across many different products and internal tools to help generate SBOMs, find vulnerabilities, and help protect our users’ data at Google scale. We offer OSV-SCALIBR primarily as an open source Go library today, and we're working on adding its new capabilities into OSV-Scanner as the primary CLI interface. Using OSV-SCALIBR as a library All of OSV-SCALIBR's capabilities are modularized into plugins for software extraction and vulnerability detection which are very simple to expand.You can use OSV-SCALIBR as a library to: 1.Generate SBOMs from the build artifacts and code repos on your live host: import ( "context" "github.com/google/osv-scalibr" "github.com/google/osv-scalibr/converter" "github.com/google/osv-scalibr/extractor/filesystem/list" "github.com/google/osv-scalibr/fs" "github.com/google/osv-scalibr/plugin" spdx "github.com/spdx/tools-golang/spdx/v2/v2_3" ) func GenSBOM(ctx context.Context) *spdx.Document { capab := &plugin.Capabilities{OS: plugin.OSLinux} cfg := &scalibr.ScanConfig{ ScanRoots: fs.RealFSScanRoots("/"), FilesystemExtractors: list.FromCapabilities(capab), Capabilities: capab, } result := scalibr.New().Scan(ctx, cfg) return converter.ToSPDX23(result, converter.SPDXConfig{}) } 2. Scan a git repo for SBOMs: Simply replace "/" with the path to your git repo. Also take a look at the various language extractors to enable for code scanning. 3. Scan a remote container for SBOMs: Replace the scan config from the above code snippet with import ( ... "github.com/google/go-containerregistry/pkg/authn" "github.com/google/go-containerregistry/pkg/v1/remote" "github.com/google/osv-scalibr/artifact/image" ... ) ... filesys, _ := image.NewFromRemoteName( "alpine:latest", remote.WithAuthFromKeychain(authn.DefaultKeychain), ) cfg := &scalibr.ScanConfig{ ScanRoots: []*fs.ScanRoot{{FS: filesys}}, ... } 4. Find vulnerabilities on your filesystem or a remote container: Extract the PURLs from the SCALIBR inventory results from the previous steps: import ( ... "github.com/google/osv-scalibr/converter" ... ) ... result := scalibr.New().Scan(ctx, cfg) for _, i := range result.Inventories { fmt.Println(converter.ToPURL(i)) } And send them to osv.dev, e.g. $ curl -d '{"package": {"purl": "pkg:npm/dojo@1.2.3"}}' "https://api.osv.dev/v1/query" See the usage docs for more details. OSV-Scanner + OSV-SCALIBR Users looking for an out-of-the-box vulnerability scanning CLI tool should check out OSV-Scanner, which already provides comprehensive language package scanning capabilities using much of the same extraction as OSV-SCALIBR. Some of OSV-SCALIBR’s capabilities are not yet available in OSV-Scanner, but we’re currently working on integrating OSV-SCALIBR more deeply into OSV-Scanner. This will make more and more of OSV-SCALIBR’s capabilities available in OSV-Scanner in the next few months, including installed package extraction, weak credentials scanning, SBOM generation, and more. Look out soon for an announcement of OSV-Scanner V2 with many of these new features available. OSV-Scanner will become the primary frontend to the OSV-SCALIBR library for users who require a CLI interface. Existing users of OSV-Scanner can continue to use the tool the same way, with backwards compatibility maintained for all existing use cases. For installation and usage instructions, have a look at OSV-Scanner’s documentation here. What’s next In addition to making all of OSV-SCALIBR’s features available in OSV-Scanner, we're also working on additional new capabilities. Here's some of the things you can expect: Support for more OS and language ecosystems, both for regular extraction and for Guided Remediation Layer attribution and base image identification for container scanning Reachability analysis to reduce false positive vulnerability matches More vulnerability and misconfiguration detectors for Windows More weak credentials detectors We hope that this library helps developers and organizations to secure their software and encourages the open source community to contribute back by sharing new plugins on top of OSV-SCALIBR. If you have any questions or if you would like to contribute, don't hesitate to reach out to us at osv-discuss@google.com or by posting an issue in our issue tracker.
Posted by Greg Mucci, Product Manager, Artifact Analysis, Oliver Chang, Senior Staff Engineering, OSV, and Charl de Nysschen, Product Manager OSV DevOps teams dedicated to securing their supply chain and predicting potential risks consistently face novel threats. Fortunately, they can now improve their image and container security by harnessing Google-grade vulnerability scanning, which offers expanded open-source coverage. A significant benefit of utilizing Google Cloud Platform is its integrated security tools, including Artifact Analysis. This scanning service leverages the same infrastructure that Google depends on to monitor vulnerabilities within its internal systems and software supply chains. Artifact Analysis has recently expanded its scanning coverage to eight additional language packages, four operating systems, and two extensively utilized base images, making it a more robust and versatile tool than ever before. This enhanced coverage was achieved by integrating Artifact Analysis with the Open Source Vulnerabilities (OSV) platform and database. This integration provides industry-leading insights into open source vulnerabilities—a crucial capability as software supply chain attacks continue to grow in frequency and complexity, impacting organizations reliant on open source software. With these recent updates, customers can now successfully scan the vast majority of the images they push to Artifact Registry. These successful scans ensure that any known vulnerabilities are detected, reported, and can be integrated into a broader vulnerability management program, allowing teams to take prompt action. Open source vulnerabilities, with more reach Artifact Analysis pulls vulnerability information directly from OSV, which is the only open source, distributed vulnerability database that gets information directly from open source practitioners. OSV’s database provides a consistent, high quality, high fidelity database of vulnerabilities from authoritative sources who have adopted the OSV schema. This ensures the database has accurate information to reliably match software dependencies to known vulnerabilities—previously a difficult process reliant on inaccurate mechanisms such as CPEs (Common Platform Enumerations). Over the past three years, OSV has increased its total coverage to 28 language and OS ecosystems. For example, industry leaders such as GitHub, Chainguard, and Ubuntu, as well as open source ecosystems such as Rust and Python are now exporting their vulnerability discoveries in the OSV Schema. This increased coverage also includes Chainguard’s Wolfi images and Google’s Distroless images, which are popular choices for minimal container images used by many developers and organizations. Customers who rely on distroless images can count on Artifact Analysis scanning to support their minimal container image initiatives. Each expansion in OSV’s coverage is incorporated into scanning tools that integrate with the OSV database. Broader vulnerability detection with Artifact Analysis As a result of OSV’s expansion, scanners like Artifact Analysis that draw from OSV now alert users to higher quality vulnerability information across a broader set of ecosystems—meaning GCP project owners will be made aware of a more complete set of vulnerability findings and potential security risks. Existing Artifact Registry scanning customers don't need to take any action to take advantage of this update. Projects that have scanning enabled will immediately benefit from this expanded coverage and vulnerability findings will continue to be available in the Artifact Registry UI, Container Analysis API, and via pub/sub (for workflows). Existing On Demand scanning customers will also benefit from this expanded vulnerability coverage. All the same Operating Systems and Language package coverage that Registry Scanning customers enjoy are available in On Demand Scan. Beyond Artifact Registry We know that detection is just one of the first steps necessary to manage risks. We’re continually expanding Artifact Analysis capabilities and in 2025 we’ll be integrating Artifact Registry vulnerability findings with Google Cloud’s Security Command Center. Through Security Command Center customers can maintain a more comprehensive vulnerability management program, and prioritize risk across a number of different dimensions.
Posted by Hyunwook Baek, Duy Truong, Justin Dunlap and Lauren Stan from Android Security and Privacy, and Oliver Chang with the Google Open Source Security Team Today, we are announcing the availability of Vanir, a new open-source security patch validation tool. Introduced at Android Bootcamp in April, Vanir gives Android platform developers the power to quickly and efficiently scan their custom platform code for missing security patches and identify applicable available patches. Vanir significantly accelerates patch validation by automating this process, allowing OEMs to ensure devices are protected with critical security updates much faster than traditional methods. This strengthens the security of the Android ecosystem, helping to keep Android users around the world safe. By open-sourcing Vanir, we aim to empower the broader security community to contribute to and benefit from this tool, enabling wider adoption and ultimately improving security across various ecosystems. While initially designed for Android, Vanir can be easily adapted to other ecosystems with relatively small modifications, making it a versatile tool for enhancing software security across the board. In collaboration with the Google Open Source Security Team, we have incorporated feedback from our early adopters to improve Vanir and make it more useful for security professionals. This tool is now available for you to start developing on top of, and integrating into, your systems. The Android ecosystem relies on a multi-stage process for vulnerability mitigation. When a new vulnerability is discovered, upstream AOSP developers create and release upstream patches. The downstream device and chip manufacturers then assess the impact on their specific devices and backport the necessary fixes. This process, while effective, can present scalability challenges, especially for manufacturers managing a diverse range of devices and old models with complex update histories. Managing patch coverage across diverse and customized devices often requires considerable effort due to the manual nature of backporting. To streamline the vital security workflow, we developed Vanir. Vanir provides a scalable and sustainable solution for security patch adoption and validation, helping to ensure Android devices receive timely protection against potential threats. The power of Vanir Source-code-based static analysis Vanir’s first-of-its-kind approach to Android security patch validation uses source-code-based static analysis to directly compare the target source code against known vulnerable code patterns. Vanir does not rely on traditional metadata-based validation mechanisms, such as version numbers, repository history and build configs, which can be prone to errors. This unique approach enables Vanir to analyze entire codebases with full history, individual files, or even partial code snippets. A main focus of Vanir is to automate the time consuming and costly process of identifying missing security patches in the open source software ecosystem. During the early development of Vanir, it became clear that manually identifying a high-volume of missing patches is not only labor intensive but also can leave user devices inadvertently exposed to known vulnerabilities for a period of time. To address this, Vanir utilizes novel automatic signature refinement techniques and multiple pattern analysis algorithms, inspired by the vulnerable code clone detection algorithms proposed by Jang et al. [1] and Kim et al. [2]. These algorithms have low false-alarm rates and can effectively handle broad classes of code changes that might appear in code patch processes. In fact, based on our 2-year operation of Vanir, only 2.72% of signatures triggered false alarms. This allows Vanir to efficiently find missing patches, even with code changes, while minimizing unnecessary alerts and manual review efforts. Vanir's source-code-based approach also enables rapid scaling across any ecosystem. It can generate signatures for any source files written in supported languages. Vanir's signature generator automatically generates, tests, and refines these signatures, allowing users to quickly create signatures for new vulnerabilities in any ecosystem simply by providing source files with security patches. Android’s successful use of Vanir highlights its efficiency compared to traditional patch verification methods. A single engineer used Vanir to generate signatures for over 150 vulnerabilities and verify missing security patches across its downstream branches – all within just five days. Vanir for Android Currently Vanir supports C/C++ and Java targets and covers 95% of Android kernel and userspace CVEs with public security patches. Google Android Security team consistently incorporates the latest CVEs into Vanir’s coverage to provide a complete picture of the Android ecosystem’s patch adoption risk profile. The Vanir signatures for Android vulnerabilities are published through the Open Source Vulnerabilities (OSV) database. This allows Vanir users to seamlessly protect their codebases against latest Android vulnerabilities without any additional updates. Currently, there are over 2,000 Android vulnerabilities in OSV, and finishing scanning an entire Android source tree can take 10-20 minutes with a modern PC. Flexible integration, adoption and expansion. Vanir is developed not only as a standalone application but also as a Python library. Users who want to integrate automated patch verification processes with their continuous build or test chain may easily achieve it by wiring their build integration tool with Vanir scanner libraries. For instance, Vanir is integrated with a continuous testing pipeline in Google, ensuring all security patches are adopted in ever-evolving Android codebase and their first-party downstream branches. Vanir is also fully open-sourced, and under BSD-3 license. As Vanir is not fundamentally limited to the Android ecosystem, you may easily adopt Vanir for the ecosystem that you want to protect by making relatively small modifications in Vanir. In addition, since Vanir’s underlying algorithm is not limited to security patch validation, you may modify the source and use it for different purposes such as licensed code detection or code clone detection. The Android Security team welcomes your contributions to Vanir for any direction that may expand its capability and scope. You can also contribute to Vanir by providing vulnerability data with Vanir signatures to OSV. Vanir Results Since early last year, we have partnered with several Android OEMs to test the tool’s effectiveness. Internally we have been able to integrate the tool into our build system continuously testing against over 1,300 vulnerabilities. Currently Vanir covers 95% of all Android, Wear, and Pixel vulnerabilities with public fixes across Android Kernel and Userspace. It has a 97% accuracy rate, which has saved our internal teams over 500 hours to date in patch fix time. Next steps We are happy to announce that Vanir is now available for public use. Vanir is not technically limited to Android, and we are also actively exploring problems that Vanir may help address, such as general C/C++ dependency management via integration with OSV-scanner. If you are interested in using or contributing to Vanir, please visit github.com/google/vanir. Please join our public community to submit your feedback and questions on the tool. We look forward to working with you on Vanir!
Posted by Oliver Chang, Dongge Liu and Jonathan Metzman, Google Open Source Security Team Recently, OSS-Fuzz reported 26 new vulnerabilities to open source project maintainers, including one vulnerability in the critical OpenSSL library (CVE-2024-9143) that underpins much of internet infrastructure. The reports themselves aren’t unusual—we’ve reported and helped maintainers fix over 11,000 vulnerabilities in the 8 years of the project. But these particular vulnerabilities represent a milestone for automated vulnerability finding: each was found with AI, using AI-generated and enhanced fuzz targets. The OpenSSL CVE is one of the first vulnerabilities in a critical piece of software that was discovered by LLMs, adding another real-world example to a recent Google discovery of an exploitable stack buffer underflow in the widely used database engine SQLite. This blog post discusses the results and lessons over a year and a half of work to bring AI-powered fuzzing to this point, both in introducing AI into fuzz target generation and expanding this to simulate a developer’s workflow. These efforts continue our explorations of how AI can transform vulnerability discovery and strengthen the arsenal of defenders everywhere. The story so far In August 2023, the OSS-Fuzz team announced AI-Powered Fuzzing, describing our effort to leverage large language models (LLM) to improve fuzzing coverage to find more vulnerabilities automatically—before malicious attackers could exploit them. Our approach was to use the coding abilities of an LLM to generate more fuzz targets, which are similar to unit tests that exercise relevant functionality to search for vulnerabilities. The ideal solution would be to completely automate the manual process of developing a fuzz target end to end: Drafting an initial fuzz target. Fixing any compilation issues that arise. Running the fuzz target to see how it performs, and fixing any obvious mistakes causing runtime issues. Running the corrected fuzz target for a longer period of time, and triaging any crashes to determine the root cause. Fixing vulnerabilities. In August 2023, we covered our efforts to use an LLM to handle the first two steps. We were able to use an iterative process to generate a fuzz target with a simple prompt including hardcoded examples and compilation errors. In January 2024, we open sourced the framework that we were building to enable an LLM to generate fuzz targets. By that point, LLMs were reliably generating targets that exercised more interesting code coverage across 160 projects. But there was still a long tail of projects where we couldn’t get a single working AI-generated fuzz target. To address this, we’ve been improving the first two steps, as well as implementing steps 3 and 4. New results: More code coverage and discovered vulnerabilities We’re now able to automatically gain more coverage in 272 C/C++ projects on OSS-Fuzz (up from 160), adding 370k+ lines of new code coverage. The top coverage improvement in a single project was an increase from 77 lines to 5434 lines (a 7000% increase). This led to the discovery of 26 new vulnerabilities in projects on OSS-Fuzz that already had hundreds of thousands of hours of fuzzing. The highlight is CVE-2024-9143 in the critical and well-tested OpenSSL library. We reported this vulnerability on September 16 and a fix was published on October 16. As far as we can tell, this vulnerability has likely been present for two decades and wouldn’t have been discoverable with existing fuzz targets written by humans. Another example was a bug in the project cJSON, where even though an existing human-written harness existed to fuzz a specific function, we still discovered a new vulnerability in that same function with an AI-generated target. One reason that such bugs could remain undiscovered for so long is that line coverage is not a guarantee that a function is free of bugs. Code coverage as a metric isn’t able to measure all possible code paths and states—different flags and configurations may trigger different behaviors, unearthing different bugs. These examples underscore the need to continue to generate new varieties of fuzz targets even for code that is already fuzzed, as has also been shown by Project Zero in the past (1, 2). New improvements To achieve these results, we’ve been focusing on two major improvements: Automatically generate more relevant context in our prompts. The more complete and relevant information we can provide the LLM about a project, the less likely it would be to hallucinate the missing details in its response. This meant providing more accurate, project-specific context in prompts, such as function, type definitions, cross references, and existing unit tests for each project. To generate this information automatically, we built new infrastructure to index projects across OSS-Fuzz. LLMs turned out to be highly effective at emulating a typical developer’s entire workflow of writing, testing, and iterating on the fuzz target, as well as triaging the crashes found. Thanks to this, it was possible to further automate more parts of the fuzzing workflow. This additional iterative feedback in turn also resulted in higher quality and greater number of correct fuzz targets. The workflow in action Our LLM can now execute the first four steps of the developer’s process (with the fifth soon to come). 1. Drafting an initial fuzz target A developer might check the source code, existing documentation and unit tests, as well as usages of the target function when to draft an initial fuzz target. An LLM can fulfill this role here, if we provide a prompt with this information and ask it to come up with a fuzz target. Prompt: Your goal is to write a fuzzing harness for the provided function-under-test signature using LLVMFuzzerTestOneInput. It is important that the provided solution compiles and actually calls the function-under-test specified by the function signature: unsigned char * buffer_append_base64_decode(buffer *, const char *, size_t, base64_charset) Here is the source code of the function being tested: unsigned char* buffer_append_base64_decode(buffer *out, const char* in, size_t in_length, base64_charset charset) { const size_t reserve = 3*(in_length/4) + 3; unsigned char * const result = (unsigned char *) buffer_string_prepare_append(out, reserve); const size_t out_pos = li_base64_dec(result, reserve, in, in_length, charset); buffer_commit(out, out_pos); return (out_pos || !in_length) ? result : NULL; } Here is the source code for functions which reference the function being tested: … 2. Fixing any compilation issues that arise. Once a developer has a candidate target, they would try to compile it and look at any compilation issues that arise. Again, we can prompt an LLM with details of the compilation errors so it can provide fixes. Prompt: Below is the error to fix: The code has the following build issues: server_fuzzer.cc:12:46: error: only virtual member functions can be marked 'override' 12 | bool OnData(const char* data, size_t size) override { | ^~~~~~~~ server_fuzzer.cc:22:39: error: no type named 'MultiPartFile' in namespace 'httplib' 22 | const httplib::MultiPartFile& file) override { | ~~~~~~~~~^ server_fuzzer.cc:12:27: warning: unused parameter 'data' [-Wunused-parameter] 12 | bool OnData(const char* data, size_t size) override { | ^ server_fuzzer.cc:37:5: error: use of undeclared 'operator()' 37 | operator()(request_data.data(), receiver); | ^ server_fuzzer.cc:27:23: warning: private field 'provider_' is not used [-Wunused-private-field] 27 | FuzzedDataProvider& provider_; | ^ Example of compilation errors that an LLM was able to fix 3. Running the fuzz target to see how it performs, and fixing any obvious mistakes causing runtime issues. Once all compilation errors are fixed, a developer would try running the fuzz target for a short period of time to see if there were any mistakes that led it to instantly crash, suggesting an error with the target rather than a bug discovered in the project. The following is an example of an LLM fixing a semantic issue with the fuzzing setup: LLM Response: The issue is that `sqlite3_blob_open` requires the table to exist before you can open a blob on it. The fix ensures that the table exists by executing a `CREATE TABLE IF NOT EXISTS` statement before calling `sqlite3_blob_open`. Here's a breakdown of the changes: 1. **Create Table Query:** A string `createTableQuery` is constructed to hold the SQL statement for creating the table. It uses the fuzzed values for the table and column names. 2. **Prepare Statement:** `sqlite3_prepare_v2` is used to prepare the SQL statement from `createTableQuery`. This compiles the SQL into a byte-code program that SQLite can execute. 3. **Execute Statement:** `sqlite3_step` executes the prepared statement. Since it's a `CREATE TABLE` statement, it doesn't return any data, but it creates the table if it doesn't exist. 4. **Finalize Statement:** `sqlite3_finalize` destroys the prepared statement object when it's no longer needed. By adding these steps before calling `sqlite3_blob_open`, we ensure that the table exists, preventing the null pointer access issue. 4. Running the corrected fuzz target for a longer period of time, and triaging any crashes. At this point, the fuzz target is ready to run for an extended period of time on a suitable fuzzing infrastructure, such as ClusterFuzz. Any discovered crashes would then need to be triaged, to determine the root causes and whether they represented legitimate vulnerabilities (or bugs in the fuzz target). An LLM can be prompted with the relevant context (stacktraces, fuzz target source code, relevant project source code) to perform this triage. Prompt: Given the following crash report, fuzz driver code and relevant project function code, analyze the cause of the crash. First, only answer “Crash is caused by bug in fuzz driver” or “Crash is caused by bug in project”. Second, offer a succinct and to-the-point analysis. Below is crash report: AddressSanitizer: heap-use-after-free on address 0x502000000630 at pc 0x560036939242 bp 0x7ffd65dca850 sp 0x7ffd65dca010 READ of size 1 at 0x502000000630 thread T0 SCARINESS: 40 (1-byte-read-heap-use-after-free) #0 0x560036939241 in __asan_memcpy /src/llvm-project/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cpp:63:3 #1 0x56003697d30b in EmitLiteral /src/snappy/snappy.cc:661:5 #2 0x56003697d30b in snappy::internal::CompressFragment(char const*, unsigned long, char*, unsigned short*, int) /src/snappy/snappy.cc:954:10 In this example, the LLM correctly determines this is a bug in the fuzz target, rather than a bug in the project being fuzzed. LLM Response: Crash is caused by bug in fuzz driver. The fuzz driver creates a vector of `iovec` structures, each pointing to a chunk of memory owned by the `FuzzedDataProvider`. However, the `data_chunk` vector, which holds the actual data pointed to by `iov_base`, is deallocated at the end of each fuzzer iteration when it goes out of scope. This leaves the `iov_base` pointers dangling. When `snappy::CompressFromIOVec` tries to access the memory pointed to by these dangling pointers, a heap-use-after-free occurs. 5. Fixing vulnerabilities. The goal is to fully automate this entire workflow by having the LLM generate a suggested patch for the vulnerability. We don’t have anything we can share here today, but we’re collaborating with various researchers to make this a reality and look forward to sharing results soon. Up next Improving automated triaging: to get to a point where we’re confident about not requiring human review. This will help automatically report new vulnerabilities to project maintainers. There are likely more than the 26 vulnerabilities we’ve already reported upstream hiding in our results. Agent-based architecture: which means letting the LLM autonomously plan out the steps to solve a particular problem by providing it with access to tools that enable it to get more information, as well as to check and validate results. By providing LLM with interactive access to real tools such as debuggers, we’ve found that the LLM is more likely to arrive at a correct result. Integrating our research into OSS-Fuzz as a feature: to achieve a more fully automated end-to-end solution for vulnerability discovery and patching. We hope OSS-Fuzz will be useful for other researchers to evaluate AI-powered vulnerability discovery ideas and ultimately become a tool that will enable defenders to find more vulnerabilities before they get exploited. For more information, check out our open source framework at oss-fuzz-gen. We’re hoping to continue to collaborate on this area with other researchers. Also, be sure to check out the OSS-Fuzz blog for more technical updates.
Posted by Alex Rebert and Max Shavrick, Security Foundations, and Kinuko Yasuda, Core Developer Attackers regularly exploit spatial memory safety vulnerabilities, which occur when code accesses a memory allocation outside of its intended bounds, to compromise systems and sensitive data. These vulnerabilities represent a major security risk to users. Based on an analysis of in-the-wild exploits tracked by Google's Project Zero, spatial safety vulnerabilities represent 40% of in-the-wild memory safety exploits over the past decade: Breakdown of memory safety CVEs exploited in the wild by vulnerability class.1 Google is taking a comprehensive approach to memory safety. A key element of our strategy focuses on Safe Coding and using memory-safe languages in new code. This leads to an exponential decline in memory safety vulnerabilities and quickly improves the overall security posture of a codebase, as demonstrated by our post about Android's journey to memory safety. However, this transition will take multiple years as we adapt our development practices and infrastructure. Ensuring the safety of our billions of users therefore requires us to go further: we're also retrofitting secure-by-design principles to our existing C++ codebase wherever possible. To that end, we're working towards bringing spatial memory safety into as many of our C++ codebases as possible, including Chrome and the monolithic codebase powering our services. We’ve begun by enabling hardened libc++, which adds bounds checking to standard C++ data structures, eliminating a significant class of spatial safety bugs. While C++ will not become fully memory-safe, these improvements reduce risk as discussed in more detail in our perspective on memory safety, leading to more reliable and secure software. This post explains how we're retrofitting hardened libc++ across our codebases and showcases the positive impact it's already having, including preventing exploits, reducing crashes, and improving code correctness. Bounds-checked data structures: The foundation for spatial safety One of our primary strategies for improving spatial safety in C++ is to implement bounds checking for common data structures, starting with hardening the C++ standard library (in our case, LLVM’s libc++). Hardened libc++, recently added by open source contributors, introduces a set of security checks designed to catch vulnerabilities such as out-of-bounds accesses in production. For example, hardened libc++ ensures that every access to an element of a std::vector stays within its allocated bounds, preventing attempts to read or write beyond the valid memory region. Similarly, hardened libc++ checks that a std::optional isn't empty before allowing access, preventing access to uninitialized memory. This approach mirrors what's already standard practice in many modern programming languages like Java, Python, Go, and Rust. They all incorporate bounds checking by default, recognizing its crucial role in preventing memory errors. C++ has been a notable exception, but efforts like hardened libc++ aim to close this gap in our infrastructure. It’s also worth noting that similar hardening is available in other C++ standard libraries, such as libstdc++. Raising the security baseline across the board Building on the successful deployment of hardened libc++ in Chrome in 2022, we've now made it default across our server-side production systems. This improves spatial memory safety across our services, including key performance-critical components of products like Search, Gmail, Drive, YouTube, and Maps. While a very small number of components remain opted out, we're actively working to reduce this and raise the bar for security across the board, even in applications with lower exploitation risk. The performance impact of these changes was surprisingly low, despite Google's modern C++ codebase making heavy use of libc++. Hardening libc++ resulted in an average 0.30% performance impact across our services (yes, only a third of a percent). This is due to both the compiler's ability to eliminate redundant checks during optimization, and the efficient design of hardened libc++. While a handful of performance-critical code paths still require targeted use of explicitly unsafe accesses, these instances are carefully reviewed for safety. Techniques like profile-guided optimizations further improved performance, but even without those advanced techniques, the overhead of bounds checking remains minimal. We actively monitor the performance impact of these checks and work to minimize any unnecessary overhead. For instance, we identified and fixed an unnecessary check, which led to a 15% reduction in overhead (reduced from 0.35% to 0.3%), and contributed the fix back to the LLVM project to share the benefits with the broader C++ community. While hardened libc++'s overhead is minimal for individual applications in most cases, deploying it at Google's scale required a substantial commitment of computing resources. This investment underscores our dedication to enhancing the safety and security of our products. From tests to production Enabling libc++ hardening wasn't a simple flip of a switch. Rather, it required a multi-stage rollout to avoid accidentally disrupting users or creating an outage: Testing: We first enabled hardened libc++ in our tests over a year ago. This allowed us to identify and fix hundreds of previously undetected bugs in our code and tests. Baking: We let the hardened runtime "bake" in our testing and pre-production environments, giving developers time to adapt and address any new issues that surfaced. We also conducted extensive performance evaluations, ensuring minimal impact to our users' experience. Gradual Production Rollout: We then rolled out hardened libc++ to production over several months, starting with a small set of services and gradually expanding to our entire infrastructure. We closely monitored the rollout, promptly addressing any crashes or performance regressions. Quantifiable impact In just a few months since enabling hardened libc++ by default, we've already seen benefits. Preventing exploits: Hardened libc++ has already disrupted an internal red team exercise and would have prevented another one that happened before we enabled hardening, demonstrating its effectiveness in thwarting exploits. The safety checks have uncovered over 1,000 bugs, and would prevent 1,000 to 2,000 new bugs yearly at our current rate of C++ development. Improved reliability and correctness: The process of identifying and fixing bugs uncovered by hardened libc++ led to a 30% reduction in our baseline segmentation fault rate across production, indicating improved code reliability and quality. Beyond crashes, the checks also caught errors that would have otherwise manifested as unpredictable behavior or data corruption. Moving average of segfaults across our fleet over time, before and after enablement. Easier debugging: Hardened libc++ enabled us to identify and fix multiple bugs that had been lurking in our code for more than a decade. The checks transform many difficult-to-diagnose memory corruptions into immediate and easily debuggable errors, saving developers valuable time and effort. Bridging the gap with memory-safe languages While libc++ hardening provides immediate benefits by adding bounds checking to standard data structures, it's only one piece of the puzzle when it comes to spatial safety. We're expanding bounds checking to other libraries and working to migrate our code to Safe Buffers, requiring all accesses to be bounds checked. For spatial safety, both hardened data structures, including their iterators, and Safe Buffers are necessary. Beyond improving the safety of our C++, we're also focused on making it easier to interoperate with memory-safe languages. Migrating our C++ to Safe Buffers shrinks the gap between the languages, which simplifies interoperability and potentially even an eventual automated translation. Building a safer C++ ecosystem Hardened libc++ is a practical and effective way to enhance the safety, reliability, and debuggability of C++ code with minimal overhead. Given this, we strongly encourage organizations using C++ to enable their standard library's hardened mode universally by default. At Google, enabling hardened libc++ is only the first step in our journey towards a spatially safe C++ codebase. By expanding bounds checking, migrating to Safe Buffers, and actively collaborating with the broader C++ community, we aim to create a future where spatial safety is the norm. Acknowledgements We’d like to thank Emilia Kasper, Chandler Carruth, Duygu Isler, Matthew Riley, and Jeff Vander Stoep for their helpful feedback. We also extend our thanks to the libc++ community for developing the hardening mode that made this work possible. Based on manual analysis of CVEs from July 15, 2014 to Dec 14, 2023. Note that we could not classify 11% of CVEs.. ↩
Posted by Lyubov Farafonova, Product Manager and Steve Kafka, Group Product Manager, Android User safety is at the heart of everything we do at Google. Our mission to make technology helpful for everyone means building features that protect you while keeping your privacy top of mind. From Gmail’s defenses that stop more than 99.9% of spam, phishing and malware, to Google Messages’ advanced security that protects users from 2 billion suspicious messages a month and beyond, we're constantly developing and expanding protection features that help keep you safe. We're introducing two new real-time protection features that enhance your safety, all while safeguarding your privacy: Scam Detection in Phone by Google to protect you from scams and fraud, and Google Play Protect live threat detection with real-time alerts to protect you from malware and dangerous apps. These new security features are available first on Pixel, and are coming soon to more Android devices. More intelligent AI-powered protection against scams Scammers steal over $1 trillion dollars a year from people, and phone calls are their favorite way to do it. Even more alarming, scam calls are evolving, becoming increasingly more sophisticated, damaging and harder to identify. That’s why we’re using the best of Google AI to identify and stop scams before they can do harm with Scam Detection. Real-time protection, built with your privacy in mind. Real-time defense, right on your device: Scam Detection uses powerful on-device AI to notify you of a potential scam call happening in real-time by detecting conversation patterns commonly associated with scams. For example, if a caller claims to be from your bank and asks you to urgently transfer funds due to an alleged account breach, Scam Detection will process the call to determine whether the call is likely spam and, if so, can provide an audio and haptic alert and visual warning that the call may be a scam. Private by design, you’re always in control: We’ve built Scam Detection to protect your privacy and ensure you’re always in control of your data. Scam Detection is off by default, and you can decide whether you want to activate it for future calls. At any time, you can turn it off for all calls in the Phone app Settings, or during a particular call. The AI detection model and processing are fully on-device, which means that no conversation audio or transcription is stored on the device, sent to Google servers or anywhere else, or retrievable after the call. Cutting-edge AI protection, now on more Pixel phones: Gemini Nano, our advanced on-device AI model, powers Scam Detection on Pixel 9 series devices. As part of our commitment to bring powerful AI features to even more devices, this AI-powered protection is available to Pixel 6+ users thanks to other robust Google on-device machine learning models. We’re now rolling out Scam Detection to English-speaking Phone by Google public beta users in the U.S. with a Pixel 6 or newer device. Menu -> Help & Feedback -> Send Feedback. We look forward to learning from this beta and your feedback, and we’ll share more about Scam Detection in the months ahead. More real-time alerts to protect you from bad apps Google Play Protect works non-stop to protect you in real-time from malware and unsafe apps. Play Protect analyzes behavioral signals related to the use of sensitive permissions and interactions with other apps and services. With live threat detection, if a harmful app is found, you'll now receive a real-time alert, allowing you to take immediate action to protect your device. By looking at actual activity patterns of apps, live threat detection can now find malicious apps that try extra hard to hide their behavior or lie dormant for a time before engaging in suspicious activity. Private Compute Core, which allows us to protect users without collecting data. Live threat detection with real-time alerts in Google Play Protect are now available on Pixel 6+ devices and will be coming to additional phone makers in the coming months.
Posted by Jan Jedrzejowicz, Director of Product, Android and Business Communications; Alberto Pastor Nieto, Sr. Product Manager Google Messages and RCS Spam and Abuse; Stephan Somogyi, Product Lead, User Protection; Branden Archer, Software Engineer Every day, over a billion people use Google Messages to communicate. That’s why we’ve made security a top priority, building in powerful on-device, AI-powered filters and advanced security that protects users from 2 billion suspicious messages a month. With end-to-end encrypted1 RCS conversations, you can communicate privately with other Google Messages RCS users. And we’re not stopping there. We're committed to constantly developing new controls and features to make your conversations on Google Messages even more secure and private. As part of cybersecurity awareness month, we're sharing five new protections to help keep you safe while using Google Messages on Android: Enhanced detection protects you from package delivery and job scams. Google Messages is adding new protections against scam texts that may seem harmless at first but can eventually lead to fraud. For Google Messages beta users2, we’re rolling out enhanced scam detection, with improved analysis of scammy texts, starting with a focus on package delivery and job seeking messages. When Google Messages suspects a potential scam text, it will automatically move the message into your spam folder or warn you. Google Messages uses on-device machine learning models to classify these scams, so your conversations stay private and the content is never sent to Google unless you report spam. We’re rolling this enhancement out now to Google Messages beta users who have spam protection enabled. Intelligent warnings alert you about potentially dangerous links. In the past year, we’ve been piloting more protections for Google Messages users when they receive text messages with potentially dangerous links. In India, Thailand, Malaysia and Singapore, Google Messages warns users when they get a link from unknown senders and blocks messages with links from suspicious senders. We’re in the process of expanding this feature globally later this year. Controls to turn off messages from unknown international senders. In some cases, scam text messages come from international numbers. Soon, you will be able to automatically hide messages from international senders who are not existing contacts so you don’t have to interact with them. If enabled, messages from international non-contacts will automatically be moved to the “Spam & blocked” folder. This feature will roll out first as a pilot in Singapore later this year before we look at expanding to more countries. Sensitive Content Warnings give you control over seeing and sending images that may contain nudity. At Google, we aim to provide users with a variety of ways to protect themselves against unwanted content, while keeping them in control of their data. This is why we’re introducing Sensitive Content Warnings for Google Messages. Sensitive Content Warnings is an optional feature that blurs images that may contain nudity before viewing, and then prompts with a “speed bump” that contains help-finding resources and options, including to view the content. When the feature is enabled, and an image that may contain nudity is about to be sent or forwarded, it also provides a speed bump to remind users of the risks of sending nude imagery and preventing accidental shares. All of this happens on-device to protect your privacy and keep end-to-end encrypted message content private to only sender and recipient. Sensitive Content Warnings doesn’t allow Google access to the contents of your images, nor does Google know that nudity may have been detected. This feature is opt-in for adults, managed via Android Settings, and is opt-out for users under 18 years of age. Sensitive Content Warnings will be rolling out to Android 9+ devices including Android Go devices3 with Google Messages in the coming months. More confirmation about who you’re messaging. To help you avoid sophisticated messaging threats where an attacker tries to impersonate one of your contacts, we’re working to add a contact verifying feature to Android. This new feature will allow you to verify your contacts' public keys so you can confirm you’re communicating with the person you intend to message. We’re creating a unified system for public key verification across different apps, which you can verify through QR code scanning or number comparison. This feature will be launching next year for Android 9+ devices, with support for messaging apps including Google Messages. These are just some of the new and upcoming features that you can use to better protect yourself when sending and receiving messages. Download Google Messages from the Google Play Store to enjoy these protections and controls and learn more about Google Messages here. Notes End-to-end encryption is currently available between Google Messages users. Availability of RCS varies by region and carrier. ↩ ↩ ↩
Posted by Alex Rebert, Security Foundations, and Chandler Carruth, Jen Engel, Andy Qin, Core Developers Error-prone interactions between software and memory1 are widely understood to create safety issues in software. It is estimated that about 70% of severe vulnerabilities2 in memory-unsafe codebases are due to memory safety bugs. Malicious actors exploit these vulnerabilities and continue to create real-world harm. In 2023, Google’s threat intelligence teams conducted an industry-wide study and observed a close to all-time high number of vulnerabilities exploited in the wild. Our internal analysis estimates that 75% of CVEs used in zero-day exploits are memory safety vulnerabilities. At Google, we have been mindful of these issues for over two decades, and are on a journey to continue advancing the state of memory safety in the software we consume and produce. Our Secure by Design commitment emphasizes integrating security considerations, including robust memory safety practices, throughout the entire software development lifecycle. This proactive approach fosters a safer and more trustworthy digital environment for everyone. This post builds upon our previously reported Perspective on Memory Safety, and introduces our strategic approach to memory safety. Our journey so far Google's journey with memory safety is deeply intertwined with the evolution of the software industry itself. In our early days, we recognized the importance of balancing performance with safety. This led to the early adoption of memory-safe languages like Java and Python, and the creation of Go. Today these languages comprise a large portion of our code, providing memory safety among other benefits. Meanwhile, the rest of our code is predominantly written in C++, previously the optimal choice for high-performance demands. We recognized the inherent risks associated with memory-unsafe languages and developed tools like sanitizers, which detect memory safety bugs dynamically, and fuzzers like AFL and libfuzzer, which proactively test the robustness and security of a software application by repeatedly feeding unexpected inputs. By open-sourcing these tools, we've empowered developers worldwide to reduce the likelihood of memory safety vulnerabilities in C and C++ codebases. Taking this commitment a step further, we provide continuous fuzzing to open-source projects through OSS-Fuzz, which helped get over 8800 vulnerabilities identified and subsequently fixed across 850 projects. Today, with the emergence of high-performance memory-safe languages like Rust, coupled with a deeper understanding of the limitations of purely detection-based approaches, we are focused primarily on preventing the introduction of security vulnerabilities at scale. Going forward: Google's two-pronged approach Google's long-term strategy for tackling memory safety challenges is multifaceted, recognizing the need to address both existing codebases and future development, while maintaining the pace of business. Our long-term objective is to progressively and consistently integrate memory-safe languages into Google's codebases while phasing out memory-unsafe code in new development. Given the amount of C++ code we use, we anticipate a residual amount of mature and stable memory-unsafe code will remain for the foreseeable future. Graphic of memory-safe language growth as memory-unsafe code is hardened and gradually decreased over time. Migration to Memory-Safe Languages (MSLs) The first pillar of our strategy is centered on further increasing the adoption of memory-safe languages. These languages drastically drive down the risk of memory-related errors through features like garbage collection and borrow checking, embodying the same Safe Coding3 principles that successfully eliminated other vulnerability classes like cross-site scripting (XSS) at scale. Google has already embraced MSLs like Java, Kotlin, Go, and Python for a large portion of our code. Our next target is to ramp up memory-safe languages with the necessary capabilities to address the needs of even more of our low-level environments where C++ has remained dominant. For example, we are investing to expand Rust usage at Google beyond Android and other mobile use cases and into our server, application, and embedded ecosystems. This will unlock the use of MSLs in low-level code environments where C and C++ have typically been the language of choice. In addition, we are exploring more seamless interoperability with C++ through Carbon, as a means to accelerate even more of our transition to MSLs. In Android, which runs on billions of devices and is one of our most critical platforms, we've already made strides in adopting MSLs, including Rust, in sections of our network, firmware and graphics stacks. We specifically focused on adopting memory safety in new code instead of rewriting mature and stable memory-unsafe C or C++ codebases. As we've previously discussed, this strategy is driven by vulnerability trends as memory safety vulnerabilities were typically introduced shortly before being discovered. As a result, the number of memory safety vulnerabilities reported in Android has decreased dramatically and quickly, dropping from more than 220 in 2019 to a projected 36 by the end of this year, demonstrating the effectiveness of this strategic shift. Given that memory-safety vulnerabilities are particularly severe, the reduction in memory safety vulnerabilities is leading to a corresponding drop in vulnerability severity, representing a reduction in security risk. Risk Reduction for Memory-Unsafe Code While transitioning to memory-safe languages is the long-term strategy, and one that requires investment now, we recognize the immediate responsibility we have to protect the safety of our billions of users during this process. This means we cannot ignore the reality of a large codebase written in memory-unsafe languages (MULs) like C and C++. Therefore the second pillar of our strategy focuses on risk reduction & containment of this portion of our codebase. This incorporates: C++ Hardening: We are retrofitting safety at scale in our memory-unsafe code, based on our experience eliminating web vulnerabilities. While we won't make C and C++ memory safe, we are eliminating sub-classes of vulnerabilities in the code we own, as well as reducing the risks of the remaining vulnerabilities through exploit mitigations. We have allocated a portion of our computing resources specifically to bounds-checking the C++ standard library across our workloads. While bounds-checking overhead is small for individual applications, deploying it at Google's scale requires significant computing resources. This underscores our deep commitment to enhancing the safety and security of our products and services. Early results are promising, and we'll share more details in a future post. In Chrome, we have also been rolling out MiraclePtr over the past few years, which effectively mitigated 57% of use-after-free vulnerabilities in privileged processes, and has been linked to a decrease of in-the-wild exploits. Security Boundaries: We are continuing4 to strengthen critical components of our software infrastructure through expanded use of isolation techniques like sandboxing and privilege reduction, limiting the potential impact of vulnerabilities. For example, earlier this year, we shipped the beta release of our V8 heap sandbox and included it in Chrome's Vulnerability Reward Program. Bug Detection: We are investing in bug detection tooling and innovative research such as Naptime and making ML-guided fuzzing as effortless and wide-spread as testing. While we are increasingly shifting towards memory safety by design, these tools and techniques remain a critical component of proactively identifying and reducing risks, especially against vulnerability classes currently lacking strong preventative controls. In addition, we are actively working with the semiconductor and research communities on emerging hardware-based approaches to improve memory safety. This includes our work to support and validate the efficacy of Memory Tagging Extension (MTE). Device implementations are starting to roll out, including within Google’s corporate environment. We are also conducting ongoing research into Capability Hardware Enhanced RISC Instructions (CHERI) architecture which can provide finer grained memory protections and safety controls, particularly appealing in security-critical environments like embedded systems. Looking ahead We believe it’s important to embrace the opportunity to achieve memory safety at scale, and that it will have a positive impact on the safety of the broader digital ecosystem. This path forward requires continuous investment and innovation to drive safety and velocity, and we remain committed to the broader community to walk this path together. We will provide future publications on memory safety that will go deeper into specific aspects of our strategy. Notes Anderson, J. Computer Security Technology Planning Study Vol II. ESD-TR-73-51, Vol. II, Electronic Systems Division, Air Force Systems Command, Hanscom Field, Bedford, MA 01730 (Oct. 1972). https://seclab.cs.ucdavis.edu/projects/history/papers/ande72.pdf ↩ https://www.memorysafety.org/docs/memory-safety/#how-common-are-memory-safety-vulnerabilities ↩ Developer Ecosystems for Software Safety. Commun. ACM 67, 6 (June 2024), 52–60. https://doi.org/10.1145/3651621 ↩ https://seclab.stanford.edu/websec/chromium/chromium-security-architecture.pdf ↩
Posted by Jianing Sandra Guo, Product Manager and Nataliya Stanetsky, Staff Program Manager, Android 97 phones are robbed or stolen every hour in Brazil. The GSM Association reports millions of devices stolen every year, and the numbers continue to grow. With our phones becoming increasingly central to storing sensitive data, like payment information and personal details, losing one can be an unsettling experience. That’s why we developed and thoroughly beta tested, a full suite of features designed to protect you and your data at every stage – before, during, and after device theft. These advanced theft protection features are now available to users around the world through Android 15 and a Google Play Services update (Android 10+ devices). AI-powered protection for your device the moment it is stolen Theft Detection Lock uses powerful AI to proactively protect you at the moment of a theft attempt. By using on-device machine learning, Theft Detection Lock is able to analyze various device signals to detect potential theft attempts. If the algorithm detects a potential theft attempt on your unlocked device, it locks your screen to keep thieves out. To protect your sensitive data if your phone is stolen, Theft Detection Lock uses device sensors to identify theft attempts. We’re working hard to bring this feature to as many devices as possible. This feature is rolling out gradually to ensure compatibility with various devices, starting today with Android devices that cover 90% of active users worldwide. Check your theft protection settings page periodically to see if your device is currently supported. In addition to Theft Detection Lock, Offline Device Lock protects you if a thief tries to take your device offline to extract data or avoid a remote wipe via Android’s Find My Device. If an unlocked device goes offline for prolonged periods, this feature locks the screen to ensure your phone can’t be used in the hands of a thief. If your Android device does become lost or stolen, Remote Lock can quickly help you secure it. Even if you can’t remember your Google account credentials in the moment of theft, you can use any device to visit Android.com/lock and lock your phone with just a verified phone number. Remote Lock secures your device while you regain access through Android’s Find My Device – which lets you secure, locate or remotely wipe your device. As a security best practice, we always recommend backing up your device on a continuous basis, so remotely wiping your device is not an issue. These features are now available on most Android 10+ devices1 via a Google Play Services update and must be enabled in settings. Advanced security to deter theft before it happens Changes to sensitive settings like Find My Device now require your PIN, password, or biometric authentication. Multiple failed login attempts, which could be a sign that a thief is trying to guess your password, will lock down your device, preventing unauthorized access. And enhanced factory reset protection makes it even harder for thieves to reset your device without your Google account credentials, significantly reducing its resale value and protecting your data. Identity Check, an opt-in feature that will add an extra layer of protection by requiring biometric authentication when accessing critical Google account and device settings, like changing your PIN, disabling theft protection, or accessing Passkeys from an untrusted location. This helps prevent unauthorized access even if your device PIN is compromised. Real-world protection for billions of Android users You can turn on the new Android theft features by clicking here on a supported Android device. Learn more about our theft protection features by visiting our help center. Notes Android Go smartphones, tablets and wearables are not supported ↩
Posted by Adrian Taylor, Security Engineer, Chrome .code { font-family: "Courier New", Courier, monospace; font-size: 11.8px; font-weight: bold; background-color: #f4f4f4; padding: 2px; border: 1px solid #ccc; border-radius: 2px; white-space: pre-wrap; display: inline-block; line-height: 12px; } .highlight { color: red; } Chrome’s user interface (UI) code is complex, and sometimes has bugs. Are those bugs security bugs? Specifically, if a user’s clicks and actions result in memory corruption, is that something that an attacker can exploit to harm that user? Our security severity guidelines say “yes, sometimes.” For example, an attacker could very likely convince a user to click an autofill prompt, but it will be much harder to convince the user to step through a whole flow of different dialogs. Even if these bugs aren’t the most easily exploitable, it takes a great deal of time for our security shepherds to make these determinations. User interface bugs are often flakey (that is, not reliably reproducible). Also, even if these bugs aren’t necessarily deemed to be exploitable, they may still be annoying crashes which bother the user. It would be great if we could find these bugs automatically. If only the whole tree of Chrome UI controls were exposed, somehow, such that we could enumerate and interact with each UI control automatically. Aha! Chrome exposes all the UI controls to assistive technology. Chrome goes to great lengths to ensure its entire UI is exposed to screen readers, braille devices and other such assistive tech. This tree of controls includes all the toolbars, menus, and the structure of the page itself. This structural definition of the browser user interface is already sometimes used in other contexts, for example by some password managers, demonstrating that investing in accessibility has benefits for all users. We’re now taking that investment and leveraging it to find security bugs, too. Specifically, we’re now “fuzzing” that accessibility tree - that is, interacting with the different UI controls semi-randomly to see if we can make things crash. This technique has a long pedigree. Screen reader technology is a bit different on each platform, but on Linux the tree can be explored using Accerciser. Screenshot of Accerciser showing the tree of UI controls in Chrome All we have to do is explore the same tree of controls with a fuzzer. How hard can it be? “We do this not because it is easy, but because we thought it would be easy” - Anon. Actually we never thought this would be easy, and a few different bits of tech have had to fall into place to make this possible. Specifically, There are lots of combinations of ways to interact with Chrome. Truly randomly clicking on UI controls probably won’t find bugs - we would like to leverage coverage-guided fuzzing to help the fuzzer select combinations of controls that seem to reach into new code within Chrome. We need any such bugs to be genuine. We therefore need to fuzz the actual Chrome UI, or something very similar, rather than exercising parts of the code in an unrealistic unit-test-like context. That’s where our InProcessFuzzer framework comes into play - it runs fuzz cases within a Chrome browser_test; essentially a real version of Chrome. But such browser_tests have a high startup cost. We need to amortize that cost over thousands of test cases by running a batch of them within each browser invocation. Centipede is designed to do that. But each test case won’t be idempotent. Within a given invocation of the browser, the UI state may be successively modified by each test case. We intend to add concatenation to centipede to resolve this. Chrome is a noisy environment with lots of timers, which may well confuse coverage-guided fuzzers. Gathering coverage for such a large binary is slow in itself. So, we don’t know if coverage-guided fuzzing will successfully explore the UI paths here. All of these concerns are common to the other fuzzers which run in the browser_test context, most notably our new IPC fuzzer (blog posts to follow). But the UI fuzzer presented some specific challenges. Finding UI bugs is only useful if they’re actionable. Ideally, that means: Our fuzzing infrastructure gives a thorough set of diagnostics. It can bisect to find when the bug was introduced and when it was fixed. It can minimize complex test cases into the smallest possible reproducer. The test case is descriptive and says which UI controls were used, so a human may be able to reproduce it. Fuzzers are unlikely to stumble across these control names by chance, even with the instrumentation applied to string comparisons. In fact, this by-name approach turned out to be only 20% as effective as picking controls by ordinal. To resolve this we added a custom mutator which is smart enough to put in place control names and roles which are known to exist. We randomly use this mutator or the standard libprotobuf-mutator in order to get the best of both worlds. This approach has proven to be about 80% as quick as the original ordinal-based mutator, while providing stable test cases. Chart of code coverage achieved by minutes fuzzing with different strategies So, does any of this work? We don’t know yet! - and you can follow along as we find out. The fuzzer found a couple of potential bugs (currently access restricted) in the accessibility code itself but hasn’t yet explored far enough to discover bugs in Chrome’s fundamental UI. But, at the time of writing, this has only been running on our ClusterFuzz infrastructure for a few hours, and isn’t yet working on our coverage dashboard. If you’d like to follow along, keep an eye on our coverage dashboard as it expands to cover UI code.
Posted by Sherk Chung, Stephan Chen, Pixel team, and Roger Piqueras Jover, Ivan Lozano, Android team Pixel phones have earned a well-deserved reputation for being security-conscious. In this blog, we'll take a peek under the hood to see how Pixel mitigates common exploits on cellular basebands. Smartphones have become an integral part of our lives, but few of us think about the complex software that powers them, especially the cellular baseband – the processor on the device responsible for handling all cellular communication (such as LTE, 4G, and 5G). Most smartphones use cellular baseband processors with tight performance constraints, making security hardening difficult. Security researchers have increasingly exploited this attack vector and routinely demonstrated the possibility of exploiting basebands used in popular smartphones. The good news is that Pixel has been deploying security hardening mitigations in our basebands for years, and Pixel 9 represents the most hardened baseband we've shipped yet. Below, we’ll dive into why this is so important, how specifically we’ve improved security, and what this means for our users. The Cellular Baseband The cellular baseband within a smartphone is responsible for managing the device's connectivity to cellular networks. This function inherently involves processing external inputs, which may originate from untrusted sources. For instance, malicious actors can employ false base stations to inject fabricated or manipulated network packets. In certain protocols like IMS (IP Multimedia Subsystem), this can be executed remotely from any global location using an IMS client. The firmware within the cellular baseband, similar to any software, is susceptible to bugs and errors. In the context of the baseband, these software vulnerabilities pose a significant concern due to the heightened exposure of this component within the device's attack surface. There is ample evidence demonstrating the exploitation of software bugs in modem basebands to achieve remote code execution, highlighting the critical risk associated with such vulnerabilities. The State of Baseband Security Baseband security has emerged as a prominent area of research, with demonstrations of software bug exploitation featuring in numerous security conferences. Many of these conferences now also incorporate training sessions dedicated to baseband firmware emulation, analysis, and exploitation techniques. Recent reports by security researchers have noted that most basebands lack exploit mitigations commonly deployed elsewhere and considered best practices in software development. Mature software hardening techniques that are commonplace in the Android operating system, for example, are often absent from cellular firmwares of many popular smartphones. There are clear indications that exploit vendors and cyber-espionage firms abuse these vulnerabilities to breach the privacy of individuals without their consent. For example, 0-day exploits in the cellular baseband are being used to deploy the Predator malware in smartphones. Additionally, exploit marketplaces explicitly list baseband exploits, often with relatively low payouts, suggesting a potential abundance of such vulnerabilities. These vulnerabilities allow attackers to gain unauthorized access to a device, execute arbitrary code, escalate privileges, or extract sensitive information. Recognizing these industry trends, Android and Pixel have proactively updated their Vulnerability Rewards Program in recent years, placing a greater emphasis on identifying and addressing exploitable bugs in connectivity firmware. Building a Fortress: Proactive Defenses in the Pixel Modem In response to the rising threat of baseband security attacks, Pixel has incrementally incorporated many of the following proactive defenses over the years, with the Pixel 9 phones (Pixel 9, Pixel 9 Pro, Pixel 9 Pro XL and Pixel 9 Pro Fold) showcasing the latest features: Bounds Sanitizer: Buffer overflows occur when a bug in code allows attackers to cram too much data into a space, causing it to spill over and potentially corrupt other data or execute malicious code. Bounds Sanitizer automatically adds checks around a specific subset of memory accesses to ensure that code does not access memory outside of designated areas, preventing memory corruption. Integer Overflow Sanitizer: Numbers matter, and when they get too large an “overflow” can cause them to be incorrectly interpreted as smaller values. The reverse can happen as well, a number can overflow in the negative direction as well and be incorrectly interpreted as a larger value. These overflows can be exploited by attackers to cause unexpected behavior. Integer Overflow Sanitizer adds checks around these calculations to eliminate the risk of memory corruption from this class of vulnerabilities. Stack Canaries: Stack canaries are like tripwires set up to ensure code executes in the expected order. If a hacker tries to exploit a vulnerability in the stack to change the flow of execution without being mindful of the canary, the canary "trips," alerting the system to a potential attack. Control Flow Integrity (CFI): Similar to stack canaries, CFI makes sure code execution is constrained along a limited number of paths. If an attacker tries to deviate from the allowed set of execution paths, CFI causes the modem to restart rather than take the unallowed execution path. Auto-Initialize Stack Variables: When memory is designated for use, it’s not normally initialized in C/C+ as it is expected the developer will correctly set up the allocated region. When a developer fails to handle this correctly, the uninitialized values can leak sensitive data or be manipulated by attackers to gain code execution. Pixel phones automatically initialize stack variables to zero, preventing this class of vulnerabilities for stack data. We also leverage a number of bug detection tools, such as address sanitizer, during our testing process. This helps us identify software bugs and patch them prior to shipping devices to our users. The Pixel Advantage: Combining Protections for Maximum Security Security hardening is difficult and our work is never done, but when these security measures are combined, they significantly increase Pixel 9’s resilience to baseband attacks. Pixel's proactive approach to security demonstrates a commitment to protecting its users across the entire software stack. Hardening the cellular baseband against remote attacks is just one example of how Pixel is constantly working to stay ahead of the curve when it comes to security. Special thanks to our colleagues who supported our cellular baseband hardening efforts: Dominik Maier, Shawn Yang, Sami Tolvanen, Pirama Arumuga Nainar, Stephen Hines, Kevin Deus, Xuan Xing, Eugene Rodionov, Stephan Somogyi, Wes Johnson, Suraj Harjani, Morgan Shen, Valery Wu, Clint Chen, Cheng-Yi He, Estefany Torres, Hungyen Weng, Jerry Hung, Sherif Hanna
Posted by Alex Gough, Chrome Security Team The Chrome Security Team is constantly striving to make it safer to browse the web. We invest in mechanisms to make classes of security bugs impossible, mitigations that make it more difficult to exploit a security bug, and sandboxing to reduce the capability exposed by an isolated security issue. When choosing where to invest it is helpful to consider how bad actors find and exploit vulnerabilities. In this post we discuss several axes along which to evaluate the potential harm to users from exploits, and how they apply to the Chrome browser. Historically the Chrome Security Team has made major investments and driven the web to be safer. We pioneered browser sandboxing, site isolation and the migration to an encrypted web. Today we’re investing in Rust for memory safety, hardening our existing C++ code-base, and improving detection with GWP-asan and lightweight use-after-free (UAF) detection. Considerations of user-harm and attack utility shape our vulnerability severity guidelines and payouts for bugs reported through our Vulnerability Rewards Program. In the longer-term the Chrome Security Team advocates for operating system improvements like less-capable lightweight processes, less-privileged GPU and NPU containers, improved application isolation, and support for hardware-based isolation, memory safety and flow control enforcement. When contemplating a particular security change it is easy to fall into a trap of security nihilism. It is tempting to reject changes that do not make exploitation impossible but only make it more difficult. However, the scale we are operating at can still make incremental improvements worthwhile. Over time, and over the population that uses Chrome and browsers based on Chromium, these improvements add up and impose real costs on attackers. Threat Model for Code Execution User Harm ⇔ Attacker Utility Good Bugs and Bad Bugs Reliable Low-interaction Ubiquitous Fast Scriptable weird machine. Bugs that are adjacent to a scripting engine like JavaScript are easier to trigger - making some bugs in third party libraries more serious in Chrome than they might be in other contexts. Bugs in a tightly coupled API like WebGPU are easy to script. Chrome extensions can manipulate Chrome’s internal state and user-interface (for example, they can open, close and rearrange tabs), making some user-interaction scriptable. Easy to Test Silent Long-lived Targeted Easy to escalate Easy to find Difficult to find Attacker Goals & Economics Avoid linear thinking slices of swiss-cheese). In reality the terrain presented to the universe of attackers is a complex web of latent possibilities, some known to some, and many yet to be discovered. This is more than ‘attackers think in graphs’, as we must acknowledge that a defensive intervention can succeed even if it does not prevent every attacker from reaching every possible person they wish to exploit. Conclusion Resources Project Zero: What is a "good" memory corruption vulnerability? An Introduction to Exploit Reliability & What is a "good" Linux Kernel bug? (Isosceles) Zero Day Markets with Mark Dowd (Security Cryptography Whatever podcast) Escaping the Sandbox (Chrome and Adobe Pdf Reader) on Windows, Zer0Con 2024, Zhiniang Peng, R4nger, Q4n Exploring Memory Safety in Critical Open Source Projects (CISA.gov)
Posted by Jeff Vander Stoep - Android team, and Alex Rebert - Security Foundations Memory safety vulnerabilities remain a pervasive threat to software security. At Google, we believe the path to eliminating this class of vulnerabilities at scale and building high-assurance software lies in Safe Coding, a secure-by-design approach that prioritizes transitioning to memory-safe languages. This post demonstrates why focusing on Safe Coding for new code quickly and counterintuitively reduces the overall security risk of a codebase, finally breaking through the stubbornly high plateau of memory safety vulnerabilities and starting an exponential decline, all while being scalable and cost-effective. We’ll also share updated data on how the percentage of memory safety vulnerabilities in Android dropped from 76% to 24% over 6 years as development shifted to memory safe languages. Counterintuitive results 1 as new memory unsafe development slows down, and new memory safe development starts to take over: The math vulnerabilities decay exponentially. They have a half-life. The distribution of vulnerability lifetime follows an exponential distribution given an average vulnerability lifetime λ: 2 published in 2022 in Usenix Security confirmed this phenomenon. Researchers found that the vast majority of vulnerabilities reside in new or recently modified code: observation, published in 2021, that the density of Android’s memory safety bugs decreased with the age of the code, primarily residing in recent changes. This leads to two important takeaways: The problem is overwhelmingly with new code, necessitating a fundamental change in how we develop code. Code matures and gets safer with time, exponentially, making the returns on investments like rewrites diminish over time as code gets older. For example, based on the average vulnerability lifetimes, 5-year-old code has a 3.4x (using lifetimes from the study) to 7.4x (using lifetimes observed in Android and Chromium) lower vulnerability density than new code. In real life, as with our simulation, when we start to prioritize prevention, the situation starts to rapidly improve. In practice on Android reported this decline in 2022, and we continue to see the total number of memory safety vulnerabilities dropping3. Note that the data for 2024 is extrapolated to the full year (represented as 36, but currently at 27 after the September security bulletin). previous post, memory safety vulnerabilities tend to be significantly more severe, more likely to be remotely reachable, more versatile, and more likely to be maliciously exploited than other vulnerability types. As the number of memory safety vulnerabilities have dropped, the overall security risk has dropped along with it. Evolution of memory safety strategies 1st generation: reactive patching. The initial focus was mainly on fixing vulnerabilities reactively. For problems as rampant as memory safety, this incurs ongoing costs on the business and its users. Software manufacturers have to invest significant resources in responding to frequent incidents. This leads to constant security updates, leaving users vulnerable to unknown issues, and frequently albeit temporarily vulnerable to known issues, which are getting exploited ever faster. 2nd generation: proactive mitigating. The next approach consisted of reducing risk in vulnerable software, including a series of exploit mitigation strategies that raised the costs of crafting exploits. However, these mitigations, such as stack canaries and control-flow integrity, typically impose a recurring cost on products and development teams, often putting security and other product requirements in conflict: They come with performance overhead, impacting execution speed, battery life, tail latencies, and memory usage, sometimes preventing their deployment. Attackers are seemingly infinitely creative, resulting in a cat-and-mouse game with defenders. In addition, the bar to develop and weaponize an exploit is regularly being lowered through better tooling and other advancements. 3rd generation: proactive vulnerability discovery. The following generation focused on detecting vulnerabilities. This includes sanitizers, often paired with fuzzing like libfuzzer, many of which were built by Google. While helpful, these methods address the symptoms of memory unsafety, not the root cause. They typically require constant pressure to get teams to fuzz, triage, and fix their findings, resulting in low coverage. Even when applied thoroughly, fuzzing does not provide high assurance, as evidenced by vulnerabilities found in extensively fuzzed code. Products across the industry have been significantly strengthened by these approaches, and we remain committed to responding to, mitigating, and proactively hunting for vulnerabilities. Having said that, it has become increasingly clear that those approaches are not only insufficient for reaching an acceptable level of risk in the memory-safety domain, but incur ongoing and increasing costs to developers, users, businesses, and products. As highlighted by numerous government agencies, including CISA, in their secure-by-design report, "only by incorporating secure by design practices will we break the vicious cycle of constantly creating and applying fixes." The fourth generation: high-assurance prevention success in eliminating other vulnerability classes like XSS. The foundation of this shift is Safe Coding, which enforces security invariants directly into the development platform through language features, static analysis, and API design. The result is a secure by design ecosystem providing continuous assurance at scale, safe from the risk of accidentally introducing vulnerabilities. The shift from previous generations to Safe Coding can be seen in the quantifiability of the assertions that are made when developing code. Instead of focusing on the interventions applied (mitigations, fuzzing), or attempting to use past performance to predict future security, Safe Coding allows us to make strong assertions about the code's properties and what can or cannot happen based on those properties. Safe Coding's scalability lies in its ability to reduce costs by: Breaking the arms race: Instead of an endless arms race of defenders attempting to raise attackers’ costs by also raising their own, Safe Coding leverages our control of developer ecosystems to break this cycle by focusing on proactively building secure software from the start. Commoditizing high assurance memory safety: Rather than precisely tailoring interventions to each asset's assessed risk, all while managing the cost and overhead of reassessing evolving risks and applying disparate interventions, Safe Coding establishes a high baseline of commoditized security, like memory-safe languages, that affordably reduces vulnerability density across the board. Modern memory-safe languages (especially Rust) extend these principles beyond memory safety to other bug classes. Increasing productivity: Safe Coding improves code correctness and developer productivity by shifting bug finding further left, before the code is even checked in. We see this shift showing up in important metrics such as rollback rates (emergency code revert due to an unanticipated bug). The Android team has observed that the rollback rate of Rust changes is less than half that of C++. From lessons to action Interoperability is the new rewrite Kotlin. To that end, earlier this year, Google provided a $1,000,000 grant to the Rust Foundation, in addition to developing interoperability tooling like Crubit and autocxx. Role of previous generations More selective use of proactive mitigations: We expect less reliance on exploit mitigations as we transition to memory-safe code, leading to not only safer software, but also more efficient software. For instance, after removing the now unnecessary sandbox, Chromium's Rust QR code generator is 20 times faster. Decreased use, but increased effectiveness of proactive detection: We anticipate a decreased reliance on proactive detection approaches like fuzzing, but increased effectiveness, as achieving comprehensive coverage over small well-encapsulated code snippets becomes more feasible. Final thoughts even in large existing systems. The concept is simple: once we turn off the tap of new vulnerabilities, they decrease exponentially, making all of our code safer, increasing the effectiveness of security design, and alleviating the scalability challenges associated with existing memory safety strategies such that they can be applied more effectively in a targeted manner. This approach has proven successful in eliminating entire vulnerability classes and its effectiveness in tackling memory safety is increasingly evident based on more than half a decade of consistent results in Android. We'll be sharing more about our secure-by-design efforts in the coming months. Acknowledgements Notes Simulation was based on numbers similar to Android and other Google projects. The code base doubles every 6 years. The average lifetime for vulnerabilities is 2.5 years. It takes 10 years to transition to memory safe languages for new code, and we use a sigmoid function to represent the transition. Note that the use of the sigmoid function is why the second chart doesn’t initially appear to be exponential. ↩ "How Long Do Vulnerabilities Live in the Code? A Large-Scale Empirical Measurement Study on FOSS Vulnerability Lifetimes". USENIX Security 22. ↩ ↩
Posted by Xuan Xing, Eugene Rodionov, Jon Bottarini, Adam Bacchus - Android Red Team; Amit Chaudhary, Lyndon Fawcett, Joseph Artgole - Arm Product Security Team Who cares about GPUs? CVE-2023-4295, CVE-2023-21106, CVE-2021-0884, and more. Most exploitable GPU vulnerabilities are in the implementation of the GPU kernel mode modules. These modules are pieces of code that load/unload during runtime, extending functionality without the need to reboot the device. Proactive testing is good hygiene as it can lead to the detection and resolution of new vulnerabilities before they’re exploited. It’s also one of the most complex investigations to do as you don’t necessarily know where the vulnerability will appear (that’s the point!). By combining the expertise of Google’s engineers with IP owners and OEMs, we can ensure the Android ecosystem retains a strong measure of integrity. Why investigate GPUs? Functionality vs. Security Tradeoffs Nobody wants a slow, unresponsive device; any hits to GPU performance could result in a noticeably degraded user experience. As such, the GPU software stack in Android relies on an in-process HAL model where the API & user space drivers communicating with the GPU kernel mode module are running directly within the context of apps, thus avoiding IPC (interprocess communication). This opens the door for potentially untrusted code from a third party app being able to directly access the interface exposed by the GPU kernel module. If there are any vulnerabilities in the module, the third party app has an avenue to exploit them. As a result, a potentially untrusted code running in the context of the third party application is able to directly access the interface exposed by the GPU kernel module and exploit potential vulnerabilities in the kernel module. Variety & Memory Safety Additionally, the implementation of GPU subsystems (and kernel modules specifically) from major OEMs are increasingly complex. Kernel modules for most GPUs are typically written in memory unsafe languages such as C, which are susceptible to memory corruption vulnerabilities like buffer overflow. Can someone do something about this? Android Red Team The Android Red Team performs time-bound security assessment engagements on all aspects of the Android open source codebase and conducts regular security reviews and assessments of internal Android components. Throughout these engagements, the Android Red Team regularly collaborates with 3rd party software and hardware providers to analyze and understand proprietary and “closed source” code repositories and relevant source code that are utilized by Android products with the sole objective to identify security risks and potential vulnerabilities before they can be exploited by adversaries outside of Android. This year, the Android Red Team collaborated directly with our industry partner, Arm, to conduct the Mali GPU engagement and further secure millions of Android devices. Arm Product Security and GPU Teams Arm has a central product security team that sets the policy and practice across the company. They also have dedicated product security experts embedded in engineering teams. Arm operates a systematic approach which is designed to prevent, discover, and eliminate security vulnerabilities. This includes a Security Development Lifecycle (SDL), a Monitoring capability, and Incident Response. For this collaboration the Android Red Teams were supported by the embedded security experts based in Arm’s GPU engineering team. Working together to secure Android devices Assess the impact on the broadest segment of the Android Ecosystem: The Arm Mali GPU is one of the most used GPUs by original equipment manufacturers (OEMs) and is found in many popular mobile devices. By focusing on the Arm Mali GPU, the Android Red Team could assess the security of a GPU implementation running on millions of Android devices worldwide. Evaluate the reference implementation and vendor-specific changes: Phone manufacturers often modify the upstream implementation of GPUs. This tailors the GPU to the manufacturer's specific device(s). These modifications and enhancements are always challenging to make, and can sometimes introduce security vulnerabilities that are not present in the original version of the GPU upstream. In this specific instance, the Google Pixel team actively worked with the Android Red Team to better understand and secure the modifications they made for Pixel devices. Improvements Testing the kernel driver Pixel security bulletin 2023-12-01. Testing the firmware the July 2024 Android Security Bulletin. CVE-2024-0153 happens when GPU firmware handles certain instructions. When handling such instructions, the firmware copies register content into a buffer. There are size checks before the copy operation. However, under very specific conditions, an out-of-bounds write happens to the destination buffer, leading to a buffer overflow. When carefully manipulated, this overflow will overwrite some other important structures following the buffer, causing code execution inside of the GPU firmware. The conditions necessary to reach and potentially exploit this issue are very complex as it requires a deep understanding of how instructions are executed. With collective expertise, the Android Red Team and Arm were able to verify the exploitation path and leverage the issue to gain limited control of GPU firmware. This eventually circled back to the kernel to obtain privilege escalation. Arm did an excellent job to respond quickly and remediate the issue. Altogether, this highlights the strength of collaboration between both teams to dive deeper. Time to Patch STS) tests were built to help partners automatically check their builds for missing Mali kbase patches. (Security Test Suite is software provided by Google to help partners automate the process of checking their builds for missing security patches.) What’s Next? product security page. The Android Red Team and Arm continue to work together to proactively raise the bar on GPU security. With thorough testing, rapid fixing, and updates to the security test suite, we’re improving the ecosystem for Android users. The Android Red Team looks forward to replicating this working relationship with other ecosystem partners to make devices more secure.
Posted by David Adrian, David Benjamin, Bob Beck & Devon O'Brien, Chrome Team We previously posted about experimenting with a hybrid post-quantum key exchange, and enabling it for 100% of Chrome Desktop clients. The hybrid key exchange used both the pre-quantum X25519 algorithm, and the new post-quantum algorithm Kyber. At the time, the NIST standardization process for Kyber had not yet finished. Since then, the Kyber algorithm has been standardized with minor technical changes and renamed to the Module Lattice Key Encapsulation Mechanism (ML-KEM). We have implemented ML-KEM in Google’s cryptography library, BoringSSL, which allows for it to be deployed and utilized by services that depend on this library. The changes to the final version of ML-KEM make it incompatible with the previously deployed version of Kyber. As a result, the codepoint in TLS for hybrid post-quantum key exchange is changing from 0x6399 for Kyber768+X25519, to 0x11EC for ML-KEM768+X25519. To handle this, we will be making the following changes in Chrome 1311: Chrome will switch from supporting Kyber to ML-KEM Chrome will offer a key share prediction for hybrid ML-KEM (codepoint 0x11EC) The PostQuantumKeyAgreementEnabled flag and enterprise policy will apply to both Kyber and ML-KEM Chrome will no longer support hybrid Kyber (codepoint 0x6399) Chrome will not support Kyber and ML-KEM at the same time. We made this decision for several reasons: Kyber was always experimental, so we think continuing to support it risks ossification on non-standard algorithms. Post-quantum cryptography is too big to be able to offer two post-quantum key share predictions at the same time. Server operators can temporarily support both algorithms at the same time to maintain post-quantum security with a broader set of clients, as they update over time. We do not want to regress any clients’ post-quantum security, so we are waiting until Chrome 131 to make this change so that server operators have a chance to update their implementations. Longer term, we hope to avoid the chicken-and-egg problem for post-quantum key share predictions through our emerging IETF draft for key share prediction. This allows servers to broadcast what algorithms they support in DNS, so that clients can predict a key share that a server is known to support. This avoids the risk of an extra round trip, which can be particularly costly when using large post-quantum algorithms. We’re excited to continue to improve security for Chrome users, against both current and future computers. Notes Chrome Canary, Dev, and Beta may see these changes prior to Chrome 131. ↩
Posted by Ivan Lozano and Dominik Maier, Android Team Android's use of safe-by-design principles drives our adoption of memory-safe languages like Rust, making exploitation of the OS increasingly difficult with every release. To provide a secure foundation, we’re extending hardening and the use of memory-safe languages to low-level firmware (including in Trusty apps). In this blog post, we'll show you how to gradually introduce Rust into your existing firmware, prioritizing new code and the most security-critical code. You'll see how easy it is to boost security with drop-in Rust replacements, and we'll even demonstrate how the Rust toolchain can handle specialized bare-metal targets. Drop-in Rust replacements for C code are not a novel idea and have been used in other cases, such as librsvg’s adoption of Rust which involved replacing C functions with Rust functions in-place. We seek to demonstrate that this approach is viable for firmware, providing a path to memory-safety in an efficient and effective manner. Memory Safety for Firmware Firmware serves as the interface between hardware and higher-level software. Due to the lack of software security mechanisms that are standard in higher-level software, vulnerabilities in firmware code can be dangerously exploited by malicious actors. Modern phones contain many coprocessors responsible for handling various operations, and each of these run their own firmware. Often, firmware consists of large legacy code bases written in memory-unsafe languages such as C or C++. Memory unsafety is the leading cause of vulnerabilities in Android, Chrome, and many other code bases. Rust provides a memory-safe alternative to C and C++ with comparable performance and code size. Additionally it supports interoperability with C with no overhead. The Android team has discussed Rust for bare-metal firmware previously, and has developed training specifically for this domain. Incremental Rust Adoption Our incremental approach focusing on replacing new and highest risk existing code (for example, code which processes external untrusted input) can provide maximum security benefits with the least amount of effort. Simply writing any new code in Rust reduces the number of new vulnerabilities and over time can lead to a reduction in the number of outstanding vulnerabilities. You can replace existing C functionality by writing a thin Rust shim that translates between an existing Rust API and the C API the codebase expects. The C API is replicated and exported by the shim for the existing codebase to link against. The shim serves as a wrapper around the Rust library API, bridging the existing C API and the Rust API. This is a common approach when rewriting or replacing existing libraries with a Rust alternative. Challenges and Considerations There are several challenges you need to consider before introducing Rust to your firmware codebase. In the following section we address the general state of no_std Rust (that is, bare-metal Rust code), how to find the right off-the-shelf crate (a rust library), porting an std crate to no_std, using Bindgen to produce FFI bindings, how to approach allocators and panics, and how to set up your toolchain. The Rust Standard Library and Bare-Metal Environments Rust's standard library consists of three crates: core, alloc, and std. The core crate is always available. The alloc crate requires an allocator for its functionality. The std crate assumes a full-blown operating system and is commonly not supported in bare-metal environments. A third-party crate indicates it doesn’t rely on std through the crate-level #![no_std] attribute. This crate is said to be no_std compatible. The rest of the blog will focus on these. Choosing a Component to Replace When choosing a component to replace, focus on self-contained components with robust testing. Ideally, the components functionality can be provided by an open-source implementation readily available which supports bare-metal environments. Parsers which handle standard and commonly used data formats or protocols (such as, XML or DNS) are good initial candidates. This ensures the initial effort focuses on the challenges of integrating Rust with the existing code base and build system rather than the particulars of a complex component and simplifies testing. This approach eases introducing more Rust later on. Choosing a Pre-Existing Crate (Rust Library) Picking the right open-source crate (Rust library) to replace the chosen component is crucial. Things to consider are: Is the crate well maintained, for example, are open issues being addressed and does it use recent crate versions? How widely used is the crate? This may be used as a quality signal, but also important to consider in the context of using crates later on which may depend on it. Does the crate have acceptable documentation? Does it have acceptable test coverage? Additionally, the crate should ideally be no_std compatible, meaning the standard library is either unused or can be disabled. While a wide range of no_std compatible crates exist, others do not yet support this mode of operation – in those cases, see the next section on converting a std library to no_std. By convention, crates which optionally support no_std will provide an std feature to indicate whether the standard library should be used. Similarly, the alloc feature usually indicates using an allocator is optional. Note: Even when a library declares #![no_std] in its source, there is no guarantee that its dependencies don’t depend on std. We recommend looking through the dependency tree to ensure that all dependencies support no_std, or test whether the library compiles for a no_std target. The only way to know is currently by trying to compile the crate for a bare-metal target. For example, one approach is to run cargo check with a bare-metal toolchain provided through rustup: $ rustup target add aarch64-unknown-none $ cargo check --target aarch64-unknown-none --no-default-features Porting a std Library to no_std If a library does not support no_std, it might still be possible to port it to a bare-metal environment – especially file format parsers and other OS agnostic workloads. Higher-level functionality such as file handling, threading, and async code may present more of a challenge. In those cases, such functionality can be hidden behind feature flags to still provide the core functionality in a no_std build. To port a std crate to no_std (core+alloc): In the cargo.toml file, add a std feature, then add this std feature to the default features Add the following lines to the top of the lib.rs: #![no_std] #[cfg(feature = "std")] extern crate std; extern crate alloc; Then, iteratively fix all occurring compiler errors as follows: Move any use directives from std to either core or alloc. Add use directives for all types that would otherwise automatically be imported by the std prelude, such as alloc::vec::Vec and alloc::string::String. Hide anything that doesn't exist in core or alloc and cannot otherwise be supported in the no_std build (such as file system accesses) behind a #[cfg(feature = "std")] guard. Anything that needs to interact with the embedded environment may need to be explicitly handled, such as functions for I/O. These likely need to be behind a #[cfg(not(feature = "std"))] guard. Disable std for all dependencies (that is, change their definitions in Cargo.toml, if using Cargo). This needs to be repeated for all dependencies within the crate dependency tree that do not support no_std yet. Custom Target Architectures There are a number of officially supported targets by the Rust compiler, however, many bare-metal targets are missing from that list. Thankfully, the Rust compiler lowers to LLVM IR and uses an internal copy of LLVM to lower to machine code. Thus, it can support any target architecture that LLVM supports by defining a custom target. Defining a custom target requires a toolchain built with the channel set to dev or nightly. Rust’s Embedonomicon has a wealth of information on this subject and should be referred to as the source of truth. To give a quick overview, a custom target JSON file can be constructed by finding a similar supported target and dumping the JSON representation: $ rustc --print target-list [...] armv7a-none-eabi [...] $ rustc -Z unstable-options --print target-spec-json --target armv7a-none-eabi This will print out a target JSON that looks something like: $ rustc --print target-spec-json -Z unstable-options --target=armv7a-none-eabi { "abi": "eabi", "arch": "arm", "c-enum-min-bits": 8, "crt-objects-fallback": "false", "data-layout": "e-m:e-p:32:32-Fi8-i64:64-v128:64:128-a:0:32-n32-S64", [...] } This output can provide a starting point for defining your target. Of particular note, the data-layout field is defined in the LLVM documentation. Once the target is defined, libcore and liballoc (and libstd, if applicable) must be built from source for the newly defined target. If using Cargo, building with -Z build-std accomplishes this, indicating that these libraries should be built from source for your target along with your crate module: # set build-std to the list of libraries needed cargo build -Z build-std=core,alloc --target my_target.json Building Rust With LLVM Prebuilts If the bare-metal architecture is not supported by the LLVM bundled internal to the Rust toolchain, a custom Rust toolchain can be produced with any LLVM prebuilts that support the target. The instructions for building a Rust toolchain can be found in detail in the Rust Compiler Developer Guide. In the config.toml, llvm-config must be set to the path of the LLVM prebuilts. You can find the latest Rust Toolchain supported by a particular version of LLVM by checking the release notes and looking for releases which bump up the minimum supported LLVM version. For example, Rust 1.76 bumped the minimum LLVM to 16 and 1.73 bumped the minimum LLVM to 15. That means with LLVM15 prebuilts, the latest Rust toolchain that can be built is 1.75. Creating a Drop-In Rust Shim To create a drop-in replacement for the C/C++ function or API being replaced, the shim needs two things: it must provide the same API as the replaced library and it must know how to run in the firmware’s bare-metal environment. Exposing the Same API The first is achieved by defining a Rust FFI interface with the same function signatures. We try to keep the amount of unsafe Rust as minimal as possible by putting the actual implementation in a safe function and exposing a thin wrapper type around. For example, the FreeRTOS coreJSON example includes a JSON_Validate C function with the following signature: JSONStatus_t JSON_Validate( const char * buf, size_t max ); We can write a shim in Rust between it and the memory safe serde_json crate to expose the C function signature. We try to keep the unsafe code to a minimum and call through to a safe function early: #[no_mangle] pub unsafe extern "C" fn JSON_Validate(buf: *const c_char, len: usize) -> JSONStatus_t { if buf.is_null() { JSONStatus::JSONNullParameter as _ } else if len == 0 { JSONStatus::JSONBadParameter as _ } else { json_validate(slice_from_raw_parts(buf as _, len).as_ref().unwrap()) as _ } } // No more unsafe code in here. fn json_validate(buf: &[u8]) -> JSONStatus { if serde_json::from_slice::(buf).is_ok() { JSONStatus::JSONSuccess } else { ILLEGAL_DOC } } Note: This is a very simple example. For a highly resource constrained target, you can avoid alloc and use serde_json_core, which has even lower overhead but requires pre-defining the JSON structure so it can be allocated on the stack. For further details on how to create an FFI interface, the Rustinomicon covers this topic extensively. Calling Back to C/C++ Code In order for any Rust component to be functional within a C-based firmware, it will need to call back into the C code for things such as allocations or logging. Thankfully, there are a variety of tools available which automatically generate Rust FFI bindings to C. That way, C functions can easily be invoked from Rust. The standard means of doing this is with the Bindgen tool. You can use Bindgen to parse all relevant C headers that define the functions Rust needs to call into. It's important to invoke Bindgen with the same CFLAGS as the code in question is built with, to ensure that the bindings are generated correctly. Experimental support for producing bindings to static inline functions is also available. Hooking Up The Firmware’s Bare-Metal Environment Next we need to hook up Rust panic handlers, global allocators, and critical section handlers to the existing code base. This requires producing definitions for each of these which call into the existing firmware C functions. The Rust panic handler must be defined to handle unexpected states or failed assertions. A custom panic handler can be defined via the panic_handler attribute. This is specific to the target and should, in most cases, either point to an abort function for the current task/process, or a panic function provided by the environment. If an allocator is available in the firmware and the crate relies on the alloc crate, the Rust allocator can be hooked up by defining a global allocator implementing GlobalAlloc. If the crate in question relies on concurrency, critical sections will need to be handled. Rust's core or alloc crates do not directly provide a means for defining this, however the critical_section crate is commonly used to handle this functionality for a number of architectures, and can be extended to support more. It can be useful to hook up functions for logging as well. Simple wrappers around the firmware’s existing logging functions can expose these to Rust and be used in place of print or eprint and the like. A convenient option is to implement the Log trait. Fallible Allocations and alloc Rusts alloc crate normally assumes that allocations are infallible (that is, memory allocations won’t fail). However due to memory constraints this isn’t true in most bare-metal environments. Under normal circumstances Rust panics and/or aborts when an allocation fails; this may be acceptable behavior for some bare-metal environments, in which case there are no further considerations when using alloc. If there’s a clear justification or requirement for fallible allocations however, additional effort is required to ensure that either allocations can’t fail or that failures are handled. One approach is to use a crate that provides statically allocated fallible collections, such as the heapless crate, or dynamic fallible allocations like fallible_vec. Another is to exclusively use try_* methods such as Vec::try_reserve, which check if the allocation is possible. Rust is in the process of formalizing better support for fallible allocations, with an experimental allocator in nightly allowing failed allocations to be handled by the implementation. There is also the unstable cfg flag for alloc called no_global_oom_handling which removes the infallible methods, ensuring they are not used. Build Optimizations Building the Rust library with LTO is necessary to optimize for code size. The existing C/C++ code base does not need to be built with LTO when passing -C lto=true to rustc. Additionally, setting -C codegen-unit=1 results in further optimizations in addition to reproducibility. If using Cargo to build, the following Cargo.toml settings are recommended to reduce the output library size: [profile.release] panic = "abort" lto = true codegen-units = 1 strip = "symbols" # opt-level "z" may produce better results in some circumstances opt-level = "s" Passing the -Z remap-cwd-prefix=. flag to rustc or to Cargo via the RUSTFLAGS env var when building with Cargo to strip cwd path strings. In terms of performance, Rust demonstrates similar performance to C. The most relevant example may be the Rust binder Linux kernel driver, which found “that Rust binder has similar performance to C binder”. When linking LTO’d Rust staticlibs together with C/C++, it’s recommended to ensure a single Rust staticlib ends up in the final linkage, otherwise there may be duplicate symbol errors when linking. This may mean combining multiple Rust shims into a single static library by re-exporting them from a wrapper module. Memory Safety for Firmware, Today Using the process outlined in this blog post, You can begin to introduce Rust into large legacy firmware code bases immediately. Replacing security critical components with off-the-shelf open-source memory-safe implementations and developing new features in a memory safe language will lead to fewer critical vulnerabilities while also providing an improved developer experience. Special thanks to our colleagues who have supported and contributed to these efforts: Roger Piqueras Jover, Stephan Chen, Gil Cukierman, Andrew Walbran, and Erik Gilling
Posted by Dave Kleidermacher, VP Engineering, Android Security and Privacy, and Giles Hogben, Senior Director, Privacy Engineering, Android Your smartphone holds a lot of your personal information to help you get things done every day. On Android, we are seamlessly integrating the latest artificial intelligence (AI) capabilities, like Gemini as a trusted assistant – capable of handling life's essential tasks. As such, ensuring your privacy and security on Android is paramount. As a pioneer in responsible AI and cutting-edge privacy technologies like Private Compute Core and federated learning, we made sure our approach to the assistant experience with Gemini on Android is aligned with our existing Secure AI framework, AI Principles and Privacy Principles. We’ve always safeguarded your data with an integrated stack of world-class secure infrastructure and technology, delivering end-to-end protection in a way that only Google can. From privacy on-device when handling sensitive data to the world’s best cloud infrastructure, here are six key ways we keep your information private and protected. We don’t hand off to a third-party AI provider. Gemini Apps can help you with complex, personal tasks from creating workout routines to helping you get started on a resume. And it does the hard work for you all within Google's ecosystem. The core processing is done by Gemini within Google's secure cloud infrastructure and there are no handoffs to third-party chatbots or AI providers that you may not know or trust. On-device AI privacy for sensitive tasks, even when offline. For some AI features, like Summarize in Recorder on Pixel, that benefit from additional data privacy or processing efficiency, we utilize on-device AI. Gemini Nano, the first multimodal model designed to run on mobile devices, delivers on-device AI processing for some of your most sensitive tasks without relying on cloud connectivity. You can enjoy features like summarizing text even when you’re offline. World-class cloud infrastructure that is secure by default, private by design. For AI tasks that use data already in the cloud or have complex demands that require more processing power than what’s possible on-device, we use Google’s highly secure cloud infrastructure. Backed by Google’s world-class security and privacy infrastructure and processes, these data centers benefit from the same robust defenses that have kept Google products safe for billions of users for more than 20 years. So you can ask Gemini to find details in your lease agreement saved in your Google Drive and that data is protected by advanced monitoring against unauthorized access or misuse. We also enforce strict software supply chain controls to ensure that only approved and verified code runs in our cloud environment. Control how you interact with Gemini Apps. review your Gemini Apps chats, pin them, or delete them. Android also gives you control over how apps such as Gemini respond when your device is locked. Pioneering new privacy technologies. sealed computing technology, which can be used to process sensitive workloads for enhanced privacy in a secure cloud enclave. Sealed computing ensures no one, including Google, can access the data. It can be thought of as extending the user’s device and its security boundaries into our cloud infrastructure, providing a virtual smartphone in the sky. A new level of transparency. Transparency is in Android’s open-source DNA. Android binary transparency already allows anyone to verify the operating system code against a transparency log to ensure it hasn't been tampered with, much like matching fingerprint biometrics to confirm someone's identity. Binary transparency is extended in sealed computing environments to include reproducible builds. This ensures anyone can rebuild the trusted firmware base and verify that the resulting binaries match what is remotely attested as running in production and published in public transparency logs. Our Commitment to Safeguarding Your Data Just like with all Google products, we believe you should be able to enjoy the benefits of Android without having to worry about security and privacy. That's why we invest so much in building world-class protections into our products and services from the start. We look forward to continuing to make AI helpful and intuitive, allowing you to focus on what matters most, while we take care of safeguarding your data. Keep a lookout for more information about our end-to-end approach to AI privacy in an upcoming whitepaper.
Posted by Royal Hansen, VP, Privacy, Safety and Security Engineering, Google, and Phil Venables, VP, TI Security & CISO, Google Cloud The National Institute of Standards and Technology (NIST) just released three finalized standards for post-quantum cryptography (PQC) covering public key encapsulation and two forms of digital signatures. In progress since 2016, this achievement represents a major milestone towards standards development that will keep information on the Internet secure and confidential for many years to come. Here's a brief overview of what PQC is, how Google is using PQC, and how other organizations can adopt these new standards. You can also read more about PQC and Google's role in the standardization process in this 2022 post from Cloud CISO Phil Venables. What is PQC? quantum computers are still years away, but computer scientists have known for decades that a cryptographically relevant quantum computer (CRQC) could break existing forms of asymmetric key cryptography. PQC is the effort to defend against that risk, by defining standards and collaboratively implementing new algorithms that will resist attacks by both classical and quantum computers. You don't need a quantum computer to use post-quantum cryptography, or to prepare. All of the standards released by NIST today run on the classical computers we currently use. How is encryption at risk? Stored Data Through an attack known as Store Now, Decrypt Later, encrypted data captured and saved by attackers is stored for later decryption, with the help of as-yet unbuilt quantum computers Hardware Products Defenders must ensure that future attackers cannot forge a digital signature and implant compromised firmware, or software updates, on pre-quantum devices that are still in use For more information on CRQC-related risks, see our PQC Threat Model post. How can organizations prepare for PQC migrations? crypto agility best practices can be enacted anytime: Cryptographic inventory Understanding where and how organizations are using cryptography includes knowing what cryptographic algorithms are in use, and critically, managing key material safely and securely Key rotation Any new cryptographic system will require the ability to generate new keys and move them to production without causing outages. Just like testing recovery from backups, regularly testing key rotation should be part of any good resilience plan Abstraction layers You can use a tool like Tink, Google's multi-language, cross-platform open source library, designed to make it easy for non-specialists to use cryptography safely, and to switch between cryptographic algorithms without extensive code refactoring End-to-end testing PQC algorithms have different properties. Notably, public keys, ciphertexts, and signatures are significantly larger. Ensure that all layers of the stack function as expected Our 2022 paper "Transitioning organizations to post-quantum cryptography" provides additional recommendations to help organizations prepare and this recent post from the Google Security Blog has more detail on cryptographic agility and key rotation. Google's PQC Commitments testing PQC in Chrome in 2016 and has been using PQC to protect internal communications since 2022. In May 2024, Chrome enabled ML-KEM by default for TLS 1.3 and QUIC on desktop. ML-KEM is also enabled on Google servers. Connections between Chrome Desktop and Google's products, such as Cloud Console or Gmail, are already experimentally protected with post-quantum key exchange. Google engineers have contributed to the standards released by NIST, as well as standards created by ISO, and have submitted Internet Drafts to the IETF for Trust Expressions, Merkle Tree Certificates, and managing state for hash-based signatures. Tink, Google's open source library that provides secure and easy-to-use cryptographic APIs, already provides experimental PQC algorithms in C++, and our engineers are working with partners to produce formally verified PQC implementations that can be used at Google, and beyond. As we make progress on our own PQC transition, Google will continue to provide PQC updates on Google services, with updates to come from Android, Chrome, Cloud, and others.
Posted by Nataliya Stanetsky and Roger Piqueras Jover, Android Security & Privacy Team Cell-site simulators, also known as False Base Stations (FBS) or Stingrays, are radio devices that mimic real cell sites in order to lure mobile devices to connect to them. These devices are commonly used for security and privacy attacks, such as surveillance and interception of communications. In recent years, carriers have started reporting new types of abuse perpetrated with FBSs for the purposes of financial fraud. In particular, there is increasingly more evidence of the exploitation of weaknesses in cellular communication standards leveraging cell-site simulators to inject SMS phishing messages directly into smartphones. This method to inject messages entirely bypasses the carrier network, thus bypassing all the sophisticated network-based anti-spam and anti-fraud filters. Instances of this new type of fraud, which carriers refer to as SMS Blaster fraud, have been reported in Vietnam, France, Norway, Thailand and multiple other countries. GSMA’s Fraud and Security Group (FASG) has developed a briefing paper for GSMA members to raise awareness of SMS Blaster fraud and provide guidelines and mitigation recommendations for carriers, OEMs and other stakeholders. The briefing paper, available for GSMA members only, calls out some Android-specific recommendations and features that can help effectively protect our users from this new type of fraud. What are SMS Blasters? Smishing (SMS phishing) payloads into user devices. Fraudsters typically do this by driving around with portable FBS devices, and there have even been reports of fraudsters carrying these devices in their backpacks. lack of mutual authentication in 2G and force connections to be unencrypted, which enables a complete Person-in-the-Middle (PitM) position to inject SMS payloads. SMS Blasters are sold on the internet and do not require deep technical expertise. They are simple to set up and ready to operate, and users can easily configure them to imitate a particular carrier or network using a mobile app. Users can also easily configure and customize the SMS payload as well as its metadata, including for example the sender number. SMS Blasters are very appealing to fraudsters given their great return on investment. Spreading SMS phishing messages commonly yields a small return as it is very difficult to get these messages to fly undetected by sophisticated anti-spam filters. A very small subset of messages eventually reach a victim. In contrast, injecting messages with an SMS blaster entirely bypasses the carrier network and its anti-fraud and anti-spam filters, guaranteeing that all messages will reach a victim. Moreover, using an FBS the fraudster can control all fields of the message. One can make the message look like it is coming from the legitimate SMS aggregator of a bank, for example. In a recent attack that impacted hundreds of thousands of devices, the messages masqueraded as a health insurance notice. Although the type of abuse carriers are uncovering recently is financial fraud, there is precedent for the use of rogue cellular base stations to disseminate malware, for example injecting phishing messages with a url to download the payload. It is important to note that users are still vulnerable to this type of fraud as long as mobile devices support 2G, regardless of the status of 2G in their local carrier. Android protects users from phishing and fraud disable 2G at the modem level, a feature first adopted by Pixel. This option, if used, completely mitigates the risk from SMS Blasters. This feature has been available since Android 12 and requires devices to conform to Radio HAL 1.6+. Android also has an option to disable null ciphers as a key protection because it is strictly necessary for the 2G FBS to configure a null cipher (e.g. A5/0) in order to inject an SMS payload. This security feature launched with Android 14 requires devices that implement radio HAL 2.0 or above. Android also provides effective protections that specifically tackles SMS spam and phishing, regardless of whether the delivery channel is an SMS Blaster. Android has built-in spam protection that helps to identify and block spam SMS messages. Additional protection is provided through RCS for Business, a feature that helps users identify legitimate SMS messages from businesses. RCS for Business messages are marked with a blue checkmark, which indicates that the message has been verified by Google. We advocate leveraging a couple of important Google security features which are available on Android, namely Safe Browsing and Google Play Protect. As an additional layer of protection, Safe Browsing built-in on Android devices protects 5 billion devices globally and helps warn the users about potentially risky sites, downloads and extensions which could be phishing and malware-based. Let’s say a user decides to download an app from the Play store but the app contains code that is malicious or harmful, users are protected by Google Play Protect which is a security feature that scans apps for malware and other threats. It also warns users about potentially harmful apps before they are installed. Android’s commitment to security and privacy Thank you to all our colleagues who actively contribute to Android’s efforts in tackling fraud and FBS threats, and special thanks to those who contributed to this blog post: Yomna Nasser, Gil Cukierman, Il-Sung Lee, Eugene Liderman, Siddarth Pandit.
Posted by Will Harris, Chrome Security Team Cybercriminals using cookie theft infostealer malware continue to pose a risk to the safety and security of our users. We already have a number of initiatives in this area including Chrome’s download protection using Safe Browsing, Device Bound Session Credentials, and Google’s account-based threat detection to flag the use of stolen cookies. Today, we’re announcing another layer of protection to make Windows users safer from this type of malware. Like other software that needs to store secrets, Chrome currently secures sensitive data like cookies and passwords using the strongest techniques the OS makes available to us - on macOS this is the Keychain services, and on Linux we use a system provided wallet such as kwallet or gnome-libsecret. On Windows, Chrome uses the Data Protection API (DPAPI) which protects the data at rest from other users on the system or cold boot attacks. However, the DPAPI does not protect against malicious applications able to execute code as the logged in user - which infostealers take advantage of. In Chrome 127 we are introducing a new protection on Windows that improves on the DPAPI by providing Application-Bound (App-Bound) Encryption primitives. Rather than allowing any app running as the logged in user to access this data, Chrome can now encrypt data tied to app identity, similar to how the Keychain operates on macOS. We will be migrating each type of secret to this new system starting with cookies in Chrome 127. In future releases we intend to expand this protection to passwords, payment data, and other persistent authentication tokens, further protecting users from infostealer malware. How it works App-Bound Encryption relies on a privileged service to verify the identity of the requesting application. During encryption, the App-Bound Encryption service encodes the app's identity into the encrypted data, and then verifies this is valid when decryption is attempted. If another app on the system tries to decrypt the same data, it will fail. Because the App-Bound service is running with system privileges, attackers need to do more than just coax a user into running a malicious app. Now, the malware has to gain system privileges, or inject code into Chrome, something that legitimate software shouldn't be doing. This makes their actions more suspicious to antivirus software – and more likely to be detected. Our other recent initiatives such as providing event logs for cookie decryption work in tandem with this protection, with the goal of further increasing the cost and risk of detection to attackers attempting to steal user data. Enterprise Considerations Since malware can bypass this protection by running elevated, enterprise environments that do not grant their users the ability to run downloaded files as Administrator are particularly helped by this protection - malware cannot simply request elevation privilege in these environments and is forced to use techniques such as injection that can be more easily detected by endpoint agents. App-Bound Encryption strongly binds the encryption key to the machine, so will not function correctly in environments where Chrome profiles roam between multiple machines. We encourage enterprises who wish to support roaming profiles to follow current best practices. If it becomes necessary, App-Bound encryption can be configured using the new ApplicationBoundEncryptionEnabled policy. To further help detect any incompatibilities, Chrome emits an event when a failed verification occurs. The Event is ID 257 from 'Chrome' source in the Application log. Conclusion App-Bound Encryption increases the cost of data theft to attackers and also makes their actions far noisier on the system. It helps defenders draw a clear line in the sand for what is acceptable behavior for other apps on the system. As the malware landscape continually evolves we are keen to continue engaging with others in the security community on improving detections and strengthening operating system protections, such as stronger app isolation primitives, for any bypasses.
Posted by Jasika Bawa, Lily Chen, and Daniel Rubery, Chrome Security Last year, we introduced a redesign of the Chrome downloads experience on desktop to make it easier for users to interact with recent downloads. At the time, we mentioned that the additional space and more flexible UI of the new Chrome downloads experience would give us new opportunities to make sure users stay safe when downloading files. Adding context and consistency to download warnings The redesigned Chrome downloads experience gives us the opportunity to provide even more context when Chrome protects a user from a potentially malicious file. Taking advantage of the additional space available in the new downloads UI, we have replaced our previous warning messages with more detailed ones that convey more nuance about the nature of the danger and can help users make more informed decisions. Our legacy, space-constrained warning vs. our redesigned one We also made download warnings more understandable by introducing a two-tier download warning taxonomy based on AI-powered malware verdicts from Google Safe Browsing. These are: Suspicious files (lower confidence verdict, unknown risk of user harm) Dangerous files (high confidence verdict, high risk of user harm) These two tiers of warnings are distinguished by iconography, color, and text, to make it easy for users to quickly and confidently make the best choice for themselves based on the nature of the danger and Safe Browsing's level of certainty. Overall, these improvements in clarity and consistency have resulted in significant changes in user behavior, including fewer warnings bypassed, warnings heeded more quickly, and all in all, better protection from malicious downloads. Differentiation between suspicious and dangerous warnings Protecting more downloads with automatic deep scans Users who have opted-in to the Enhanced Protection mode of Safe Browsing in Chrome are prompted to send the contents of suspicious files to Safe Browsing for deep scanning before opening the file. Suspicious files are a small fraction of overall downloads, and file contents are only scanned for security purposes and are deleted shortly after a verdict is returned. We've found these additional scans to have been extraordinarily successful – they help catch brand new malware that Safe Browsing has not seen before and dangerous files hosted on brand new sites. In fact, files sent for deep scanning are over 50x more likely to be flagged as malware than downloads in the aggregate. Since Enhanced Protection users have already agreed to send a small fraction of their downloads to Safe Browsing for security purposes in order to benefit from additional protections, we recently moved towards automatic deep scans for these users rather than prompting each time. This will protect users from risky downloads while reducing user friction. An automatic deep scan resulting in a warning Staying ahead of attackers who hide in encrypted archives Not all deep scans can be conducted automatically. A current trend in cookie theft malware distribution is packaging malicious software in an encrypted archive – a .zip, .7z, or .rar file, protected by a password – which hides file contents from Safe Browsing and other antivirus detection scans. In order to combat this evasion technique, we have introduced two protection mechanisms depending on the mode of Safe Browsing selected by the user in Chrome. Attackers often make the passwords to encrypted archives available in places like the page from which the file was downloaded, or in the download file name. For Enhanced Protection users, downloads of suspicious encrypted archives will now prompt the user to enter the file's password and send it along with the file to Safe Browsing so that the file can be opened and a deep scan may be performed. Uploaded files and file passwords are deleted a short time after they're scanned, and all collected data is only used by Safe Browsing to provide better download protections. Enter a file password to send an encrypted file for a malware scan For those who use Standard Protection mode which is the default in Chrome, we still wanted to be able to provide some level of protection. In Standard Protection mode, downloading a suspicious encrypted archive will also trigger a prompt to enter the file's password, but in this case, both the file and the password stay on the local device and only the metadata of the archive contents are checked with Safe Browsing. As such, in this mode, users are still protected as long as Safe Browsing had previously seen and categorized the malware. The Chrome Security team works closely with Safe Browsing, Google's Threat Analysis Group, and security researchers from around the world to gain insights into the techniques attackers are using. Using these insights, we are constantly adapting our product strategy to stay ahead of attackers and to keep users safe while downloading files in Chrome. We look forward to sharing more in the future!
Posted by Chrome Root Program, Chrome Security Team Update (09/10/2024): In support of more closely aligning Chrome’s planned compliance action with a major release milestone (i.e., M131), blocking action will now begin on November 12, 2024. This post has been updated to reflect the date change. Website operators who will be impacted by the upcoming change can explore continuity options offered by Entrust. Entrust has expressed its commitment to continuing to support customer needs, and is best positioned to describe the available options for website operators. Learn more at Entrust’s TLS Certificate Information Center. .code { font-family: "Courier New", Courier, monospace; font-size: 11.8px; font-weight: bold; background-color: #f4f4f4; padding: 10px; border: 1px solid #ccc; border-radius: 2px; white-space: pre-wrap; display: inline-block; line-height: 12px; } .highlight { color: red; } The Chrome Security Team prioritizes the security and privacy of Chrome’s users, and we are unwilling to compromise on these values. The Chrome Root Program Policy states that CA certificates included in the Chrome Root Store must provide value to Chrome end users that exceeds the risk of their continued inclusion. It also describes many of the factors we consider significant when CA Owners disclose and respond to incidents. When things don’t go right, we expect CA Owners to commit to meaningful and demonstrable change resulting in evidenced continuous improvement. Over the past several years, publicly disclosed incident reports highlighted a pattern of concerning behaviors by Entrust that fall short of the above expectations, and has eroded confidence in their competence, reliability, and integrity as a publicly-trusted CA Owner. In response to the above concerns and to preserve the integrity of the Web PKI ecosystem, Chrome will take the following actions. Upcoming change in Chrome 131 and higher: TLS server authentication certificates validating to the following Entrust roots whose earliest Signed Certificate Timestamp (SCT) is dated after November 11, 2024 (11:59:59 PM UTC), will no longer be trusted by default. CN=Entrust Root Certification Authority - EC1,OU=See www.entrust.net/legal-terms+OU=(c) 2012 Entrust, Inc. - for authorized use only,O=Entrust, Inc.,C=US CN=Entrust Root Certification Authority - G2,OU=See www.entrust.net/legal-terms+OU=(c) 2009 Entrust, Inc. - for authorized use only,O=Entrust, Inc.,C=US CN=Entrust.net Certification Authority (2048),OU=www.entrust.net/CPS_2048 incorp. by ref. (limits liab.)+OU=(c) 1999 Entrust.net Limited,O=Entrust.net CN=Entrust Root Certification Authority,OU=www.entrust.net/CPS is incorporated by reference+OU=(c) 2006 Entrust, Inc.,O=Entrust, Inc.,C=US CN=Entrust Root Certification Authority - G4,OU=See www.entrust.net/legal-terms+OU=(c) 2015 Entrust, Inc. - for authorized use only,O=Entrust, Inc.,C=US CN=AffirmTrust Commercial,O=AffirmTrust,C=US CN=AffirmTrust Networking,O=AffirmTrust,C=US CN=AffirmTrust Premium,O=AffirmTrust,C=US CN=AffirmTrust Premium ECC,O=AffirmTrust,C=US TLS server authentication certificates validating to the above set of roots whose earliest SCT is on or before November 11, 2024 (11:59:59 PM UTC), will be unaffected by this change. This approach attempts to minimize disruption to existing subscribers using a recently announced Chrome feature to remove default trust based on the SCTs in certificates. explicitly trust any of the above certificates on a platform and version of Chrome relying on the Chrome Root Store (e.g., explicit trust is conveyed through a Group Policy Object on Windows), the SCT-based constraints described above will be overridden and certificates will function as they do today. To further minimize risk of disruption, website operators are encouraged to review the “Frequently Asked Questions" listed below. Why is Chrome taking action? When will this action happen? Chrome 131 and greater on Windows, macOS, ChromeOS, Android, and Linux. Apple policies prevent the Chrome Certificate Verifier and corresponding Chrome Root Store from being used on Chrome for iOS. What is the user impact of this action? similar to this one. Certificates issued by other CAs are not impacted by this action. How can a website operator tell if their website is affected? Use the Chrome Certificate Viewer Navigate to a website (e.g., https://www.google.com) Click the “Tune" icon Click “Connection is Secure" Click “Certificate is Valid" (the Chrome Certificate Viewer will open) Website owner action is not required, if the “Organization (O)” field listed beneath the “Issued By" heading does not contain “Entrust" or “AffirmTrust”. Website owner action is required, if the “Organization (O)” field listed beneath the “Issued By" heading contains “Entrust" or “AffirmTrust”. What does an affected website operator do? must be completed before the existing certificate(s) expire if expiry is planned to take place after November 11, 2024 (11:59:59 PM UTC). While website operators could delay the impact of blocking action by choosing to collect and install a new TLS certificate issued from Entrust before Chrome’s blocking action begins on November 12, 2024, website operators will inevitably need to collect and install a new TLS certificate from one of the many other CAs included in the Chrome Root Store. Can I test these changes before they take effect? How to: Simulate an SCTNotAfter distrust 1. Close all open versions of Chrome 2. Start Chrome using the following command-line flag, substituting variables described below with actual values 3. Evaluate the effects of the flag with test websites Example: The following command will simulate an SCTNotAfter distrust with an effective date of April 30, 2024 11:59:59 PM GMT for all of the Entrust trust anchors included in the Chrome Root Store. The expected behavior is that any website whose certificate is issued before the enforcement date/timestamp will function in Chrome, and all issued after will display an interstitial. --test-crs-constraints=02ED0EB28C14DA45165C566791700D6451D7FB56F0B2AB1D3B8EB070E56EDFF5, 43DF5774B03E7FEF5FE40D931A7BEDF1BB2E6B42738C4E6D3841103D3AA7F339, 6DC47172E01CBCB0BF62580D895FE2B8AC9AD4F873801E0C10B9C837D21EB177, 73C176434F1BC6D5ADF45B0E76E727287C8DE57616C1E6E6141A2B2CBC7D8E4C, DB3517D1F6732A2D5AB97C533EC70779EE3270A62FB4AC4238372460E6F01E88, 0376AB1D54C5F9803CE4B2E201A0EE7EEF7B57B636E8A93C9B8D4860C96F5FA7, 0A81EC5A929777F145904AF38D5D509F66B5E2C58FCDB531058B0E17F3F0B41B, 70A73F7F376B60074248904534B11482D5BF0E698ECC498DF52577EBF2E93B9A, BD71FDF6DA97E4CF62D1647ADD2581B07D79ADF8397EB4ECBA9C5E8488821423 :sctnotafter=1714521599 Illustrative Command (on Windows): Illustrative Command (on macOS): Note: If copy and pasting the above commands, ensure no line-breaks are introduced. here. I use Entrust certificates for my internal enterprise network, do I need to do anything? Beginning in Chrome 127, enterprises can override Chrome Root Store constraints like those described for Entrust in this blog post by installing the corresponding root CA certificate as a locally-trusted root on the platform Chrome is running (e.g., installed in the Microsoft Certificate Store as a Trusted Root CA). How do enterprises add a CA as locally-trusted? Customer organizations should defer to platform provider guidance. What about other Google products? Other Google product team updates may be made available in the future.
You can subscribe to this RSS to get more information