Since Google announced the Chromium project in 2008, we have been excited to build on the great foundations of open-source web browsers and contribute to the continued development of a rich web platform. Today, Chromium is used by hundreds of different projects globally, including big browsers like Chrome, home electronics from LG, application frameworks like Electron, and even custom applications like Bloomberg terminals and SpaceX capsule control software. In 2024, Google made over 100,000 commits to Chromium, accounting for ~94 percent of contributions. While we have no intention of reducing this investment, we continue to welcome others stepping up to invest more. That is why we are pleased to support the Linux Foundation and the launch of the Supporters of Chromium-based Browsers. The goal of this initiative is to foster a sustainable environment of open-source contributions towards the health of the Chromium ecosystem and to financially support a community of developers who want to contribute to the project, encouraging widespread support and continued technological progress for Chromium embedders. The Supporters of Chromium-based Browsers fund will be managed by the Linux Foundation, following its long-established practices for open governance, prioritizing transparency, inclusivity, and community-driven development. We're thrilled to have Meta, Microsoft, and Opera on board as the initial members to pledge their support. We welcome this additional investment into Chromium's commons, and we look forward to working with the other members of the Supporters of Chromium-based Browsers to ensure that it meets the needs of the wider Chromium community. At the same time, we remain committed to being the responsible steward of the Chromium project and to the massive investment necessary to keep Chromium working well for the entire web industry. Posted by Shruthi Sreekanta
In October 2020, Chrome enabled HTTP/3 by default. HTTP/3 (RFC 9114) runs over IETF QUIC (RFC 9000). Default-enabling HTTP/3 in Chrome resulted in improved performance compared not only to HTTP/1 and HTTP/2, but also to Google QUIC. Benefits included reduced Google Search latency and fewer rebuffers for YouTube. The journey to optimizing performance did not end when HTTP/3 was default enabled. Recent advancements include the implementation of the HTTP/3 ORIGIN frame (RFC 9412) and Server's Preferred Address (RFC 9000 section 9.6). The former enhances connection coalescing, while the latter reduces a connection's round trip time (RTT). Both features have been enabled by default in M131, which was released to Stable on 11/19.

ORIGIN Frame

Connection coalescing lets Chrome reuse an existing connection for additional domains instead of opening a new one. Avoiding a new connection matters because establishing one has several costs:

- DNS resolution adds latency unless cached locally in Chrome's DNS cache.
- Both client and server must send multiple packets to complete a QUIC handshake.
- TLS necessitates CPU-intensive asymmetric cryptography on both ends.
- The congestion controller begins in its default state, potentially leading to under- or over-sending.
- 0-RTT might fail, and non-safe requests aren't sent via 0-RTT.
- More connections consume more memory.
- HTTP priorities (RFC 9218) are only effective if there are multiple simultaneous responses to send on a single connection.

The HTTP/3 ORIGIN frame (RFC 9412) enables a server to indicate what domains it would like to pool onto a connection. Additionally, once the frame is received, it indicates other domains should not be pooled onto that connection, even if they are in the certificate.

Server's Preferred Address

Server's Preferred Address allows a QUIC server to ask the client to migrate the connection to an alternative server address after the handshake. Media CDN has widely enabled advertising an alternative address, and we expect more servers to adopt it soon. Testing has shown that this migration is successful over 99% of the time in Chrome and reduces average RTT by 40-80%. Posted by Fan Yang and Ian Swett
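To make the ORIGIN-frame pooling rule above concrete, here is a minimal JavaScript sketch of the decision a client makes before coalescing a request onto an existing HTTP/3 connection. The connection fields and the resolveAddress helper are illustrative assumptions, not Chrome's internals:

    // Illustrative sketch: may a request for `host` reuse this connection?
    function canCoalesce(conn, host) {
      if (!conn.certDomains.includes(host)) {
        return false; // the server certificate must always cover the domain
      }
      if (conn.originSet !== null) {
        // An ORIGIN frame was received: pool only the advertised domains,
        // even though the certificate may cover more.
        return conn.originSet.includes(host);
      }
      // No ORIGIN frame: fall back to certificate-based coalescing, which
      // conventionally also requires resolving to the same server address.
      return conn.remoteAddress === resolveAddress(host);
    }

Avoiding an unnecessary new connection sidesteps every cost in the list above, from the extra handshake round trips to the cold congestion controller.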
The Fast and the Curious post covers how Chrome achieved best-in-class Speedometer scores on mobile devices, resulting in faster and smoother web experiences for Android users. Chrome has always been about speed. Whether it's loading pages quickly, running complex web apps smoothly, or delivering a seamless browsing experience, performance is at the heart of our browser. And we're always looking for ways to make Chrome even faster. Over the last two years, we have been hard at work on a number of performance improvements for Android devices. We're excited to share some of the progress we've made.

Speedometer on Android

One of the key ways we measure Chrome's performance is the Speedometer benchmark. This benchmark is developed in collaboration with other major web browser engines and measures how quickly Chrome can complete interactions with web pages, including parsing/rendering HTML or CSS and running JavaScript. Since the release of Chrome M112, we've seen a significant increase in Speedometer 2.1 scores on Android devices [1]. In fact, on many devices, scores more than doubled, with the newest Snapdragon® 8 Elite Mobile Platform setting new records for Speedometer performance on mobile devices! These huge accomplishments are a testament to the work not only of the Chrome and Android teams, but also our silicon and SoC partners.

How Did We Do It?

- Build optimizations: We've made a number of changes to the way Chrome is built, which has resulted in faster code execution tuned to modern premium Android devices and SoCs.
- V8 and Blink improvements: Many improvements to the JavaScript engine (V8) and the rendering engine (Blink) have further boosted performance.
- Scheduling, OS and SoCs: We worked closely with Android partners to optimize the way Chrome interacts with the operating system and its thread scheduling to make the best use of the silicon on the devices.

Build optimizations

We now ship a separate higher-performance build targeting premium Android devices via the Google Play Store. While we still ship a more binary-size-constrained build to other devices, this approach allowed us to land some of those modern optimizations into the new premium build: By targeting 64-bit Arm instead of 32-bit Arm, we can make use of more efficient Arm instruction set features and larger 64-bit operations. Since binary size is less relevant on premium devices with large disks and sufficient memory, we can now compile C++ code optimized for speed (-O2 / -O3) rather than size (-Oz). Furthermore, we tweaked the inlining thresholds used by the compiler to enable more inlining in hot code (within and across modules), while updating the model and policy used by another compiler pass (MLGO) to reduce inlining in cold code. We now also apply profile-guided optimization (PGO) techniques to the build to further improve the code layout and optimization level for hot code. Finally, we improved cross-function code ordering by aligning Chrome's orderfile generation with the new 64-bit build. We also now include Speedometer 3, the latest version of the industry-standard browser speed benchmark, in the workloads used to generate the orderfile.

V8 and Blink improvements

We now utilize an optimized fast-path HTML parser to parse innerHTML attributes. V8 launched its Sparkplug compiler tier, a super-fast baseline compiler that sits right above its Ignition interpreter and generates non-optimized code very quickly. Later, V8 also launched Maglev, a new mid-tier compiler that generates semi-optimized code.
It takes longer to do so than Sparkplug, but much less time than TurboFan, V8's ultra-optimizing compiler tier. Altogether, this new tiering hierarchy allows V8 to tier up more gradually, improving both performance and power consumption. We tuned our heuristics that decide when garbage collection occurs, targeting times when the rendering engine is idle or when users navigate away from pages. We landed many other incremental optimizations, e.g. to V8 and our parsing, style, layout, and text rendering engines.

Scheduling and OS

Our work with OS and silicon partners helped the newest Snapdragon 8 Elite Mobile Platform achieve a 60-80% improvement in Speedometer 3.0 compared to its predecessor, resulting in class-leading web performance. This collaboration also highlighted important bottlenecks in Chrome's code, such as the need for improved PGO and opportunities in V8.

Why do these improvements matter?

[Chart: Google Doc (frame count)]

[1] Speedometer 3 was released during M122, so results from Speedometer 2.1 are provided for a full picture. Measurements shown in graphs were taken on a Pixel Tablet. Posted by Eric Seckler, Software Engineer, Chrome
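As a rough illustration of the tiering pipeline described in this post, the following snippet can be run in V8's developer shell (d8). The %-prefixed runtime functions are internal testing hooks that require the --allow-natives-syntax flag and vary between V8 versions, so treat this as a sketch rather than a supported API:

    // Run with: d8 --allow-natives-syntax tiering.js
    function add(a, b) { return a + b; }

    // Let the Ignition interpreter run the function and gather type feedback.
    %PrepareFunctionForOptimization(add);
    add(1, 2);
    add(3, 4);

    // Request tier-up on the next call instead of waiting for the heuristics.
    %OptimizeFunctionOnNextCall(add);
    add(5, 6);

    // Prints a bitfield describing which tier the function currently runs in.
    print(%GetOptimizationStatus(add));

In normal execution no flags are needed: V8's heuristics move hot functions up through Sparkplug, Maglev, and TurboFan automatically as call counts and feedback accumulate.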
Last October, we introduced a new identity model on iOS (Chrome 118) and are excited to bring it to Android devices and Desktop soon. This model aligns closely with how you already use other Google apps and services. When we first launched Chrome sync back in 2009, powered by the Google Account, our goal then, as it is today, was simple: help users access their bookmarks, passwords, tabs and more, across devices. At the time, this was best achieved by a sync model: synchronizing device data with your account, which required both signing in and enabling sync. Over the years, the digital world has changed and user expectations have evolved significantly. Cloud services emerged in 2010, and over the past 15 years, the concept of having a digital identity became more prevalent, especially through smartphones and mobile apps. Today, users increasingly expect to just sign in to get access to their stuff and sign out to keep it safe. Given this evolution of technology and user norms, we're continuing to make progress on transforming our legacy sync model into one that more seamlessly meets the expectations users have today. From the point of signing in to Chrome, you'll get access to your saved passwords, addresses, and other data from your Google Account. Where relevant, we'll offer you the choice to sign in to Chrome for a customized browsing experience on any device. For example, you can sign in and start to plan a trip on your phone during your commute, and then seamlessly finish it up on any device. Send tabs between your devices, find your bookmarks and use autogenerated passwords with ease. As always, you have control - we strive to provide an excellent browser experience regardless of whether you choose to sign in or not. Additionally, saving your history and open tabs to your account remains a separate opt-in after signing in to Chrome. Stay tuned for updates on this change - already on iOS and coming to Android and Desktop soon. Posted by Claire Charron, Chrome Product Manager
ChromeOS will soon be developed on large portions of the Android stack to bring Google AI, innovations, and features faster to users. Over the last 13 years, we've evolved ChromeOS to deliver a secure, fast, and feature-rich Chromebook experience for millions of students and teachers, families, gamers, and businesses all over the world. With our recent announcements around new features powered by Google AI and Gemini, Chromebooks now give us the opportunity to put powerful tools in the hands of more people to help with everyday tasks. To continue rolling out new Google AI features to users at a faster and even larger scale, we'll be embracing portions of the Android stack, like the Android Linux kernel and Android frameworks, as part of the foundation of ChromeOS. We already have a strong history of collaboration, with Android apps available on ChromeOS and the start of unifying our Bluetooth stacks as of ChromeOS 122. Bringing the Android-based tech stack into ChromeOS will allow us to accelerate the pace of AI innovation at the core of ChromeOS, simplify engineering efforts, and help different devices like phones and accessories work better together with Chromebooks. At the same time, we will continue to deliver the unmatched security, consistent look and feel, and extensive management capabilities that ChromeOS users, enterprises, and schools love. These improvements in the tech stack are starting now but won't be ready for consumers for quite some time. When they are, we'll provide a seamless transition to the updated experience. In the meantime, we remain extremely excited about our continued progress on ChromeOS, with no change to our regular software updates and new innovations. Chromebooks will continue to deliver a great experience for our millions of customers, users, developers and partners worldwide. We've never been more excited about the future of ChromeOS. Posted by Prajakta Gudadhe, Senior Director, Engineering, ChromeOS & Alexander Kuscher, Senior Director, Product Management, ChromeOS
Today's The Fast and the Curious post explores how Chrome achieved the highest score on the new Speedometer 3.0, an upgraded browser benchmarking tool to optimize the performance of web applications. Try out Chrome today! Speedometer 3.0 is a recently published benchmark for measuring browser performance that was created as an industry collaboration between companies like Google, Apple, Mozilla, Intel, and Microsoft. This benchmark helped us identify areas in which we could optimize Chrome to deliver a faster browser experience to all our users. Here's a closer look at how we further optimized Chrome to achieve the highest score ever on Speedometer 3, by carefully tracking its recent performance over time as the updated benchmark was being developed. Since the inception of Speedometer 3 in May 2022, we've driven a 72% increase in Chrome's Speedometer score - translating into performance gains for our users.

We previously shared how we optimized innerHTML using specialized fast paths for parsing, an implementation that also made its way into WebKit. Some workloads in Speedometer 3 use DOMParser, so we extended the same optimization for another 1% gain. We worked with the HarfBuzz maintainer to also optimize how Chrome renders AAT fonts such as those used by Apple macOS system fonts. Text starts as a processed stream of Unicode characters that is then transformed into a glyph stream that is then run through a state machine defined in the AAT font. The optimization allows us to determine more quickly whether glyphs actually participate in the rules for the state machine, leading to speed-ups when processing text using AAT.

Picking the right code to focus on

An important strategy for achieving high performance is tiering up code: picking the right code to further optimize within the engine. Intel contributed profile-guided tiering to V8 that remembers tiering decisions from the past, such that if a function was stably tiered up in the past, we eagerly tier it up on future runs.

Improving garbage collection

Another area of changes that drove around 3% progression on Speedometer 3 was improvements around garbage collection. V8's garbage collector has a long history of making use of renderer idle time to avoid interfering with actual application code. The recent changes follow this spirit by extending existing mechanisms to prefer garbage collection in idle time on otherwise very active renderers where possible. Specifically, DOM finalization code that is run on reclaiming objects is now also run in idle time. Previously, such operations would compete with regular application code over CPU resources. In addition, V8 now supports a much more compact layout for objects that wrap DOM elements, i.e., all objects that are exposed to JavaScript frameworks. The compact layout reduces memory pressure and results in less time spent on garbage collection. Posted by Thomas Nattestad, Chrome Product Manager
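For reference, the two parsing entry points mentioned above look like this from a page's perspective; per the post, suitable inputs to both now hit the same fast-path HTML parser (the exact fast-path conditions are engine internals):

    const markup = "<ul><li>one</li><li>two</li></ul>";

    // 1. Parsing via innerHTML on an element:
    const el = document.createElement("div");
    el.innerHTML = markup;

    // 2. Parsing via DOMParser, which produces a detached document:
    const doc = new DOMParser().parseFromString(markup, "text/html");
    console.log(doc.body.firstChild.childElementCount); // 2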
On the Chrome team, we believe it's not sufficient to be fast most of the time, we have to be fast all of the time. Today's The Fast and the Curious post explores how we contributed to Core Web Vitals by surveying the field data of Chrome responding to user interactions across all websites, ultimately improving performance of the web. As billions of people turn to the web to get things done every day, the browser becomes responsible for hosting a multitude of apps at once, and resource contention becomes a challenge. The multi-process Chrome browser contends for multiple resources: CPU and memory of course, but also its own queues of work between its internal services (in this article, the network service). This is why we've been focused on identifying and fixing slow interactions from Chrome users' field data, which is the authoritative source when it comes to real user experiences. We gather this field data by recording anonymized Perfetto traces on Chrome Canary, and report them using a privacy-preserving filter. When looking at field data of slow interactions, one particular cause caught our attention: recurring synchronous calls to fetch the current site's cookies from the network service. Let's dive into some history.

Cookies under an evolving web

Cookies have long been set from JavaScript like this:

    document.cookie = "user=Alice;color=blue"

And later retrieved like this:

    // Assuming a `getCookie` helper method:
    getCookie("user", document.cookie)

Its implementation was simple in single-process browsers, which kept the cookie jar in memory. Over time, browsers became multi-process, and the process hosting the cookie jar became responsible for answering more and more queries. Because the web spec requires JavaScript to fetch cookies synchronously, however, answering each document.cookie query is a blocking operation. The operation itself is very fast, so this approach was generally fine, but under heavy load scenarios where multiple websites are requesting cookies (and other resources) from the network service, the queue of requests could get backed up. We discovered through field traces of slow interactions that some websites were triggering inefficient scenarios with cookies being fetched multiple times in a row. We landed additional metrics to measure how often a GetCookieString() IPC was redundant (same value returned as last time) across all navigations. We were astonished to discover that 87% of cookie accesses were redundant and that, in some cases, this could happen hundreds of times per second. The simple design of document.cookie was backfiring as JavaScript on the web was using it like a local value when it was really a remote lookup. Was this a classic computer science case of caching?! Not so fast! The web spec allows collaborating domains to modify each other's cookies. Hence, a simple cache per renderer process didn't work, as it would have prevented writes from propagating between such sites (causing stale cookies and, for example, unsynchronized carts in ecommerce applications).

A new paradigm: Shared Memory Versioning

We solved this with a new paradigm which we called Shared Memory Versioning. The idea is that each value of document.cookie is now paired with a monotonically increasing version. Each renderer caches its last read of document.cookie alongside that version. The network service hosts the version of each document.cookie in shared memory. Renderers can thus tell whether they have the latest version without having to send an inter-process query to the network service.
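A minimal sketch of the renderer-side read path, with illustrative names rather than Chrome's actual classes (sharedVersionBuffer and ipcGetCookieString stand in for the shared-memory mapping and the synchronous IPC):

    // Illustrative sketch of Shared Memory Versioning, not Chrome's code.
    let cache = { version: -1, value: "" };

    function readDocumentCookie() {
      // Cheap read from shared memory: the version last published by the
      // network service for this document's cookies.
      const latest = sharedVersionBuffer.load();
      if (latest === cache.version) {
        return cache.value; // cache hit: no inter-process query needed
      }
      // Stale: perform the blocking IPC once, then cache value + version.
      const { value, version } = ipcGetCookieString();
      cache = { version, value };
      return value;
    }

Writes still go through the network service, which bumps the shared version so that every renderer's next read notices its cache is stale.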
This reduced cookie-related inter-process messages by 80% and made document.cookie accesses 60% faster 🥳, helping more sites pass Core Web Vitals 🥳. All of this adds up to a more seamless web for users.

[Chart: Timeline of the weighted average of the slowest interactions across the web on Chrome as this was released to 1% (Nov), 50% (Dec), and then all users (Feb).]

Onward to a seamless web! By Gabriel Charette, Olivier Li Shing Tat-Dupuis, Carlos Caballero Grolimund, and François Doray, from the Chrome engineering team
Update (10/10/2024): We've started disabling extensions still using Manifest V2 in Chrome stable. Read more details in the MV2 support timeline documentation.

In November 2023, we shared a timeline for the phasing out of Manifest V2 extensions in Chrome. Based on the progress and feedback we've seen from the community, we're now ready to roll out these changes as scheduled. We've always been clear that the goal of Manifest V3 is to protect existing functionality while improving the security, privacy, performance and trustworthiness of the extension ecosystem as a whole. We appreciate the collaboration and feedback from the community that has allowed us - and continues to allow us - to constantly improve the extensions platform.

Addressing community feedback

We understand migrations of this magnitude can be challenging, which is why we've listened to developer feedback and spent years refining Manifest V3 to support the innovation happening across the extensions community. This included adding support for user scripts and introducing offscreen documents to allow extensions to use DOM APIs from a background context. Based on input from the extension community, we also increased the number of rulesets for declarativeNetRequest, allowing extensions to bundle up to 330,000 static rules and dynamically add a further 30,000. You can find more detail in our content filtering guide. This month, we made the transition even easier for extensions using declarativeNetRequest with the launch of review skipping for safe rule updates. If the only changes are safe modifications to an extension's static rule list for declarativeNetRequest, Chrome will approve the update in minutes. Coupled with the launch of version roll back last month, developers now have greater control over how their updates are deployed.

Ecosystem progress

After we addressed the top issues and feature gaps blocking migration last year, we saw an acceleration of extensions migrating successfully to Manifest V3. Over the past year, we've even been able to invite some developers - such as Eyeo, the makers of Adblock Plus - and GDE members like Matt Frisbie to share their experiences and insights with the community through guest posts and YouTube videos. Now, over 85% of actively maintained extensions in the Chrome Web Store are running Manifest V3, and the top content filtering extensions all have Manifest V3 versions available - with options for users of AdBlock, Adblock Plus, uBlock Origin and AdGuard.

What to expect next

Starting on June 3 on the Chrome Beta, Dev and Canary channels, if users still have Manifest V2 extensions installed, some will start to see a warning banner when visiting their extension management page - chrome://extensions - informing them that some (Manifest V2) extensions they have installed will soon no longer be supported. At the same time, extensions with the Featured badge that are still using Manifest V2 will lose their badge. This will be followed gradually in the coming months by the disabling of those extensions. Users will be directed to the Chrome Web Store, where they will be recommended Manifest V3 alternatives for their disabled extension. For a short time after the extensions are disabled, users will still be able to turn their Manifest V2 extensions back on, but over time, this toggle will go away as well. Like any big launch, all these changes will begin in pre-stable channel builds of Chrome first – Chrome Beta, Dev, and Canary.
The changes will be rolled out over the coming months to Chrome Stable, with the goal of completing the transition by the beginning of next year. Enterprises using the ExtensionManifestV2Availability policy will be exempt from any browser changes until June 2025. We’ve shared more information about the process in our recent Chrome extensions Google I/O talk. If you have any additional questions, don’t hesitate to reach out via the Chromium extensions mailing list. Posted by David Li, Product Manager, Chrome Extensions
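As a concrete example of the declarativeNetRequest API discussed in this post, here is a minimal Manifest V3 snippet that installs one dynamic block rule. The rule contents are illustrative only, and it assumes the extension declares the "declarativeNetRequest" permission in its manifest:

    // Replace-then-add keeps the rule id stable across repeated updates.
    chrome.declarativeNetRequest.updateDynamicRules({
      removeRuleIds: [1],
      addRules: [{
        id: 1,
        priority: 1,
        action: { type: "block" },
        condition: {
          urlFilter: "||ads.example^",
          resourceTypes: ["script", "xmlhttprequest"]
        }
      }]
    });

Static rulesets, by contrast, ship as JSON files referenced from the manifest; those are the rules covered by the 330,000-rule limit and the safe-rule-update review skipping described in this post.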
In the latest release of Chrome, we're introducing Minimized Custom Tabs, a feature that allows users to effortlessly transition between native app and web content. With a simple tap on the down button in the Chrome Custom Tabs toolbar, users can minimize a Custom Tab into a compact, floating picture-in-picture window. This seamless integration enables multi-tasking across surfaces, enhancing the in-app web browsing experience. By tapping on the floating window, users can easily maximize the tab, restoring it to its original size. Posted by Victor Gallet, Senior Product Manager
Google and many other organizations, such as NIST, IETF, and NSA, believe that migrating to post-quantum cryptography is important due to the large risk posed by a cryptographically-relevant quantum computer (CRQC). In August, we posted about how Chrome Security is working to protect users from the risk of future quantum computers by leveraging a new form of hybrid post-quantum cryptographic key exchange, Kyber (ML-KEM) [1]. We're happy to announce that we have enabled the latest Kyber draft specification by default for TLS 1.3 and QUIC on all desktop Chrome platforms as of Chrome 124. This rollout revealed a number of previously-existing bugs in several TLS middlebox products. To assist with the deployment of fixes, Chrome is offering a temporary enterprise policy to opt out. Launching opportunistic quantum-resistant key exchange is part of Google's broader strategy to prioritize deploying post-quantum cryptography in systems today that are at risk if an adversary has access to a quantum computer in the future. We believe that it's important to inform standards with real-world experience, by implementing drafts and iterating based on feedback from implementers and early adopters. This iterative approach was a key part of developing QUIC and TLS 1.3. It's part of why we're launching this draft version of Kyber, and it informs our future plans for post-quantum cryptography. Chrome's post-quantum strategy prioritizes quantum-resistant key exchange in HTTPS, and increased agility in certificates from the Web PKI. While PKI agility may appear somewhat unrelated, its absence has contributed to significant delays in past cryptographic transitions and will continue to do so until we find a viable solution in this space. A more agile Web PKI is required to enable a secure and reliable transition to post-quantum cryptography on the web. To understand this, let's take a look at HTTPS and the current state of post-quantum cryptography. In the context of HTTPS, cryptography is primarily used in three different ways:

- Symmetric encryption/decryption. HTTP is transmitted as data inside a TLS connection using an authenticated cipher (AEAD) such as AES-GCM. These algorithms are broadly considered safe against quantum cryptanalysis and can remain in place.

- Key exchange. Symmetric cryptography requires a secret key. Key exchange is a form of asymmetric cryptography in which two parties can mutually generate a shared secret key over a public channel. This secret key can then be used for symmetric encryption and decryption. All current forms of asymmetric key exchange standardized for use in TLS are vulnerable to quantum cryptanalysis.

- Authentication. In HTTPS, authentication is achieved primarily through the use of digital signatures, which are used to convey server identity, handshake authentication, and transparency for certificate issuance. All of the digital signature and public key algorithms standardized for authentication in TLS are vulnerable to quantum cryptanalysis.

This results in two separate quantum threats to HTTPS. The first is the threat to traffic being generated today. An adversary could store encrypted traffic now, wait for a CRQC to be practical, and then use it to decrypt the traffic after the fact. This is commonly known as a store-now-decrypt-later attack. This threat is relatively urgent, since it doesn't matter when a CRQC is practical—the threat comes from storing encrypted data now. Defending against this attack requires the key exchange to be quantum resistant.
Launching Kyber in Chrome enables servers to mitigate store-now-decrypt-later attacks. The second threat is that future traffic is vulnerable to impersonation by a quantum computer. Once a CRQC actually exists, it could be used to break the asymmetric cryptography used for authentication in HTTPS. To defend against impersonation from a CRQC, we need to migrate all of the asymmetric cryptography used for authentication to post-quantum variants. However, breaking authentication only affects traffic generated after the availability of CRQCs. This is because breaking authentication on a recorded transcript doesn't help the attacker impersonate either party—the conversation has already finished. In other words, there's no store-now-decrypt-later equivalent for authentication, and so while migrating key exchange and authentication to post-quantum variants are both important, migrating authentication is less urgent than key exchange. This is good, because there are a variety of challenges for migrating to post-quantum authentication. Specifically, size. Post-quantum cryptography is big compared to the pre-quantum cryptographic algorithms used in HTTPS. A Kyber key exchange is ~1KB transmitted per peer, whereas an X25519 key exchange is only 32 bytes per peer, a more than 30x increase. The actual key exchange operation in Kyber is quite fast; transmitting Kyber keys is quite slow. The extra size from Kyber causes the TLS ClientHello to be split into two packets, resulting in a 4% median latency increase to all TLS handshakes in Chrome on desktop. On desktop platforms, combined with HTTP/2 and HTTP/3 connection reuse, this is not large enough to be noticeable in Core Web Vitals. Unfortunately, it is noticeable on Android, where Internet connections are often lower bandwidth and higher latency, and so we have not yet launched on Android. The size issues are even worse for authentication. ML-DSA (Dilithium) keys and signatures are ~40X the size of ECDSA keys and signatures. A typical TLS connection today uses two public keys and five signatures to fulfill all of the authentication requirements. A naive swap to ML-DSA would add ~14KB to the TLS handshake. Cloudflare anticipates it would increase latency by 20-40%, and we've seen that a single kilobyte was already impactful. Instead, we need alternate approaches to authentication in HTTPS that provide the desired properties and transmit fewer signatures and public keys. We think the important next step for quantum-resistant authentication in HTTPS is to focus on enabling trust anchor agility. Historically, the public Web PKI could not deploy new algorithms quickly. This is because most site operators typically provision a single certificate for all supported clients and browsers. This certificate must both be issued from a trust hierarchy that is trusted by every browser or client the site operator supports, and the certificate must be compatible with each of these clients. The single certificate model makes it difficult for the Web PKI to evolve. As security requirements change, site operators may find that there is no longer an intersection between certificates trusted by deployed clients, certificates trusted by new clients, algorithms supported by deployed clients, and algorithms supported by new clients (all crossed with every separate browser and root store). These clients may range from different browsers, older versions of those browsers not receiving updates, all the way to applications on smart TVs or payment terminals.
As requirements diverge, site operators have to choose between security for new clients, and compatibility with older clients. This conflict, in turn, limits new clients making PKI changes to improve user security, such as transitioning to post-quantum. Under a single-certificate deployment model, the newest clients cannot diverge too far from the oldest clients, or server operators will be left with no way to maintain compatibility. We propose to solve this by moving to a multi-certificate deployment model, where servers may be provisioned with multiple certificates, and automatically send the correct one to each client. This enables trust anchor agility, and allows clients to evolve at different rates. Clients who are up to date and reliably receiving updates could access the authentication mechanisms best suited for the Internet as it evolves without being hamstrung by old clients no longer receiving updates. Certification authorities and trust stores could introduce new post-quantum trust anchors without needing to wait for the slowest actor to add support. This would drastically simplify the post-quantum transition since it also enables the seamless addition and removal of hierarchies using experimental post-quantum authentication methods. At first glance, TLS may appear to have trust anchor agility by way of cross-signatures and signature algorithm negotiation. However, neither of these mechanisms provide true trust anchor agility, nor were they intended to. A cross-signature is when a CA creates two different certificates for a single subject and public key pair, but with different issuers and signatures. The first certificate is issued and signed as usual, by the CA itself. The second is issued and signed by a different trust hierarchy, often by a different organization. For example, the original Let's Encrypt intermediate certificate existed in two forms. The "regular" intermediate was signed by the Let's Encrypt root, whereas the "cross-signed" intermediate was signed by IdenTrust. This approach of cross-signing a new PKI hierarchy with an older, more broadly available PKI hierarchy allows a new CA to bootstrap its trust on old devices, so long as the older devices support the signing algorithm. Cross-signatures, however, rely on significant cooperation among often competing CAs, and may not be suitable when different clients have different needs. This limits when site operators can use cross-signs. Additionally, devices that do not support a new algorithm will still need to be updated to be able to use the new signing algorithm in the newer certificate, regardless of whether or not it is cross-signed. Signature algorithm negotiation allows TLS peers to agree on the algorithm to be used for the handshake signature. This algorithm needs to correspond with the key type used in the certificate. Endpoints can infer that if the peer supports an algorithm such as ECDSA for the handshake signature, it must also support ECDSA certificates. This value can be used to multiplex between an RSA-based chain and a smaller ECDSA-based chain. For example, Google's RSA-based large compatibility chain is four certificates and ~4.1KB, whereas the shortest ECDSA-based chain is three certificates and only ~1.7KB. Signature algorithm negotiation does not provide trust anchor agility. While the signature algorithm information implies algorithm support, it provides no information about what trust anchors a client actually trusts.
A client can support ECDSA, but not have the latest ECDSA root certificate from a specific CA. Due to the wide variety of trust stores in use, many organizations may still often need to be conservative in when they serve ECDSA certificates and may need to provide a longer, cross-signed chain for maximum compatibility. Neither cross-signatures nor signature algorithm negotiation are solutions to migrating to post-quantum cryptography for authentication. Cross-signatures do not help with new algorithms, and signature algorithm negotiation is solely about negotiating algorithms, not providing information about trust anchors. We expect a gradual transition to post-quantum cryptography. Inferring information about the contents of the trust store from the result of signature algorithm negotiation risks ossifying to a specific version of a specific trust store, rather than purely being used for algorithm negotiation. Instead, to introduce agility to TLS we need an explicit mechanism for trust anchor negotiation, to allow the client and server to efficiently determine which certificate to use. At the November 2023 IETF meeting in Prague, Chrome proposed "Trust Expressions" as a mechanism for trust anchor negotiation in TLS. Chrome is currently seeking community input on Trust Expressions via the IETF process. We think the goal of being able to cleanly deploy multiple certificates to handle a range of clients is much more important than the specific mechanisms of the proposal. From there, we can explore more efficient ways to authenticate servers, such as Merkle Tree Certificates. We view introducing some mechanism for trust anchor agility as a necessity for efficient post-quantum authentication. Experimentation will be extremely important as proposals are developed. Agility also enables using different solutions in different contexts, rather than sending extra data for the lowest common denominator; solutions like Merkle Tree Certificates and intermediate elision require up-to-date clients. Given these constraints, priorities, and risks, we think agility is more important than defining exactly what a post-quantum PKI will look like at this time. We recommend against immediately standardizing ML-DSA in X.509 for use in the public Web PKI via the CA/Browser Forum. We expect that ML-DSA, once NIST completes standardization, will play a part in a post-quantum Web PKI, but we're focusing on agility first. This does not preclude introducing ML-DSA in X.509 as an option for private PKIs, which may be operating on more strict post-quantum timelines and have fewer constraints around certificate size, handshake latency, issuance transparency, and unmanaged endpoints. Ultimately, we think that any approach to post-quantum authentication has the same first requirement: a migration mechanism for clients to opt in to post-quantum secure authentication mechanisms when servers support it. Post-quantum authentication presents significant challenges to the Web ecosystem, but we believe trust anchor agility will enable us to overcome them and lead to a more secure, robust, and performant post-quantum web.

Notes

[1] The draft is X25519Kyber768, which is a combination of the pre-quantum algorithm X25519 and the post-quantum algorithm Kyber-768. Kyber is being renamed to ML-KEM; however, for the purposes of this post, we will use "Kyber" to refer to the hybrid algorithm defined for TLS.

Posted by David Adrian, Bob Beck, David Benjamin and Devon O'Brien
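To make "hybrid" key exchange concrete: each side derives one shared secret from a classical group and one from the post-quantum KEM, and both feed the key derivation, so the session stays secure unless both algorithms are broken. Below is a simplified Node.js sketch; the mlkem768 module and its keygen/encaps API are hypothetical placeholders, and real TLS mixes the secrets into the handshake key schedule rather than a bare HKDF:

    const crypto = require("node:crypto");
    const mlkem = require("mlkem768"); // hypothetical ML-KEM library

    // Classical share: X25519.
    const alice = crypto.generateKeyPairSync("x25519");
    const bob = crypto.generateKeyPairSync("x25519");
    const classicalSecret = crypto.diffieHellman({
      privateKey: alice.privateKey,
      publicKey: bob.publicKey,
    });

    // Post-quantum share: encapsulate to the peer's ML-KEM public key.
    const { publicKey } = mlkem.keygen();             // hypothetical API
    const { sharedSecret } = mlkem.encaps(publicKey); // hypothetical API

    // Hybrid: concatenate both secrets into a single key derivation.
    const sessionKey = Buffer.from(crypto.hkdfSync(
      "sha256",
      Buffer.concat([classicalSecret, Buffer.from(sharedSecret)]),
      Buffer.alloc(32),              // salt
      Buffer.from("hybrid kx demo"), // info
      32
    ));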
Used billions of times each day, the Chrome address bar (which we call the "omnibox") is a powerful tool to make searching the web easier, whether you're trying to quickly find your tabs or bookmarks, return to a web page you previously visited, or find information. With the latest release of Chrome (M124), we're integrating machine learning models to power the Chrome omnibox on desktop, so that web page suggestions are more precise and relevant to you. In the future, these models will also help improve the relevance scoring of search suggestions. Here's a closer look at some of the important insights that helped our team build this integration and where we hope the new model takes us.

How we got here

The omnibox's legacy relevance scoring system was inflexible. A set of hand-built and hand-tuned formulas did the job well, but were difficult to improve or to adapt to new scenarios. As a result, the scoring system went largely untouched for a long time. For most of that time, an ML-trained scoring model was the obvious path forward. But it took many false starts to finally get here. Our inability to tackle this challenge for so long was due to the difficulty of replacing the core mechanism of a feature used literally billions of times every day. Software engineering projects are sometimes described as "building the plane while flying it." This project felt more like "replacing all the seats in every plane in the world while they're all flying." The scale was enormous and the changes are felt directly by every user. This ambitious undertaking would not have been possible without the work of such a talented and dedicated team. There were bumps in the road, walls we had to break through, and unanticipated issues that slowed us down, but the team was driven by a sincere belief in the impact of getting this right for our users.

A surprising insight

An ML model can learn from all the data at a scale that would be difficult to impossible for any individual person or team. And that can lead to surprising insights. The coolest example of this phenomenon on this project was when we looked at the scoring curve of one particular signal: time since last navigation. The expectation with this signal is that the smaller it is (the more recently you've navigated to a particular URL), the bigger the contribution that signal should make towards a higher relevance score. And that is, in fact, what the model learned. But when we looked closer, we noticed something surprising: when the time since navigation was very low (seconds instead of hours, days or weeks), the model was decreasing the relevance score. It turns out that the training data reflected a pattern where users sometimes navigate to a URL that was not what they really wanted and then immediately return to the Chrome omnibox and try again. In that case, the URL they just navigated to is almost certainly not what they want, so it should receive a low relevance score during this second attempt. In retrospect, this is obvious. And if we had not launched ML scoring, we definitely would have added a new rule to the old system to reflect this scenario. But before the training system observed and learned from this pattern, it never occurred to anyone that this might be happening. By Justin Donnelly, Chrome software engineer
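To visualize the insight above, here is a toy version of a recency signal with the learned dip for very recent navigations. The breakpoints and weights are invented for illustration; Chrome's actual model is trained from data, not hand-written:

    // Toy example only, not Chrome's learned curve.
    function recencyContribution(secondsSinceLastNavigation) {
      if (secondsSinceLastNavigation < 30) {
        // Just visited and immediately back in the omnibox: likely a
        // mis-navigation, so demote instead of boosting.
        return 0.2;
      }
      if (secondsSinceLastNavigation < 3600) return 1.0;  // very recent: strong boost
      if (secondsSinceLastNavigation < 86400) return 0.6; // same day
      return 0.3;                                         // older: weak contribution
    }

A hand-tuned system would have needed someone to notice this pattern first; the trained model picked it up directly from the data.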
Cookies – small files created by sites you visit – are fundamental to the modern web. They make your online experience easier by saving browsing information, so that sites can do things like keep you signed in and remember your site preferences. Due to their powerful utility, cookies are also a lucrative target for attackers. Many users across the web are victimized by cookie theft malware that gives attackers access to their web accounts. Operators of Malware-as-a-Service (MaaS) frequently use social engineering to spread cookie theft malware. These operators even convince users to bypass multiple warnings in order to land the malware on their device. The malware then typically exfiltrates all authentication cookies from browsers on the device to remote servers, enabling the attackers to curate and sell the compromised accounts. Cookie theft like this happens after login, so it bypasses two-factor authentication and any other login-time reputation checks. It's also difficult to mitigate via anti-virus software since the stolen cookies continue to work even after the malware is detected and removed. And because of the way cookies and operating systems interact, primarily on desktop operating systems, Chrome and other browsers cannot protect them against malware that has the same level of access as the browser itself. To address this problem, we're prototyping a new web capability called Device Bound Session Credentials (DBSC) that will help keep users more secure against cookie theft. The project is being developed in the open at github.com/WICG/dbsc with the goal of becoming an open web standard. By binding authentication sessions to the device, DBSC aims to disrupt the cookie theft industry since exfiltrating these cookies will no longer have any value. We think this will substantially reduce the success rate of cookie theft malware. Attackers would be forced to act locally on the device, which makes on-device detection and cleanup more effective, both for anti-virus software as well as for enterprise managed devices. Learning from prior work, our goal is to build a technical solution that's practical to deploy to all sites large and small, to foster industry support to ensure broad adoption, and to maintain user privacy.

Technical solution

DBSC binds a session to a cryptographic key pair that is created and kept on the device, ideally backed by secure hardware, and it is designed to work with software-isolated solutions as well. The API allows a server to associate a session with this public key, as a replacement or an augmentation to existing cookies, and verify proof-of-possession of the private key throughout the session lifetime. To make this feasible from a latency standpoint and to aid migrations of existing cookie-based solutions, DBSC uses these keys to maintain the freshness of short-lived cookies through a dedicated DBSC-defined endpoint on the website. This happens out-of-band from regular web traffic, reducing the changes needed to legacy websites and apps. This ensures the session is still on the same device, enforcing it at regular intervals set by the server. For current implementation details please see the public explainer.

Preserving user privacy

DBSC will be fully aligned with the phase-out of third-party cookies in Chrome. In third-party contexts, DBSC will have the same availability and/or segmentation that third-party cookies will, as set by user preferences and other factors. This is to make sure that DBSC does not become a new tracking vector once third-party cookies are phased out, while also ensuring that such cookies can be fully protected in the meantime.
If the user completely opts out of cookies, third-party cookies, or cookies for a specific site, this will disable DBSC in those scenarios as well.

Where to follow the progress

The project is being developed in the open on GitHub and we have published an estimated timeline. This is where we will post announcements and updates to the expected timelines as needed. Our goal is to allow origin trials for all interested websites by the end of 2024. Please reach out if you'd like to get involved. We welcome feedback from all sources, either by opening a new issue or starting a discussion on GitHub. Posted by Kristian Monsen, Chrome Counter Abuse
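To sketch the refresh flow described in this post: the server stores the device-bound public key at session start, then periodically challenges the browser to prove possession of the private key before reissuing the short-lived cookie. A simplified server-side sketch in Node.js follows; names like issueShortLivedCookie are hypothetical, and the real header and endpoint formats are defined in the public explainer:

    const crypto = require("node:crypto");

    function handleRefresh(session, challenge, signatureB64) {
      // Verify the signature over the server-issued challenge with the
      // device-bound public key registered when the session was created.
      const ok = crypto.verify(
        null,                                // Ed25519: algorithm implied by key
        Buffer.from(challenge),
        session.devicePublicKey,             // a KeyObject stored at registration
        Buffer.from(signatureB64, "base64")
      );
      if (!ok) throw new Error("proof-of-possession failed");
      return issueShortLivedCookie(session); // hypothetical helper
    }

Because the private key never leaves the device, an exfiltrated cookie stops working once it expires and cannot be refreshed anywhere else.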
Today's The Fast and the Curious post covers the release of Speedometer 3.0, an upgraded browser benchmarking tool to optimize the performance of web applications. In collaboration with major web browser engines, Blink/V8, Gecko/SpiderMonkey, and WebKit/JavaScriptCore, we're excited to release Speedometer 3.0. Benchmarks, like Speedometer, are tools that can help browser vendors find opportunities to improve performance. Ideally, they simulate functionality that users encounter on typical websites, to ensure browsers can optimize areas that are beneficial to users. Let's dig into the new changes in Speedometer 3.0.

Applying a multi-stakeholder governance model

Since Speedometer's initial release in 2014 by the WebKit team, browser vendors have successfully used it to optimize their engines and improve user experiences on the web. Speedometer 2.0, a result of a collaboration between Apple and Chrome, followed in 2018, and it included an updated set of workloads that were more representative of the modern web at that time. The web has changed a lot since 2018, and so has Speedometer in its latest release, Speedometer 3. This work has been based on a joint multi-stakeholder governance model to share work, and build a collaborative understanding of performance on the web to help drive browser performance in ways that help users. The goal of this collaborative project is to create a shared understanding of web performance so that improvements can be made to enhance the user experience. Together, we were able to improve how Speedometer captures and calculates scores, show more detailed results and introduce an even wider variety of workloads. This cross-browser collaboration introduced more diverse perspectives that enabled clearer insights into a broader set of web users and workflows, ensuring the newest version of Speedometer will help make the web better for everyone, regardless of which browser they use.

Why is building workloads challenging?

The Chrome Aurora team, together with colleagues from other participating browser vendors, was tasked with finding new workloads that accurately reflect what users experience across the vast, diverse and eclectic web of 2024 and beyond. A few tests and workloads can't simulate the entire web, but while building Speedometer 3 we have established some criteria for selecting ones that are critical to the user's experience. We are now closer to a representative benchmark than ever before. Let's take a look at how Speedometer workloads evolved.

How did the workloads change?

Frameworks: We drew on the HTTP Archive and discussed inclusion with all browser vendors to ensure we cover a good range of implementations. For the initial evaluation, we took a snapshot of the HTTP Archive from March 2023 to determine the top JavaScript UI frameworks currently used to build complex web apps.

Validation: We worked with the Chrome V8 team to validate our assumptions above. In Chrome, we can use runtime-call-stats to measure time spent in each web API (and additionally many internal components). This allows us to get an insight into how dominant certain APIs are. If we look at Speedometer 2.1, we see that a disproportionate amount of benchmark time is spent in innerHTML. While innerHTML is an important web API, it's overrepresented in Speedometer 2.1. Doing the same analysis on the new version 3.0 yields a slightly different picture.

What workloads are included?

TodoMVC: Many developers might recognize the TodoMVC app. It's a popular resource for learning and offers a wide range of TodoMVC implementations with different frameworks.
The tests consist of the following steps:

1. Add a task.
2. Mark the task as complete.
3. Delete the task.
4. Repeat steps 1-3 a set amount of times.

These tests seem simple, but they let us benchmark DOM manipulations. Having a variety of framework implementations also covers several different ways this can be done.

Complex DOM / TodoMVC

The complex DOM workloads embed various TodoMVC implementations in a static UI shell that mimics a complex web page. The idea is to capture the performance impact of executing seemingly isolated actions (e.g. adding/deleting todo items) in the context of a complex website. Small performance hits that aren't obvious in an isolated TodoMVC workload are amplified in a larger application and therefore capture more real-world impact. The tests are similar to the TodoMVC tests, executed in the complex DOM & CSSOM environment. This introduces an additional layer of complexity that browsers have to be able to handle effortlessly.

Single-page applications (News Site)

Single-page applications (SPAs) are widely used on the web for streaming, gaming, social media and pretty much anything you can imagine. A SPA lets us capture navigating between pages and interacting with an app. We chose a news site to represent a SPA, since it allows us to capture the main areas of interest in a deterministic way. An important factor was that we want to ensure we are using static local data and that the app doesn't rely on network requests to present this data to the user. Two implementations are included: one built with Next.js and the other with Nuxt. This gave us the opportunity to represent applications built with meta frameworks, with the caveat that we needed to ensure we use static outputs. The tests perform the following steps:

1. Click on the 'More' toggle of the navigation.
2. Click on a navigation button.
3. Repeat steps 1 and 2 a set amount of times.

These tests let us evaluate how well a browser can handle large DOM and CSSOM changes, by changing a large amount of data that needs to be displayed when navigating to a different page.

Charting Apps & Dashboards

Charting apps allow us to test SVG and canvas rendering by displaying charts in various workloads. These apps represent popular sites that display financial information, stock charts or dashboards. Neither SVG rendering nor use of the canvas API was represented in previous releases of Speedometer. Observable Plot displays a stacked bar chart, as well as a dotted chart. It is based on D3, which is a JavaScript library for visualizing tabular data and outputs SVG elements. It loops through a big dataset to build the source data that D3 needs, using map, filter and flatMap methods. As a result this exercises creation and copying of objects and arrays. Chart.js is a JavaScript charting library. The included workload displays a scatter graph with the canvas API, both with some transparency and with full opacity. This uses the same data as the previous workload, but with a different preparation phase. In this case it makes heavy use of trigonometry to compute distances between airports. React Stockcharts displays a dashboard for stocks. It is based on D3 for all computation, but outputs SVG directly using React. WebKit Perf-Dashboard is an application used to track various performance metrics of WebKit. The dashboard uses canvas drawing and web components for its UI. These workloads test DOM manipulation with SVG or canvas by interacting with charts. For example, here are the interactions of the Observable Plot workload:

1. Prepare data: compute the input datasets to output structures that D3 understands.
2. Add stacked chart: this draws a chart using SVG elements.
3. Change input slider to change the computation parameters.
4. Repeat steps 1 and 2.
5. Reset: this clears the view.
6. Add dotted chart: this draws another type of graph (dots instead of bars) to exercise different drawing primitives. This also uses a power scale.

Code Editors

Editors, for example WYSIWYG text and code editors, let us focus on editing live text and capturing form interactions. Typical scenarios are writing an email, logging into a website or filling out an online form. Although there is some form interaction present in the TodoMVC apps, the editor workloads use a large data set, which lets us evaluate performance more accurately. CodeMirror is a code editor that implements a text input field with support for many editing features. Several languages and frameworks are available, and for this workload we used the JavaScript library from CodeMirror. Tiptap Editor is a headless, framework-agnostic rich text editor that's customizable and extendable. This workload used Tiptap as its basis and added a simple UI to interact with. Both apps test DOM insertion and manipulation of a large amount of data in the following way:

1. Create an editable element.
2. Insert a long text: CodeMirror loads the development bundle of React, while Tiptap loads an excerpt of Proust's Du Côté de Chez Swann.
3. Highlight text: CodeMirror turns on syntax highlighting, while Tiptap sets all the text to bold.

Parting words

If you have feedback or ideas for future workloads, please open an issue on Github to start the discussion. Posted by Thorsten Kober, Chrome Aurora
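For a sense of what these workloads exercise, here is a simplified sketch of the TodoMVC-style interaction steps listed earlier, using the conventional TodoMVC markup classes; the real Speedometer harness adds timing, async boundaries and per-framework adapters:

    // Simplified sketch of a TodoMVC benchmark step, not Speedometer's harness.
    function addTodo(text) {
      const input = document.querySelector(".new-todo");
      input.value = text;
      // TodoMVC implementations commit a new item on Enter.
      input.dispatchEvent(new KeyboardEvent("keyup", { key: "Enter", bubbles: true }));
    }

    function completeAndDeleteFirst() {
      document.querySelector(".todo-list .toggle").click();  // mark complete
      document.querySelector(".todo-list .destroy").click(); // delete
    }

    for (let i = 0; i < 100; i++) {
      addTodo(`Something to do ${i}`);
    }
    completeAndDeleteFirst();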
Balancing security and usability is always top of mind for us as we strive to stay on top of the constantly evolving threat landscape while building products that are delightful to use. To that end, we'd like to announce a few recent changes to how Chrome works with Google Safe Browsing to keep you safe online while optimizing for smooth and uninterrupted web browsing.

Asynchronous checks

Today, Safe Browsing checks are on the blocking path of page loads in Chrome, meaning that users cannot see pages until checks are completed. While this works fine for local-first checks such as those made using Safe Browsing API v4, it can add latency for checks made directly with the Safe Browsing server. Starting in Chrome 122, we will begin to introduce an asynchronous mechanism which will allow sites to load even while real-time checks with Safe Browsing servers are in progress. We expect this to reduce page load time and improve user experience as real-time server-side checks will no longer block page load, although if a site is found to be dangerous after the page loads then a warning will still be shown. In addition to the performance boost, this change will let us improve the quality of protection over time. By taking the remote lookup outside of the blocking path of the page load, we're now able to experiment with and deploy novel AI and ML based algorithms to detect and block more phishing and social engineering attacks. It was previously challenging to perform such experimentation because of the potential to delay page loads. In terms of potential risks, we evaluated the following and concluded that sufficient mitigations are in place:

- Phishing and social engineering attacks: With the move to asynchronous checks, such sites may start to load while server-side Safe Browsing checks are in progress. We have studied the timing data and concluded that it is extremely unlikely a user would have significantly interacted with (e.g. typed in a password) such a site by the time a warning is shown.

- Exploits against the browser: Chrome maintains a local Safe Browsing list of some sites which are known to deliver browser exploits, and we'll continue to check that synchronously. Besides this, we always recommend updating Chrome as soon as an update is available, to stay protected online.

Sub-resource checks

Most sites we encounter include various sub-resources as a way to render their content. These sub-resources can include images, scripts, and more. Chrome has historically checked both top-level URLs as well as sub-resources with Safe Browsing in order to warn on potentially harmful sites. While the majority of sub-resources are safe, in the past, we'd commonly observe compromised sites embedding sub-resources that were being leveraged by bad actors to distribute malware and exploit browsers at scale. In recent years, we've seen this attacker trend decline – large scale campaigns that exploit sub-resources are no longer common, making sub-resource checks less important. Additionally, our advances in intelligence gathering, threat detection, and Safe Browsing APIs mean that we now have other ways to protect users in real-time without relying on sub-resource checks. For example, Chrome's client-side visual ML model can spot images used to create phishing pages, regardless of their use of sub-resources. As such, moving forward Chrome will no longer check the URLs of sub-resources with Safe Browsing.
This means that Chrome clients now connect to Google less frequently, which reduces unnecessary network bandwidth cost for users. On the Safe Browsing side, the change allows us to drastically simplify detection logic and APIs, which helps improve infrastructure reliability and warning accuracy, thus reducing risk overall. PDF download checks Finally, we have vastly reduced the frequency with which Chrome contacts Safe Browsing to check PDF downloads. In the past, PDF was a widely exploited file type due to its popularity. As time has passed, thanks in part to the ongoing hardening of PDF viewers (for example, Chrome's PDF viewer is sandboxed), we aren't seeing widespread exploitation of PDF anymore, nor do we hear industry reports about it being a dangerous file type. Even when we have observed malicious PDFs in the wild, they have contained links that redirect users back to Chrome, which gives us another chance to protect users. As a result of this change, Chrome is now contacting Safe Browsing billions of times less often each week. What to expect The changes described above, while mostly under the hood, should result in a smoother web browsing experience for Chrome users without a degradation in security posture. We'll continue to monitor trends in the threat landscape, and remain ready to respond to keep you safe online. Posted by Jasika Bawa, Chrome Security & Jonathan Li, Safe Browsing
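As a footnote to the asynchronous-checks change described in this post, the before-and-after control flow can be sketched roughly as follows. This is a conceptual illustration only: the function names are invented placeholders, not Chrome's implementation.

```typescript
// Hypothetical reputation-check primitives, for illustration only.
declare function checkUrlWithSafeBrowsing(url: string): Promise<'safe' | 'dangerous'>;
declare function startPageLoad(url: string): void;
declare function showWarningInterstitial(url: string): void;

// Before: the page load waits for the server-side verdict (adds latency).
async function loadWithBlockingCheck(url: string): Promise<void> {
  const verdict = await checkUrlWithSafeBrowsing(url);
  if (verdict === 'dangerous') return showWarningInterstitial(url);
  startPageLoad(url);
}

// After: the load starts immediately; a warning is still shown if the
// check later comes back dangerous.
async function loadWithAsyncCheck(url: string): Promise<void> {
  const verdict = checkUrlWithSafeBrowsing(url); // intentionally not awaited yet
  startPageLoad(url);
  if ((await verdict) === 'dangerous') showWarningInterstitial(url);
}
```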
We are thrilled to share that Chromium issue tracking has migrated! Access the Issue Tracker, and supporting documentation. Why was this done Issue tracking moved from Monorail to the Chromium Issue Tracker (powered by the Google Issue Tracker) to provide a feature-rich and well-supported issue tracker for Chromium’s ecosystem. Chromium joins other open source projects (Git, Gerrit) on this tooling. What happens moving forward Existing Monorail issue links will redirect to the migrated issues in the new issue tracker. We will prioritize feedback to continue to improve the issue tracker experience. Help & Feedback You can reach out at any time to issue-tracker-support@chromium.org with questions or concerns.
As we shared last year, Chromium is moving to a different issue tracker to provide a well-supported user experience for the long term. Migration is beginning today (February 2, 2024) at 5pm PST. We expect migration will be completed by the end of day (PST) February 4, 2024. What’s happening We will migrate all Chromium issues, including issue history and stars, from Monorail to a different tool: Chromium Issue Tracker, powered by the Google Issue Tracker. This tooling change will provide a feature-rich and well-supported issue tracker for Chromium’s ecosystem. Chromium will join other open source projects (Git, Gerrit) on this tooling. Existing transparency levels to bugs will be maintained. Post-Migration We will publish another post once the migration is complete. Once the migration completes, existing Monorail issue links will redirect to the migrated issues in the new issue tracker. We will prioritize feedback to continue to improve the issue tracker experience. Documentation on new and common workflows will be added to chromium.org once the migration is complete. Help & Feedback You can reach out at any time to issue-tracker-support@chromium.org with questions or concerns.
Whether you’re browsing the web on your PC at home or on the go with your phone, we designed Chrome to be simple to use and work great on all platforms. For example, tools like Chrome sync have made it possible for you to access your bookmarks and passwords when switching between all your devices. In the coming weeks we’re making changes to Chrome on iOS to help you get to your most important stuff right away. Instead of having to set up Chrome sync on your device, you can now simply sign in to Chrome to save new things in your Google Account and access what's already there. This may feel familiar to you, as it’s how many Google apps on iOS already work today. Once you’re signed in to Chrome, you’ll be able to save your important stuff to your account, including bookmarks, reading lists, passwords, payment info, addresses and settings. And, you can separately opt in to synchronizing your tabs and browsing history from Chrome on iOS to your Google Account, which can help you pick up browsing where you left off on another device. These updates are also designed to help you manage your data. When you sign in to Chrome, the browsing data that’s already on your device will be kept separate as local data on your device. You’ll be able to easily distinguish local from account data in settings. And, if you want any local data to be available on your other devices, you can simply go to settings and save it to your account. Signing in to Chrome on iOS remains entirely optional. If you don’t sign in, you can still save your bookmarks, passwords and more, but they will be available only on the device where you saved them. You can also continue to sign in to Google web services like Gmail without signing in to Chrome. We’re hoping these changes make it even easier for you to get the best of Chrome while offering you all the flexibility you need. Posted by Nico Jersch, Chrome Product Manager
Today’s The Fast and the Curious post explores how Core Web Vitals saved Chrome users more than 10,000 years of waiting for web pages to load in 2023 (across Chrome desktop and Android) by quantifying the experience of sites and identifying opportunities to make improvements. In 2020, we introduced Web Vitals - essential quality signals for webpages to ensure a better user experience. Since then, there has been a massive leap in web performance made possible by our work on Core Web Vitals (CWV) and its broader impact on the web. Today, over 40% of sites pass all of the CWV metrics, leading to pages that load and respond to interactions more quickly. Here’s a closer look at the journey to improve site performance, and some specific work done in the browser and the ecosystem to enable this achievement. Chrome's Quest for Speed The very essence of the web lies in its ability to provide information and services efficiently and rapidly. This principle is at the heart of Google's business and drives our work on Chrome. However, we noticed an issue with sites over a long time horizon. Even if slow sites improved their performance for a while, it would often decline over time. No matter how fast Google Search might be, the user experience would be subpar if the pages found were slow to load. We could not help these sites improve their performance directly, but we wanted users to have a great experience when they moved from Google Search to the individual sites. To tackle the challenge of improving the user experience while simultaneously providing unified guidance to developers, teams from Search and Chrome collaborated to address the issue of slow web pages. Defining the Fast Web We examined millions of pages to define a public standard for a fast, user-friendly web page (initially published in The Science Behind Web Vitals). We published our specifications and data to the open ecosystem and took note of the feedback we received. The introduction of CWV metrics such as LCP (Largest Contentful Paint) was groundbreaking because it allowed us to measure when the user actually sees the content. The ability to measure the actual user experience at scale has been foundational to the improvements that we will discuss in this blog post. Next, we updated Google's search ranking algorithms in August 2021 to consider, among other factors, whether a page met the speed and usability standards established as part of CWV. Today, it remains highly recommended for site owners to achieve good Core Web Vitals for success with Search and to ensure a great user experience generally. Exponential Impact of Small Changes The results we saw after these changes were significant. The average page load in Chrome is now 166 ms faster. That might seem like a minor improvement, but small changes can accumulate to create a substantial impact on the web. So far in 2023, this project saved users over 10,000 years of waiting for web pages to load and over 1,200 years of waiting for web pages to respond to user input. And the web continues to get faster. We also tracked improvements in how many navigations meet Core Web Vitals (CWV). The current figures stand at 64.45% for mobile (up from 64%) and 68.39% for desktop (up from 67%). The Chrome Data team projects a ~69% pass rate by the end of the year. Caption: Our savings for LCP translate into 8,000 years saved for users waiting for pages to load on Android and 2,000 years in 2023 so far. 
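Metrics like LCP can also be observed from within a page itself. As a minimal sketch (production sites typically use the web-vitals library instead), a PerformanceObserver can report LCP candidates as the page loads, assuming a browser that supports the largest-contentful-paint entry type:

```typescript
// Log LCP candidate timings; the last candidate reported before the user
// first interacts with the page is its effective LCP.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // renderTime can be 0 for cross-origin images served without a
    // Timing-Allow-Origin header, so fall back to loadTime.
    const e = entry as PerformanceEntry & { renderTime: number; loadTime: number };
    console.log(`LCP candidate at ${(e.renderTime || e.loadTime).toFixed(0)} ms`);
  }
});
observer.observe({ type: 'largest-contentful-paint', buffered: true });
```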
On INP, we have saved users 800 years on Android and 450 years on Windows so far in 2023. Next, let’s look at some recent updates from both the Chrome team and the wider developer ecosystem, demonstrating how our joint efforts are speeding up the web. Chrome’s Core Web Vitals Achievements We’re proud to highlight numerous ways we’ve optimized performance. The Back/forward cache (bfcache) is designed to improve browsing experience by enabling instant back and forward navigation. BFCache’s hit rate has improved month-over-month on both Android (3.6%) and Desktop (1.8%). Another example of a particularly impactful optimization is our PreconnectOnAnchorInteraction feature, which connects to origins on pointer-down rather than pointer-up. This fully launched feature led to a 6/10ms (0.4/1%) median LCP improvement on Android/Desktop, and an improvement in cross-origin LCP by ~60ms on both Android and Desktop. The launch also resulted in a 0.08% Content Ad revenue increase, underlining the significant impact of performance optimizations on user engagement and ecosystem health. We also introduced prerendering, which makes pages load instantly by rendering them before the user actually visits. Page loads via typing URLs directly in the omnibox get a 500-700ms (14-25%) median LCP improvement when prerendered, depending on the platform, moving global median LCP across all navigations by 6.4ms. We're currently rolling out prerendering of omnibox-initiated searches. Chrome has been working hard to keep background tabs out of your way. Implementing tab throttling for background tabs running at EcoQOS on Windows 11 and Task Role and QoS Adjustments on macOS have led to improvements in Largest Contentful Paint (LCP) and Interaction to Next Paint (INP). The web’s modern ability to run all types of applications also comes with a mandate to manage the workload that this incurs. We have been optimizing Chrome under multiple active tabs and are happy to report improvements to scheduling and contention that have improved INP by 5% and LCP by 2% over the last 6 months. We have made targeted improvements to the page loading code in Chrome in 2022. These resulted in LCP improving by 10% on Android, and the CWV pass rate improving by 1.5%. Chrome's renderer has also seen some improvements. The renderer's main thread includes task queues for JavaScript, rendering, and image loading. Some changes that alter the priority of these tasks for optimal CWV include: High priority image loading: Historically, image loading had the same or lower priority than rendering. However, an experiment showed that between an image load task and a rendering task, choosing the image load task first can prevent layout shift of an intermediate frame that doesn't have the image and also improves LCP. The improvement on Android at the 75th percentile was -6.66% for CLS and -0.82% for LCP, improving the CWV pass rate on Android by +0.24%. A similar experiment that boosted the loading priority to "medium" of the first five images parsed from the HTML (for non-icon-sized images) showed an improvement on Android at the 75th percentile of -6.08% for CLS and -0.53% for LCP. A combined experiment showed the effects of both changes were largely independent. Prioritize compositing after delay: If it has been more than 100ms since the last compositing task run, elevate the priority of any queued compositing task so that it will preempt normal-priority work. This produced an improvement of -0.27% for CLS on Android and Windows at the 95th percentile. 
SVG Raster Optimizations: Another SVG drawing optimization improved INP pass rates on desktops by -2.28% for macOS at the 75th percentile. Caption: An example of Chrome’s new prioritized loading of the first five images parsed from the HTML. This improved LCP from 3.1s to 2.5s. Ecosystem Core Web Vitals Achievements The broader developer ecosystem has also achieved remarkable results by focusing on Core Web Vitals. The most significant achievement was the performance improvement on WordPress - the Content Management System that powers over a third of the web: "WordPress 6.3 loads 27% faster for block themes and 18% faster for classic themes, compared to WordPress 6.2, based on the Largest Contentful Paint (LCP) metric". Some parts of the WordPress ecosystem are going even further. By prerendering some links via the Speculation Rules API, NitroPack's prerendered page loads have seen an 80% LCP improvement and a 55% INP improvement compared to those without any speculative loading. Caption: The percentage of origins passing all three Core Web Vitals (LCP, FID, CLS) with a "good" experience (Source: HTTP Archive) The JavaScript framework community has also seen Core Web Vital gains. Over the past few years, Chrome Aurora has collaborated with Next.js, Angular, and Nuxt to release performance-focused features like the next/script component, NgOptimizedImage, and nuxt/google-fonts. In 2022, Next.js pass rates increased from 20.4% to 27.3%, Angular pass rates increased from 7.6% to 13.2%, and Nuxt pass rates increased from 15.8% to 20.2%. Enterprise partners who tried our features have seen wins in LCP. For example, after switching to NgOptimizedImage, Land's End saw a 40% LCP improvement on mobile in Lighthouse lab tests and a 75% improvement in LCP on desktop. In similar tests, CareerKarma's LCP dropped by 24% when switching to next/script's web worker mode. In the business world, performance optimization has led to remarkable growth. For instance, RedBus improved INP and observed a 7% increase in conversion rates. Economic Times improved INP and saw a 42% rise in page views and a 49% reduction in bounce rate. Meesho successfully brought LCP down from 6.9s to 2.5s, resulting in a 16.6% reduction in bounce rate and a 3% increase in conversions. Major web platforms have also seen significant improvements. Amazon has leveraged the bfcache change introduced in Chrome and saw a 22.7 percentage point (pp) improvement in bfcache hit rate with Chrome's latest version (M112). Cricbuzz experienced an even higher increase, with a 31.40 pp improvement. Partnering for a Better Web These performance improvements aren't just statistics – they represent real-world improvements in user experience (and hence business metrics) as well as developer experience. Crucially, we have managed to achieve these speed boosts without impacting developer satisfaction, which remains high at 90% overall. Through our developer satisfaction studies, we also found that about half (~51%) of developers are monitoring CWV and are either already optimizing for them or planning to do so. Furthermore, a significant majority (78%) of developers optimizing for CWV report seeing notable improvements in their scores. Our aim is always to create a better web experience for all users, so we're excited to see the web getting faster. But we also understand that maintaining developer satisfaction is crucial to sustaining these improvements. As developers continue to monitor and optimize for CWV, we are optimistic about the future of web performance. 
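The Speculation Rules API referenced in the NitroPack example above is available to any site. As a minimal sketch, assuming a browser with Speculation Rules support and using a placeholder URL, a page can ask the browser to prerender a likely next navigation by inserting a speculation rules script tag:

```typescript
// Ask the browser to prerender a likely next navigation. The URL below is a
// placeholder; real rules should list pages the user is likely to visit.
const rules = {
  prerender: [{ source: 'list', urls: ['/next-article'] }],
};

const script = document.createElement('script');
script.type = 'speculationrules';
script.textContent = JSON.stringify(rules);
document.head.appendChild(script);
```

The same rules can also be embedded statically in the page's HTML; the dynamic form shown here is convenient when the likely next page is only known after the page loads.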
On behalf of the Chrome team, we want to thank the developer community for their incredible work. By focusing on Core Web Vitals, we've made the web a significantly faster and more enjoyable place to be. We look forward to continuing this journey together, making the web better for everyone, everywhere. Posted by Addy Osmani, Annie Sullivan and Kouhei Ueno, Software Engineers for Chrome
Update: Migration is on track for early February 2024 instead of January 2024. Chromium is moving to a different issue tracker to provide a well-supported user experience for the long term. The Google team is targeting January 2024 for migration—this post explains the details. What’s happening We will migrate all Chromium issues, including issue history and stars, from Monorail to a different tool: Chromium Issue Tracker, powered by the Google Issue Tracker. This tooling change will provide a feature-rich and well-supported issue tracker for Chromium’s ecosystem. Chromium will join other open source projects (Git, Gerrit) on this tooling. Existing transparency levels to bugs will be maintained. Timing We are targeting January 2024 for Chromium’s migration, and will share milestones and timing updates throughout the coming months. Migration Readiness In due course, we will share additional resources, including a walkthrough of the new issue tracker, highlighting key features. Post-Migration While there will be differences, we are working to make the migration straightforward. Once the migration completes, existing Monorail issue links will redirect to the migrated issues in the new issue tracker. Help & Feedback You can reach out at any time to issue-tracker-support@chromium.org with questions or concerns.
TL;DR: Automated certificate issuance and management strengthens the underlying security assurances provided by Transport Layer Security (TLS) by increasing agility and resilience. This post describes the benefits of automation and upcoming changes to the Chrome Root Program policy that represent Chrome Security’s ongoing commitment to improving web security. Introduction One of the most common tools for enhancing user security on the Internet is “Transport Layer Security” (TLS), formerly known as “Secure Sockets Layer” (SSL). At its most basic level, TLS is a security protocol that encrypts data such that only the intended recipient can read it. Encryption makes the Internet more secure, but only if consistently and reliably deployed. The adoption of modern practices, like automated TLS certificate issuance and management, helps achieve this goal. Background: TLS - The Foundation for Encrypted Communications on the Internet You’re probably more familiar with TLS than you think, as it’s the underlying technology that puts the ‘S’ (referencing “Secure”) in HTTPS. We recently wrote about HTTPS and how it’s become the norm, with over 92% of page loads in Chrome on Android, Chrome OS, macOS, and Windows being transmitted using HTTPS. TLS is the cryptographic protocol that establishes a secure channel between a web browser and the web server hosting the website a user is browsing. It provides a few core security properties: Encryption: Ensures data being transmitted can’t be intercepted and understood by third parties or unintended recipients. Authentication: Ensures the web server or application a web browser is connecting to is who it claims to be. Integrity: Ensures data has not been altered while in transit. To establish a TLS connection, a web browser and server introduce themselves and agree on the rules used to secure subsequent communication. This introduction is referred to as the “TLS Handshake.” If you’d like to look closer and improve your understanding of how TLS connections are established, check out this resource. X.509 certificates, sometimes referred to as “certificates,” “TLS certificates,” or “server authentication certificates,” are an essential part of the TLS Handshake. Certificates are issued by trusted entities called “Certification Authorities” (CAs), which are responsible for verifying a domain name (e.g., google.com) and binding it to a corresponding public key. The certificate allows the web browser to verify it’s communicating with an authorized web server (i.e., server identity verification). It’s important to note that TLS isn’t a perfect solution, nor does its use guarantee a website is completely safe. Remember, using TLS ensures web traffic is encrypted while in transit to or from the corresponding web server; it does not guarantee the safety or security of that content. TLS does not prevent phishing or malicious content like malware or viruses from being served to a website’s users. Removing opportunities for confusion related to the terms “encrypted" (a security property provided by TLS) and “safe" (a subjective feeling) is one of the reasons why, beginning in Chrome 117, Chrome replaced the lock icon in the address bar with a new security-neutral “tune” icon. The Power of Automation As outlined above, server authentication certificates underpin the encrypted connections between web browsers and web servers. 
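As a quick way to see the outcome of the TLS Handshake described above, Node's built-in tls module can report what a handshake with a server negotiated. This is a minimal sketch; the hostname is a placeholder.

```typescript
import * as tls from 'node:tls';

// Open a TLS connection and print what the handshake negotiated.
const socket = tls.connect(
  { host: 'example.com', port: 443, servername: 'example.com' },
  () => {
    console.log('protocol:', socket.getProtocol());  // e.g. 'TLSv1.3'
    console.log('cipher:', socket.getCipher().name); // negotiated cipher suite
    socket.end();
  },
);
socket.on('error', (err) => console.error('TLS handshake failed:', err.message));
```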
Publicly trusted certificates – those trusted in products like Chrome by default – must adhere to both industry-wide and web browser-specific policies, like the CA/Browser Forum “Baseline Requirements” and the Chrome Root Program policy. One such requirement is that a certificate’s maximum validity is no more than 398 days. Certificate validity is defined in RFC 5280 and determines the maximum period during which a certificate may be considered valid for use in establishing TLS connections. While the maximum certificate validity is set to 398 days today, this hasn’t always been the case. In just over ten years, the ecosystem has trended from unlimited certificate lifetime to 60 months (2012), to 39 months (2015), to 825 days (2018), to 398 days (2020). With each reduction in maximum validity, the underlying goal was always the same: improving security. Shortening certificate lifetimes protects users by reducing the impact of compromised certificate keys and by speeding up the replacement of insecure technologies and practices across the web. Key compromises (i.e., when a web server certificate’s corresponding private key is accidentally or intentionally exposed) and the discovery of internet security weaknesses (e.g., the Heartbleed bug) are common events that can lead to real-world harm, and the web’s users should be better protected against them. The decreasing lifetime of certificates and the increasing number of certificates that organizations rely on have created a growing need for website operators to become more agile in managing certificates and corresponding infrastructure. Automation is one of the best methods of achieving increased agility, reliability, and security. What is Certificate Automation? While there isn’t a one-size-fits-all definition of certificate automation, there is one shared element: the requirement for “hands-on” input from humans during initial certificate issuance and ongoing renewal is minimized or eliminated. Certificate automation simplifies the often complex and error-prone tasks associated with managing certificates, enhancing security and operational efficiency. In the Web Public Key Infrastructure (“Web PKI"), there are two major categories of certificate automation solutions: open solutions relying on standards such as the Automatic Certificate Management Environment (ACME) protocol and solutions often relying on proprietary tools or protocols. Benefits of Automation Automated certificate issuance and management promotes agility: automation increases the speed at which the benefits of new security capabilities are realized. It increases resilience and reliability: automation eliminates human error and can help scale the certificate management process across complex environments, and, coupled with monitoring, it protects against website outages due to certificate expiration that could result in a loss of traffic, reputation, or revenue. Innovations like ACME Renewal Information (ARI) present opportunities to seamlessly protect website operators and organizations from outages related to unforeseen events; ARI allows CAs to communicate to web servers that they should attempt to renew a certificate during a defined window, for example, before a certificate is revoked due to an incident. Finally, it increases efficiency: automation reduces the time and resources required to manage certificates manually. Though there is an initial investment to automate, over time, team members have increased availability to focus on more strategic, value-adding activities. 
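A small example of the kind of signal automated tooling acts on is how close a certificate is to expiring. The sketch below is illustrative only: the hostname and the 30-day renewal threshold are placeholders, and real deployments would rely on an ACME client (ideally with ARI) rather than a hand-rolled check.

```typescript
import * as tls from 'node:tls';

const HOST = 'example.com';    // placeholder
const RENEW_WITHIN_DAYS = 30;  // placeholder renewal window

const socket = tls.connect({ host: HOST, port: 443, servername: HOST }, () => {
  const cert = socket.getPeerCertificate();
  const msLeft = new Date(cert.valid_to).getTime() - Date.now();
  const daysLeft = msLeft / (24 * 60 * 60 * 1000);
  console.log(`${HOST} certificate expires in ${daysLeft.toFixed(1)} days`);
  if (daysLeft < RENEW_WITHIN_DAYS) {
    // In an automated pipeline, this is where renewal would be triggered.
    console.log('within renewal window: trigger automated reissuance');
  }
  socket.end();
});
```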
Why does Automation Lead to Better Security? Automation improves security posture and increases resilience in response to unexpected events including CA incidents, Internet security weaknesses, and cryptographic deprecations. CA Incidents The Baseline Requirements prescribe response expectations for some types of CA incidents, and many of these responses include marking affected certificates as no longer trusted (“revoked”). Four years ago, Let’s Encrypt self-reported a bug that affected over 3 million certificates. In response to the incident, nearly 2 million certificates were revoked, meaning website operators needed to intervene and trigger replacement to avoid a potential outage. While the scale of this incident was atypical, Web PKI incidents that necessitate certificate re-issuance are commonplace. There are two important conclusions from this incident. First, the ACME protocol pioneered by and relied on by Let’s Encrypt presented the opportunity for affected website operators to recover from the incident with limited manual effort. More than 1.7 million affected certificates were replaced in less than 48 hours. Second, the incident resulted in Let’s Encrypt’s commitment to developing and deploying a new protocol (ARI, described above) capable of improving response to future CA incidents such that certificate replacement can occur automatically without human intervention. Let’s Encrypt announced a production deployment of ARI in March 2023. Other CAs have the opportunity to deploy this open protocol (e.g., Google Trust Services announced their production deployment of ARI in May 2023) to improve incident response. Internet Security Weaknesses In April 2014, a security vulnerability (“Heartbleed”) was discovered in a popular cryptographic software library used to secure the majority of servers on the Internet, breaking the security properties provided by TLS. It was estimated that in response to the bug, over 500,000 active publicly accessible server authentication certificates needed to be revoked and replaced. Despite a demonstrated vulnerability, remediation efforts from website operators were slow. Only 14% of affected websites completed the necessary remediation steps within a month of disclosure. About 33% of affected devices remained vulnerable nearly three years after disclosure. The maximum certificate validity permitted by the Baseline Requirements at the time was five years. For some website operators, this meant the need to revisit the state of their TLS configuration was incorrectly assumed to be years away - which partly explains the observed remediation inaction. Further, CAs who elected to revoke certificates faced significant costs related to hosting revocation information - estimated for one CA to be between $400,000 and $952,992.40 USD per month. The Baseline Requirements obligate CAs to host revocation information for each certificate they issue until the end of its validity period, meaning these costs may have needed to be sustained over several years - representing potentially catastrophic financial consequences to the organizations responsible for underpinning the web’s security. At a minimum, modern automation technologies like ACME and ARI would have reduced the manual effort required of website operators to reissue affected certificates. Considering the concerns related to vulnerable private key reuse, popular ACME clients like Certbot and Lego automatically create new key material for each certificate request. 
Further, if we could imagine a world where certificate validity was reduced, the maximum window of opportunity for attackers would have been significantly reduced from the 5-year window. As the degree of automation increases, so does the ease of transition to reduced certificate validity. Indeed, many sites are already using certificates with much shorter validity than today’s maximum of 398 days. For example, Facebook has implemented a highly automated certificate issuance and management workflow to protect its network edge and corresponding devices with certificates that are used for just a few days. Other CAs are defaulting to certificates valid for only 30 days. A final point of interest is that peer-reviewed research demonstrates that in response to the manual intervention necessitated by Heartbleed, system administrators who implemented automation were more prompt in performing certificate replacements when compared to those who did not. Cryptographic Deprecations Cryptographic hash functions — mathematical algorithms that produce a fixed-length output from an arbitrarily sized input — are central to the security of certificates. In 2005, researchers demonstrated the first weaknesses in the widely used SHA-1 hash function. In response to growing security concerns, in 2014, Chrome announced a deprecation timeline, with the CA/Browser Forum ultimately prohibiting the issuance of certificates that used SHA-1 after January 1, 2016. Unfortunately, this deprecation took years. Browsers had to wait for almost all affected certificates to be renewed, many of them manually, to avoid mass breakage. Modern automation technologies like ACME and ARI would have reduced the touch labor needed to reissue affected certificates. When coupled with reduced certificate validity, the web would have been able to transition away from SHA-1 much faster. And these cryptographic weaknesses weren't theoretical: in February 2017, researchers demonstrated a devastating vulnerability in SHA-1 — barely avoiding a crisis because Chrome had finished removing support for affected certificates just weeks before. Cryptographic deprecations aren't as infrequent as you might think, since there is a steady stream of legacy cryptography in TLS and PKI that Chrome is working to eradicate and modernize, ideally before it becomes vulnerable. The Opportunity for and Cost of Failure Expired certificates bring a website down, causing loss of productivity, reputational harm, and missed service level expectations. When considering failed TLS connections observed in Chrome versions released within the last year (i.e., Chrome 106 and greater) on all platforms, over 22% of these resulted from certificates with an invalid validity date. A 2019 study found that 3.9% of all HTTPS sites have expired certificates. State of Automation in the Ecosystem Between December 2022 and January 2023, our team ran a survey with owners of CAs included in the Chrome Root Store. The intent of the survey was to better understand existing and planned adoption of automated certificate issuance and management solutions. When coupled with publicly available data from Certificate Transparency logs and tools like crt.sh, the survey data estimated 58% of the certificates issued by the Web PKI today rely on the ACME protocol. There is clearly broad website operator support for issuing and managing certificates using ACME, and by extension, a strong demand for certificate automation in the Web PKI. 
The survey also highlighted that the set of CA owners that offer ACME support today and are included in the Chrome Root Store represents more than 95% of Web PKI’s certificate population. 70% of those corresponding CA owners self-reported increasing demand for ACME services, which we interpret as a strong indicator of a healthy and growing ACME user population across the ecosystem. None of the CA owners supporting ACME today indicated that ACME demand was decreasing. To better understand other types of automated certificate issuance and management solutions offered by CA owners included in the Chrome Root Store, we ran a separate survey between April and June 2023. When again coupled with publicly available data from Certificate Transparency logs and tools like crt.sh, the survey data indicated that more than 80% of the certificates issued by the Web PKI today are issued using some form of automation (which includes ACME). Organizations included in the Chrome Root Store that self-reported no automation support represented approximately 0.08% of the Web PKI certificate population. Our Commitment to Automation The Chrome Root Program provides governance and security review to determine the set of CAs trusted by default in Chrome. We’ve blogged about the Chrome Root Program in the past [1 and 2], but if you missed it, we keep users safe online by: Administering policy and governance activities to manage the set of CAs trusted by default in Chrome, evaluating impact and corresponding security implications related to public security incident disclosures by participating CAs, and leading positive change to make the ecosystem more resilient. Specific to the last point, the June 2022 release (Version 1.1) of the Chrome Root Program policy introduced the Chrome Root Program’s “Moving Forward, Together” (MFT) initiative that set out to share our vision of the future that includes modern, reliable, highly agile, purpose-driven PKIs with a focus on automation, simplicity, and security. Moving Forward, Together While “Moving Forward, Together" is non-normative and therefore not policy, it represents future initiatives on which we hope to collaborate further with members of the Web PKI ecosystem. To explore and understand the broader ecosystem impacts of the related proposals described in MFT, we: Study ecosystem data from publicly available tools like crt.sh and Censys, interpret data resulting from Chrome tools, experiments, and usage data, evaluate peer-reviewed research, and collect feedback through surveys like the ones related to automation solutions described earlier. Some of the MFT initiatives might be achieved through collaborations within the CA/Browser Forum. In other cases, it might be most appropriate for corresponding changes to land only in the Chrome Root Program policy, as not all CA owners who adhere to the CA/Browser Forum Baseline Requirements intend to serve Chrome’s focused PKI use case of server authentication - or wish to be trusted by default in Chrome. Regardless of how these proposals might eventually be implemented, we are committed to collaborating with community members to minimize adverse ecosystem impacts when appropriate and possible. Upcoming Policy Changes As announced last week at CA/Browser Forum Face-to-Face Meeting 60, we’ll soon be pre-releasing an updated version of the Chrome Root Program policy to collect feedback and requested clarifications from CA owners included in the Chrome Root Store. 
One of the major focal points of Version 1.5 is a requirement that applicants seeking inclusion in the Chrome Root Store must support automated certificate issuance and management. We’ve been communicating intent to require automation over the past year, including past Face-to-Face Meeting updates in February and June. It’s important to note that these new requirements do not prohibit Chrome Root Store applicants from supporting “non-automated” methods of certificate issuance and renewal, nor require website operators to only rely on the automated solution(s) for certificate issuance and renewal. The intent behind this policy update is to make automated certificate issuance an option for a CA owner’s customers. While we prefer ACME solutions over those that rely on proprietary protocols or tools, both forms of automation satisfy the intent of the new policy requirement. Specifically, we prefer ACME because of its widespread ecosystem support and adoption. Further, ACME is open and benefits from continued innovation and enhancements from a robust set of ecosystem participants. There is an extensive set of well-documented ACME client options spanning multiple languages and supported platforms. Last but not least, ACME was designed specifically to meet the TLS certificate issuance needs of the Web PKI. Future Opportunities Related to Automation Promoting broader ubiquity of automated certificate issuance and management will establish an important foundation for the next generation of the Web PKI. Increased use of automation will also unlock future opportunities for more modern and agile infrastructures where strengthened security properties can be realized, for example, where maximum certificate validity can be reduced with minimal downsides. Continued collaboration across members of the Web PKI ecosystem (e.g., web browsers, CAs, website operators, and hosting providers) is necessary to make automation a viable option for all website operators. We’ve been encouraged by recent developments within the ACME ecosystem including ACME Renewal Information and Automated Certificate Management Environment for Subdomains. These initiatives aim to better protect website operators from unforeseen events that could affect certificate status and lead to outages, as well as to make it easier for popular server authentication use cases to be supported by ACME. There’s further opportunity related to improved fail-over (e.g., allowing a graceful transition to a new CA if the preferred provider is unavailable at the time of a request). We’re hopeful that as more CA owners support their customers in adopting automation, we’ll see continued developments such as these, making it even easier for website operators to securely obtain and manage server authentication certificates. Learn More If you’re a website operator, we encourage you to discover the potential of automated certificate issuance and management, and you should get started today! While we’ve compiled the below list of resources to improve your understanding, we encourage you to reach out to your corresponding CA owner to learn how they support, or plan to support, automation. Resources https://www.acmeisuptime.com/ https://letsencrypt.org/how-it-works/ https://certbot.eff.org/ RFC 8555 If you previously investigated implementing an automated certificate issuance and management solution and determined that it was either too difficult or that there were too many obstacles to make it a viable solution, we encourage you to reconsider. 
The Web PKI continues to evolve, and recent developments have made it easier than ever to adopt automation. Modern web server platform providers like Caddy help website operators configure TLS by default, as do many third-party hosting provider organizations. If you depend on software or a service provider that does not support automated certificate issuance and management, share this post and ask the corresponding organization to include support for automation on their future product roadmap. Finally, if you’d like to share with us any challenges, lessons learned, or opportunities for improvement related to certificate automation, let us know at chrome-root-program [at] google [dot] com. Note: the service providers listed in this post should not be considered exhaustive or an endorsement. The references are only intended to be informational. Posted by Chrome Root Program, Chrome Security Team
In celebration of Chrome’s 15th birthday, we’re thrilled to introduce the redesigned Chrome Web Store. With a user-centric focus, we’ve made it easier for you to search and find fun themes and helpful extensions to stay productive at home or at work. Let's go behind the scenes and learn more about this redesign from Chrome Product Manager Hafsah Ismail and UX Designer Crystal Wang. What influenced your decision to redesign the Chrome Web Store? Hafsah: Chrome and the Web have evolved in remarkable ways. We now have extensions that unlock uncharted levels of productivity for developers or harness the power of generative AI to reshape work as we know it. It only felt natural to evolve the store to continue to meet the dynamic needs of users and developers in our ecosystem. Extensions and themes lie at the heart of a personalized Chrome experience, so it was a natural progression to give the store a fresh, contemporary look to align with this transformation. Can you share more details about the design? Crystal: This project was an amazing opportunity to redesign everything from the ground up, and was a collaborative team effort with product, research, writing, and more. Our main goals were to modernize the UI and create a well-lit path for users to find high quality extensions and themes to make the web work better for them. Two key areas of the design I’m particularly proud of are the refreshed look and feel and global navigation and search. Seamless, global navigation and search We updated the navigation and search experience to be seamless, universal and easily accessible, no matter where the user is in their extension discovery journey. New categories based on user needs and lifestyle Extension and theme categories were revamped to be more expansive, relevant, and focused on usefulness and purpose. Modern and expressive look & feel The redesign was an exciting opportunity to modernize the UI with Google’s latest design system, Material 3, allowing for a more modern, consistent and intuitive user experience. We also created brand new illustrations to help users connect with extensions on a more meaningful level, differentiating us from any other extension store on the market. What’s new in the Chrome Web Store for developers? Hafsah: Amplifying our developers is a critical part of our storefront’s redesign. We’re introducing a self-nomination form for developers to showcase their extensions for a spot in our Editor’s Picks collection. We’re eager to highlight extensions that: Have a high-quality listing including visually appealing assets Provide clear value to the user, and add to their Chrome experience Are from a range of developers, big and small! Please feel free to check out our developer post for more information and as a place for feedback from the community. What are some of your favorite recent additions to the store? Hafsah: Instapaper: I'm passionate about tech and cooking, always eager to discover the newest innovations and curate articles and recipes. Instapaper has become an essential extension for me; its power lies in letting me save anything I want to revisit later, a tool you don't realize you need until you do. Noisli: As a product manager who finds herself in energizing meetings, I really value creating the perfect work environment for deep work and reflection. Extensions like Noisli are game-changers, enabling the perfect environment for focused work. 
With Noisli, I can curate the soundtrack to my productivity. Crystal: Todoist for Chrome: I’m someone who loves being organized, and I’ve always been super big on writing physical checklists. Recently, I’ve been very into Todoist to make to-do lists in my Chrome browser, and this productivity extension has become a personal favorite. Asian & Pacific Islander Artist Theme Series: Being an Asian American, I’m also a huge fan and extra proud of the Asian & Pacific Islander Artist Themes series created by our team. I currently have Crested Ibis installed on my browser and I love it! Posted by Joshua Cruz, Communications Manager
For the past several years, more than 90% of Chrome users' navigations have been to HTTPS sites, across all major platforms. Thankfully, that means that most traffic is encrypted and authenticated, and thus safe from network attackers. However, a stubborn 5-10% of traffic has remained on HTTP, allowing attackers to eavesdrop on or change that data. Chrome shows a warning in the address bar when a connection to a site is not secure, but we believe this is insufficient: not only do many people not notice that warning, but by the time someone notices the warning, the damage may already have been done. We believe that the web should be secure by default. HTTPS-First Mode lets Chrome deliver on exactly that promise, by getting explicit permission from you before connecting to a site insecurely. Our goal is to eventually enable this mode for everyone by default. While the web isn't quite ready to universally enable HTTPS-First Mode today, we're announcing several important stepping stones towards that goal. Automatic upgrades Chrome will automatically upgrade all http:// navigations to https://, even when you click on a link that explicitly declares http://. This works very similarly to HSTS upgrading, but Chrome will detect when these upgrades fail (e.g. due to a site providing an invalid certificate or returning an HTTP 404), and will automatically fall back to http:// (a rough conceptual sketch of this upgrade-then-fallback flow follows at the end of this post). This change ensures that Chrome only ever uses insecure HTTP when HTTPS truly isn't available, and not because you clicked on an out-of-date insecure link. We're currently experimenting with this change in Chrome version 115, working to standardize the behavior across the web, and plan to roll out the feature to everyone soon. While this change can't protect against active network attackers, it's a stepping stone towards HTTPS-First Mode for everyone and protects more traffic from passive network eavesdroppers. Warning on insecurely downloaded files Building and expanding on our previous work removing support for mixed downloads, Chrome will start showing a warning before downloading any high-risk files over an insecure connection. Downloaded files can contain malicious code that bypasses Chrome's sandbox and other protections, so a network attacker has a unique opportunity to compromise your computer when insecure downloads happen. This warning aims to inform people of the risk they're taking. You will still be able to download the file if you're comfortable with the risk. Unless HTTPS-First Mode is enabled, Chrome will not show warnings when insecurely downloading files like images, audio, or video, as these file types are relatively safe. We're expecting to roll out these warnings starting in mid-September. Expanding HTTPS-First Mode protections for more people Our ultimate goal is to enable HTTPS-First Mode for everyone. To that end, we're expanding HTTPS-First Mode protections to several new areas: We've enabled HTTPS-First Mode for users enrolled in Google's Advanced Protection Program who are also signed-in to Chrome. These users have asked Google for the strongest protection available, and HTTPS-First Mode helps avoid the very real threats of insecure connections these users face. We're planning to enable HTTPS-First Mode by default in Incognito Mode for a more secure browsing experience soon. We're currently experimenting with automatically enabling HTTPS-First Mode protections on sites that Chrome knows you typically access over HTTPS. 
Finally, we're exploring automatically enabling HTTPS-First Mode for users who only very rarely use HTTP. Try it out If you'd like to try out HTTPS upgrading or warning on insecure downloads before they roll out to everyone, you can do so in Chrome today by enabling the "HTTPS Upgrades" and "Insecure download warnings" flags at chrome://flags. And if you want stronger protections, you can also turn on HTTPS-First Mode by enabling "Always use secure connections" in Chrome security settings (chrome://settings/security)! Information for Developers and Enterprise If you're a developer, you can ensure your users don't see warnings or encounter failed upgrades on your sites by using HTTPS and ensuring that your site doesn't host content only accessible over HTTP. We encourage you to fully adopt HTTPS and redirect all HTTP URLs to their HTTPS equivalents. Even if you believe that your site does not host personal information, using HTTP puts your users at increased risk of network attackers injecting malicious content into their browsers. Malicious network attackers rely on insecure sites to get a foothold towards your users. We're exploring additional ways we can reduce the risk users experience by visiting insecure websites, for instance by reducing the lifetime of cookies accessible over HTTP. Switching to HTTPS ensures that your users' experience will not be impacted by these future changes. If you can't support HTTPS yet, you can ensure that users can access your site by making sure that your server either does not respond to requests on port 443 at all, or uses HTTPS to redirect users back to HTTP. We know that enterprises and education networks have unique needs. These features can be turned on early, customized, or turned off entirely via the HttpsOnlyMode, HttpsUpgradesEnabled, HttpAllowlist, and InsecureContentAllowedForUrls policies. Part of our ongoing commitment Chrome has a long history of working towards a secure-by-default web, and we're not stopping here. We're so close to the finish line, and we're excited to help the web get to HTTPS by default. Post by Joe DeBlasio, Chrome Security team
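As referenced in the "Automatic upgrades" section of the post above, here is a rough conceptual sketch of the upgrade-then-fallback flow. It is illustrative only: Chrome's real logic also handles timeouts, redirect loops, and HSTS state, and this is not Chrome's implementation.

```typescript
// Conceptual sketch: try the https:// version of a plain-http URL first,
// and fall back to http:// only if the secure load fails.
async function upgradedFetch(rawUrl: string): Promise<Response> {
  const url = new URL(rawUrl);
  if (url.protocol === 'http:') {
    const secure = new URL(url);
    secure.protocol = 'https:';
    try {
      const response = await fetch(secure);
      if (response.ok) return response; // upgrade succeeded
    } catch {
      // e.g. no HTTPS listener or an invalid certificate
    }
    // Upgrade failed: fall back to the original insecure URL.
  }
  return fetch(url);
}
```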
Teams across Google are working hard to prepare the web for the migration to quantum-resistant cryptography. Continuing with our strategy for handling this major transition, we are updating technical standards, testing and deploying new quantum-resistant algorithms, and working with the broader ecosystem to help ensure this effort is a success. As a step down this path, Chrome will begin supporting X25519Kyber768 for establishing symmetric secrets in TLS, starting in Chrome 116, and available behind a flag in Chrome 115. This hybrid mechanism combines the output of two cryptographic algorithms to create the session key used to encrypt the bulk of the TLS connection: X25519 – an elliptic curve algorithm widely used for key agreement in TLS today Kyber-768 – a quantum-resistant Key Encapsulation Method, and NIST’s PQC winner for general encryption In order to identify ecosystem incompatibilities with this change, we are rolling this out to Chrome and to Google servers, over both TCP and QUIC, and monitoring for possible compatibility issues. Chrome may also use this updated key agreement when connecting to third-party server operators, such as Cloudflare, as they add support. If you are a developer or administrator experiencing an issue that you believe is caused by this change, please file a bug. The remainder of this post provides important background information to help understand this change as well as the motivations behind it. The Post-Quantum Motivation Modern networking protocols like TLS use cryptography for a variety of purposes, including protecting information (confidentiality) and validating the identity of websites (authentication). The strength of this cryptography is expressed in terms of how hard it would be for an attacker to violate one or more of these properties. There’s a common mantra in cryptography that attacks only get better, not worse, which highlights the importance of moving to stronger algorithms as attacks advance and improve over time. One such advancement is the development of quantum computers, which will be capable of efficiently performing certain computations that are out of reach of existing computing methods. Many types of asymmetric cryptography used today are considered strong against attacks using existing technology but do not protect against attackers with a sufficiently-capable quantum computer. Quantum-resistant cryptography must also be secure against both quantum and classical cryptanalytic techniques. This is not theoretical: in 2022 and 2023, several leading candidates for quantum-resistant cryptographic algorithms have been broken on inexpensive and commercially available hardware. Hybrid mechanisms such as X25519Kyber768 provide the flexibility to deploy and test new quantum-resistant algorithms while ensuring that connections are still protected by an existing secure algorithm. On top of all these considerations, these algorithms must also be performant on commercially available hardware, providing yet another layer of challenge to this already complex problem. Why Protecting Data in Transit is Important Now It’s believed that quantum computers that can break modern classical cryptography won’t arrive for 5, 10, or possibly even 50 years from now, so why is it important to start protecting traffic today? The answer is that certain uses of cryptography are vulnerable to a type of attack called Harvest Now, Decrypt Later, in which data is collected and stored today and later decrypted once cryptanalysis improves. 
In TLS, even though the symmetric encryption algorithms that protect the data in transit are considered safe against quantum cryptanalysis, the way that the symmetric keys are created is not. This means that in Chrome, the sooner we can update TLS to use quantum-resistant session keys, the sooner we can protect user network traffic against future quantum cryptanalysis. Deployment Considerations Using X25519Kyber768 adds over a kilobyte of extra data to the TLS ClientHello message due to the addition of the Kyber-encapsulated key material. Our earlier experiments with CECPQ2 demonstrated that the vast majority of TLS implementations are compatible with this size increase; however, in certain limited cases, TLS middleboxes failed due to improperly hardcoded restrictions on message size. To assist enterprises dealing with network appliance incompatibility while these new algorithms are rolled out, administrators can disable X25519Kyber768 in Chrome using the PostQuantumKeyAgreementEnabled enterprise policy, available starting in Chrome 116. This policy will only be offered as a temporary measure; administrators are strongly encouraged to work with the vendors of the affected products to ensure that bugs causing incompatibilities get fixed as soon as possible. As a final deployment consideration, both the X25519Kyber768 and the Kyber specifications are drafts and may change before they are finalized, which may result in Chrome’s implementation changing as well. Posted by: Devon O'Brien, Technical Program Manager, Chrome security
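As an addendum to the post above: the essence of a hybrid key agreement like X25519Kyber768 is that the session secret is derived from the outputs of both algorithms, so an attacker must break both to recover it. The sketch below is conceptual only; the declared primitives are assumptions standing in for real cryptographic libraries, and the actual TLS key schedule is considerably more involved.

```typescript
// Assumed primitives (not a real library API):
declare function x25519SharedSecret(
  ourPrivateKey: Uint8Array, theirPublicKey: Uint8Array): Uint8Array;
declare function kyber768Decapsulate(
  ourPrivateKey: Uint8Array, ciphertext: Uint8Array): Uint8Array;
declare function hkdfExpand(secret: Uint8Array, info: string): Uint8Array;

function hybridSessionSecret(
  ecdhPriv: Uint8Array, peerEcdhPub: Uint8Array,
  kyberPriv: Uint8Array, kyberCiphertext: Uint8Array,
): Uint8Array {
  const classical = x25519SharedSecret(ecdhPriv, peerEcdhPub);
  const postQuantum = kyber768Decapsulate(kyberPriv, kyberCiphertext);
  // Concatenate both shared secrets and derive the session secret from the
  // combination; breaking only one of the two algorithms is not enough.
  const combined = new Uint8Array([...classical, ...postQuantum]);
  return hkdfExpand(combined, 'hybrid session secret');
}
```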
Big performance wins can be found by taking a step back and tweaking what you already have. Today’s The Fast and the Curious post explores how we improved the scrolling experience of Chrome on Android, ultimately reducing slow scrolling jank by 2x. Read on to see how we discovered and evaluated the problem, and how that has helped us design a better browser experience going forward. When measuring the performance of a browser, one might typically think of page load speed or Web Vitals. On mobile, where touch interactions are common, we also prioritize your interaction with Chrome to ensure it is always smooth and responsive, including on new form factors like foldables. A significant focus of late has been on reducing jank while you scroll. We recently improved the scrolling experience of Chrome on Android by 2x by filtering noise and reducing visual jumps in the content presented on screen. To get this result, we had to take a step back and figure out why Chrome on Android was lagging behind Chrome on iOS. As we compared Chrome across platforms, we were struck by a particular observation: iOS Chrome scrolling was smooth and consistent, whereas on Android, Chrome’s scrolling didn't follow your finger as closely. However, our metrics were telling us that while janks occurred occasionally, they weren’t as common as our perception when comparing with Chrome on iOS. Thus we had ourselves a mystery that needed some investigation. Investigating input to output rate Our metrics flagged that we often received input at an inconsistent rate; but since the input rate was greater than the display’s frame rate, we usually had at least one input event to trigger the production of a frame to display. However, this frame might have consumed fewer or more input events, which could result in inconsistent shifting of content on screen even while scrolling at a fixed speed. This mismatch between input rate and frame rate is a problem that Chrome has had to address before. Internally, we resample input to predict/extrapolate where the finger was at a consistent point relative to the frame we want to produce. This should result in each frame representing a consistent amount of time and should mean smooth scrolling regardless of noise in the input events. The ideal scenario is illustrated in the following diagram, where blue dots are real input events, green are resampled input events, and the displayed scroll deltas would fluctuate if you were to use the real input events rather than resampling. Okay, so we already do resampling, so what's the problem? A tale of woe and reimplementation Input resampling inside of Chrome (and Android) was added back in 2019 as 90Hz devices emerged and the problem above became more apparent (oscillating between 2 vs 1 input events per frame, rather than the consistent 2 input events per frame we usually see on 60Hz devices). Android implemented multiple resampling algorithms (Kalman, linear, etc.) and arrived at the conclusion that linear resampling (drawing a line between two points to figure out velocity and then extrapolating to the given timestamp) was good enough for their use cases. This fixed the problem for most Android apps, but didn't address Chrome. Due to historical reasons and web spec requirements for raw input, Chrome uses unbuffered input, and thus, as devices started to appear with sampling rates that didn’t fit neatly with display frame rates, Chrome had to implement its own version of resampling. 
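The linear resampling approach described above is simple to sketch: fit a line through the two most recent input events and evaluate it at the timestamp the next frame should represent. The types and units here are illustrative, not Chrome's actual code.

```typescript
interface TouchSample {
  t: number; // event timestamp in milliseconds
  y: number; // scroll position in pixels
}

// Extrapolate (or interpolate) the finger position at frameTime from the
// two most recent input events.
function resampleLinear(a: TouchSample, b: TouchSample, frameTime: number): number {
  const velocity = (b.y - a.y) / (b.t - a.t); // px per ms between the samples
  return b.y + velocity * (frameTime - b.t);
}

// Example: events 8 ms apart moving 30 px each; predict 4 ms past the last.
const predicted = resampleLinear({ t: 0, y: 0 }, { t: 8, y: 30 }, 12);
console.log(predicted); // 45 (px)
```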
Below we see that each frame (measuring the time from input to when it is displayed) consumes a different number of input events (2 for the first, 3 for the second, and 1 for the third). If we assume input arrives consistently and each event represents exactly 30 pixels of displacement, then ideally we should smooth this out to 60 pixels per frame, as seen below.

However, while we were investigating the original mystery, we discovered that reality was very different from the ideal situation pictured above. We found that the actual input movement on the screen was quite spiky and inconsistent (more than we expected), and that our predictor was improving things, but not as much as desired.

On the left is real finger displacement on a screen (each point is an input event); on the right is the result of our predictor, the actual content offset after smoothing (each point is a frame).

Frames are presented consistently on the right, but the displacement from one frame to the next isn't consistent (-50 to -40 followed by another -52 being especially drastic). Human fingers don't move this discretely, at frame-level precision; they slide and flex, speeding up or slowing down gradually. So we knew we had a problem here. We dug deeper into Chrome's implementation and found some fundamental differences from Android's implementation (of which it was supposedly a copy):

1. Android uses the native C++ MotionEvent timestamp (with nanosecond precision), but Chrome uses the Java MotionEvent.getEventTime and MotionEvent.getHistoricalEventTime methods (millisecond precision), because nanosecond precision was not part of the public API. Rounding to milliseconds can introduce error when our predictor computes velocity between event timestamps.

2. Android's implementation takes care when selecting the two input events, so that resampling uses the most relevant ones. Chrome, however, uses a simple FIFO queue of input events, which in rare cases on high-refresh-rate devices could end up using future events to predict velocity in the past.

We prototyped Android's resampling in Chrome, but found it still wasn't a perfect fit for Chrome's architecture, resulting in some jank. To improve on it, we experimented with different algorithms, using automation to replay the same input over and over and evaluating the resulting screen displacement curves. After tuning, we landed on the 1€ filter implementation (sketched below), which visibly and drastically improved the scrolling experience. With this filter, the screen tracks your finger closely and websites scroll smoothly, preventing the jank caused by inconsistent input events. The improvement is visible in our manual validation, on both high-end and low-end devices (here's a Redmi 9A video example).

Going forward!

In Android 14, the nanosecond API for Java MotionEvents will be publicly exposed in the SDK, so Chrome (and other apps with unbuffered input) will be able to call it. We also developed new metrics that track the quality of the scroll predictor's frames, by creating a test app that introduced pixel-level differences between frames (and no other form of jank) and running experiments to see what people would notice. These analyses can be read about here, and they will be used going forward for more performance wins and to make this a visible area to track against regressions. In the end, after tuning and enabling the 1€ filter, our metrics show a 2x reduction in visible jank while scrolling slowly!
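For reference, the sketch below follows the published description of the 1€ filter: an adaptive low-pass filter whose cutoff frequency rises with the estimated speed, so slow movements get heavy smoothing (less jitter) and fast movements get light smoothing (less lag). Chrome's tuned implementation and parameter values differ; this is only the textbook form, with illustrative defaults.

```typescript
// Textbook 1€ filter (Casiez et al.), not Chrome's tuned implementation.
class OneEuroFilter {
  private xPrev: number | null = null;
  private dxPrev = 0;

  constructor(
    private minCutoff = 1.0, // Hz: baseline smoothing at low speeds
    private beta = 0.01,     // speed coefficient: trades jitter for lag
    private dCutoff = 1.0,   // Hz: cutoff for the velocity estimate itself
  ) {}

  // Smoothing factor of an exponential low-pass filter for a given cutoff.
  private static alpha(cutoffHz: number, dtSec: number): number {
    const tau = 1 / (2 * Math.PI * cutoffHz);
    return 1 / (1 + tau / dtSec);
  }

  // x: raw position sample; dtSec: time since the previous sample.
  filter(x: number, dtSec: number): number {
    if (this.xPrev === null) {
      this.xPrev = x;
      return x;
    }
    // Estimate velocity, then low-pass filter the estimate.
    const dx = (x - this.xPrev) / dtSec;
    const aD = OneEuroFilter.alpha(this.dCutoff, dtSec);
    this.dxPrev = aD * dx + (1 - aD) * this.dxPrev;
    // Faster movement -> higher cutoff -> less smoothing, so the output
    // lags less when the finger is moving quickly.
    const cutoff = this.minCutoff + this.beta * Math.abs(this.dxPrev);
    const a = OneEuroFilter.alpha(cutoff, dtSec);
    this.xPrev = a * x + (1 - a) * this.xPrev;
    return this.xPrev;
  }
}
```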
This improvement goes live as the default in M116, and is being backported all the way to M110; it brings Chrome on Android on par with Chrome on iOS! The moral of the story: sometimes metrics don't cover every case, and taking a step back to investigate from the top down and get the lay of the land can end in a superior scrolling experience for users.

Posted by: Stephen Nusko, Chrome Software Engineer
With the latest release of Chrome for desktop we are introducing a redesign of the Chrome downloads experience to make it easier for you to interact with your recent downloads. Let's go behind the scenes and learn more about this redesign from Chrome Senior Product Manager Jasika Bawa.

What influenced your decision to redesign Chrome downloads?

Downloads are a core part of day-to-day web browsing, from getting the perfect cat-themed background for your PC to saving a copy of your tax return. Over the years, we have listened to your feedback about the legacy Chrome downloads experience. We learned that while there was a lot about it that worked well for you, like strong support for core download journeys and built-in protection from potentially harmful files, it had its problems too. For example, it:

- Occupied precious pixels at the bottom of the screen, which squeezed the web content area, and was limited by screen width in how many files it could show at once
- Didn't go away automatically, and only offered actions such as pause/resume and open in folder from a fixed overflow menu
- Was no longer modern, interactive, or consistent with the look and feel of other browser UI or the browser ecosystem at large

All this made it clear that there was room for improvement, and an opportunity for us to create a more intuitive downloading experience in Chrome.

What does the redesign have to offer?

The new download tray is available to the right of the Chrome address bar and replaces the legacy downloads experience at the bottom of your screen. When a download is in progress, an animated ring helps you monitor it with a quick glance. The tray opens when you've finished downloading a file and automatically dismisses itself, making it easy to access quickly while allowing you to continue browsing uninterrupted. With the download tray, you can see a list of all your downloads from the past 24 hours in any browser window, not just the one in which you originally downloaded a file. The tray also offers in-line options to open the folder a download is in, cancel a download, retry a download should it fail for any reason, and pause/resume downloads.

How will the new downloads experience help keep people safe online?

We always want to make sure you are safe when downloading files, so Chrome will continue to provide clear warnings about potential malware or viruses to protect your device and accounts. In fact, the additional space and more flexible UI of the new Chrome downloads experience will give us the opportunity to provide even more context when Chrome protects you from a potentially malicious file, and enable us to build advanced deep-scan options that we couldn't before. Be sure to watch the Google Security Blog for more details on these coming soon. In addition to download warnings, anchoring the download tray next to the Chrome address bar helps create a clearer separation of trusted browser UI from web content, which was something we wanted to ensure with the redesign.

How did you think about making the transition to the new downloads experience easier?

It was important to us that all the functionality of the legacy Chrome downloads experience be made available in the new one. For example, you can still drag a downloaded file to another folder, program, or website, and perform actions like "Always open" from the new download tray.
If you want a more detailed view of your downloads, you can still get one by selecting the "Show all downloads" option in the download tray, by clicking "Downloads" from the Chrome three-dot menu, or by typing chrome://downloads in the Chrome address bar. We also took feedback from our early experiments seriously, and used it to make changes like adjusting the frequency with which the download tray opens. You can even choose, in Chrome settings, not to have the tray open automatically at all.

Are there any steps developers need to take?

For developers, we'd like to highlight the opportunity to update any guidance or visuals you may have built to help guide users through their download journey. You may want to consider referencing the Download a file topic in the Google Chrome Help Center as a starting point. For extension developers, it is worth noting changes to the chrome.downloads extension API in case you need to update your extensions: specifically, setShelfEnabled has been replaced by setUiOptions, which lets you show or hide the new downloads experience (see the snippet below).

We hope you'll enjoy this fresh coat of paint! We'll continue to build upon it in future releases to help you stay productive in Chrome while keeping you safe when downloading files from the web.

Posted by Joshua Cruz, Communications Manager
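For extension authors, the migration is small. Below is a minimal sketch of the change in a background script; it assumes the extension already declares the relevant downloads permissions in its manifest (the extensions documentation lists "downloads.ui" for setUiOptions, where setShelfEnabled used "downloads.shelf").

```typescript
// Before (removed API):
//   chrome.downloads.setShelfEnabled(false);

// After: hide Chrome's built-in downloads UI so the extension can present
// its own download surface instead.
chrome.downloads.setUiOptions({ enabled: false });

// Re-enable the default downloads experience when the extension no longer
// needs to own the download UI.
chrome.downloads.setUiOptions({ enabled: true });
```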