Editor’s Note: The Grace Hopper Celebration of Women in Computing is coming up, and Diane Greene and Dr. Fei-Fei Li—two of our senior leaders—are getting ready. Sometimes Diane and Fei-Fei commute to the office together, and this time we happened to be along to capture the ride. Diane took over the music for the commute, and with Aretha Franklin’s “Respect” in the background, she and Fei-Fei chatted about the conference, their careers in tech, motherhood, and amplifying female voices everywhere. Hop in the backseat for Diane and Fei-Fei’s ride to work.
(A quick note for the riders: This conversation has been edited for brevity, and so you don’t have to read Diane and Fei-Fei talking about U-turns.)
Fei-Fei: Are you getting excited for Grace Hopper?
Diane: I’m super excited for the conference. We’re bringing together technical women to surface a lot of things that haven’t been talked about as openly in the past.
Fei-Fei: You’ve had a long career in tech. What makes this point in time different from the early days when you entered this field?
Diane: I got a degree in engineering in 1976 (ed note: Fei-Fei jumped in to remind Diane that this was the year she was born!). Computers were so exciting, and I learned to program. When I went to grad school to study computer science in 1985, there was actually a fair number of women at UC Berkeley. I’d say we had at least 30 percent women, which is way better than today.
It was a new, undefined field. And whenever there’s a new industry or technology, it’s wide open for everyone because nothing’s been established. Tech was that way, so it was quite natural for women to work in artificial intelligence and theory, and even in systems, networking, and hardware architecture. I came from mechanical engineering and the oil industry where I was the only woman. Tech was full of women then, but now women make up less than 15 percent of the field.
Fei-Fei: So do you think it’s too late?
Diane: I don’t think it’s too late. Girls in grade school and high school are coding. And certainly in colleges the focus on engineering is really strong, and the numbers are growing again.
Fei-Fei: You’re giving a talk at Grace Hopper—how will you talk to them about what distinguishes your career?
Diane: It’s wonderful that we’re both giving talks! Growing up, I loved building things so it was natural for me to go into engineering. I want to encourage other women to start with what you’re interested in and what makes you excited. If you love building things, focus on that, and the career success will come. I’ve been so unbelievably lucky in my career, but it’s a proof point that you can end up having quite a good career while doing what you’re interested in.
I want to encourage other women to start with what you’re interested in and what makes you excited. If you love building things, focus on that, and the career success will come. Diane Greene
Fei-Fei: And you are a mother of two grown, beautiful children. How did you prioritize them while balancing career?
Diane: When I was at VMware, I had the “go home for dinner” rule. When we founded the company, I was pregnant and none of the other founders had kids. But we were able to build the culture around families—every time someone had a kid we gave them a VMware diaper bag. Whenever my kids were having a school play or parent-teacher conference, I would make a big show of leaving in the middle of the day so everyone would know they could do that too. And at Google, I encourage both men and women on my team to find that balance.
Fei-Fei: It’s so important for your message to get across because young women today are thinking about their goals and what they want to build for the world, but also for themselves and their families. And there are so many women and people of color doing great work, how do we lift up their work? How do we get their voices heard? This is something I think about all the time, the voice of women and underrepresented communities in AI.
Diane: This is about educating people—not just women—to surface the accomplishments of everybody and make sure there’s no unconscious bias going on. I think Grace Hopper is a phenomenal tool for this, and there are things that I incorporate into my work day to prevent that unconscious bias: pausing to make sure the right people are included in a meeting, and that no one has been overlooked. And encouraging everyone in that meeting to participate so that all voices are heard.
Fei-Fei: Grace Hopper could be a great platform to share best practices for how to address these issues.
...young women today are thinking about their goals and what they want to build for the world, but also for themselves and their families. Dr. Fei-Fei Li
Diane: Every company is struggling to address diversity and there’s a school of thought that says having three or more people from one minority group makes all the difference in the world—I see it on boards. Whenever we have three or more women, the whole dynamic changes. Do you see that in your research group at all?
Fei-Fei: Yes, for a long time I was the only woman faculty member in the Stanford AI lab, but now it has attracted a lot of women who do very well because there’s a community. And that’s wonderful for me, and for the group.
Now back to you … you’ve had such a successful career, and I think a lot of women would love to know what keeps you going every day.
Diane: When you wake up in the morning, be excited about what’s ahead for the day. And if you’re not excited, ask yourself if it’s time for a change. Right now the Cloud is at the center of massive change in our world, and I’m lucky to have a front row seat to how it’s happening and what’s possible with it. We’re creating the next generation of technologies that are going to help people do things that we didn’t even know were possible, particularly in the AI/ML area. It’s exciting to be in the middle of the transformation of our world and the fast pace at which it’s happening.
Fei-Fei: Coming to Google Cloud, the most rewarding part is seeing how this is helping people go through that transformation and making a difference. And it’s at such a scale that it’s unthinkable on almost any other platform.
Diane: Cloud is making it easier for companies to work together and for people to work across boundaries together, and I love that. I’ve always found when you can collaborate across more boundaries you can get a lot more done.
We all get sidetracked at work. We intend to be as efficient as possible, but inevitably, the “busyness” of business gets in the way: back-to-back meetings, unfinished docs and a rowdy inbox. To be more efficient, you need quick access to your information, like relevant docs, important tasks and context for your meetings.
Sadly, according to a report by McKinsey, workers spend up to 20 percent of their time—an entire day each week—searching for and consolidating information across a number of tools. We made Google Cloud Search available to Enterprise and Business edition customers earlier this year so that teams can access important information quicker. Here are a few ways that Cloud Search can help you get the information you need to accomplish more throughout your day.
1. Search more intuitively, access information quicker
If you search for a doc, you’re probably not going to remember its exact name or where you saved it in Drive. Instead, you might remember who sent the doc to you or a specific piece of information it contains, like a statistic.
A few weeks ago, we launched a new, more intuitive way to search in Cloud Search using natural language processing (NLP) technology. Type questions in Cloud Search using everyday language, like “Documents shared with me by John?,” “What’s my agenda next Tuesday?,” or “What docs need my attention?” and it will track down useful information for you.
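Under the hood, this kind of query understanding maps everyday phrasing onto structured search filters. Here’s a deliberately tiny sketch of the idea in Python—the real Cloud Search uses trained NLP models rather than regexes, and the filter names (`owner`, `type`, `when`) are invented for illustration:

```python
import re

def parse_query(query):
    """Map a natural-language query to hypothetical structured filters.

    A toy stand-in for the query understanding in Cloud Search; the
    production service uses trained NLP models, not regexes.
    """
    filters = {}
    q = query.lower().rstrip("?")
    owner = re.search(r"shared (?:with me )?by (\w+)", q)
    if owner:
        filters["owner"] = owner.group(1)
    if "agenda" in q:
        filters["type"] = "calendar_event"
    elif "doc" in q:
        filters["type"] = "document"
    when = re.search(r"next (\w+day)", q)
    if when:
        filters["when"] = when.group(1)
    return filters

print(parse_query("Documents shared with me by John?"))
# → {'owner': 'john', 'type': 'document'}
```

Each recognized fragment narrows the search, which is why a question like “What’s my agenda next Tuesday?” can resolve to a calendar lookup rather than a document search.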
2. Prioritize your to-dos, use spare time more wisely
With so much work to do, deciding what to focus on and what to leave for later isn’t always simple. A study by McKinsey reports that only nine percent of executives surveyed feel “very satisfied” with the way they allocate their time. We think technology, like Cloud Search, should help you with more than just finding what you’re looking for—it should help you stay focused on what’s important.
Imagine if your next meeting gets cancelled and you suddenly have an extra half hour to accomplish tasks. You can open the Cloud Search app to help you focus on what’s important. Powered by machine intelligence, Cloud Search proactively surfaces information that it believes is relevant to you and organizes it into simple cards that appear in the app throughout your workday. For example, it suggests documents or tasks based on which documents need your attention or upcoming meetings you have in Google Calendar.
3. Prepare for meetings, get more out of them
Employees spend a lot of time in meetings. According to a UK study by the Centre for Economics and Business Research, office workers spend an average of four hours per week in meetings—and many of us join those meetings unprepared. The same survey found that nearly half (47 percent) of the time spent in meetings is unproductive.
Thankfully, Cloud Search can help. It uses machine intelligence to organize and present information to set you up for success in a meeting. In addition to surfacing relevant docs, Cloud Search also surfaces information about meeting attendees from your corporate directory, and even includes links to relevant conversations from Gmail.
Start by going into Cloud Search to see info related to your next meeting. If you’re interested in looking at another meeting later in the day, just click on “Today’s meetings” and it will show you your agenda for the day. Next, select an event in your agenda (sourced from your Calendar) and Cloud Search will recommend information that’s relevant to that meeting.
Take back your time and focus on what’s important—open the Cloud Search app and get started today, or ask your IT administrator to enable it in your domain. You can also learn more about how Cloud Search can help your teams here.
As the publishing world continues to face new challenges amidst the shift to digital, news media and publishers are tasked with unlocking new opportunities. With online news consumption continuing to grow, it’s crucial that publishers take advantage of new technologies to sustain and grow their business. Machine learning yields tremendous value for media companies and can help them tackle their hardest problems: engaging readers, increasing profits, and making newsrooms more efficient. Google has a suite of machine learning tools and services that are easy to use—here are a few ways they can help newsrooms and reporters do their jobs.
1. Improve your newsroom's efficiency
Editors want to make their stories appealing and stand out so that people will read them. So finding just the right photograph or video can be key in bringing a story to life. But with ever-pressing deadlines, there’s often not enough time to find that perfect image. This is where Google Cloud Vision and Video Intelligence can simplify the process by tagging images and videos based on their actual content. This metadata can then be used to make it easier and quicker to find the right visual.
2. Better understand your audience
News publishers use analytics tools to grow their audiences and understand what those audiences are reading and how they discover content. Google Cloud Natural Language uses machine learning to understand what your content is about, independent of a website’s section and subsection structure (e.g., Sports or Local). Today, we announced a new content classifier and entity sentiment analysis in Cloud Natural Language that dig into the detail of what a story is actually about. For example, an article about a high-tech stadium for the Golden State Warriors may be classified under the “technology” section of a paper, when its content should fall under both “technology” and “sports.” This section-independent tagging can increase readership by driving smarter article recommendations, and it provides better data around trending topics. Naveed Ahmad, Senior Director of Data at Hearst, has emphasized that precision and speed are critical to engaging readers: “Google Cloud Natural Language is unmatched in its accuracy for content classification. At Hearst, we publish several thousand articles a day across 30+ properties and, with natural language processing, we're able to quickly gain insight into what content is being published and how it resonates with our audiences."
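The key property of section-independent tagging is that one article can carry several labels at once. As a rough illustration of that multi-label behavior—and nothing like the trained models behind Cloud Natural Language—here’s a keyword-scoring sketch in Python; the taxonomy and the two-hit threshold are invented for the example:

```python
def classify_content(text, taxonomy):
    """Score text against keyword sets for each category and return
    every category above a threshold, so one article can carry several
    labels (e.g. both "technology" and "sports").

    A toy stand-in for Cloud Natural Language's content classifier,
    which uses trained models rather than keyword matching.
    """
    words = text.lower().split()
    labels = []
    for category, keywords in taxonomy.items():
        hits = sum(1 for w in words if w in keywords)
        if hits >= 2:  # require at least two keyword hits per category
            labels.append(category)
    return sorted(labels)

taxonomy = {
    "technology": {"high-tech", "sensors", "wifi", "app"},
    "sports": {"stadium", "warriors", "basketball", "arena"},
}
article = ("the warriors new stadium pairs basketball with "
           "high-tech sensors and a wifi app for every seat")
print(classify_content(article, taxonomy))  # → ['sports', 'technology']
```

Note that the article gets both labels, regardless of which section of the site it was published under—that’s the behavior that enables smarter recommendations.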
3. Engage with new audiences
As publications expand their reach into more countries, they have to write for multiple audiences in different languages, and many cannot afford multi-language desks. Google Cloud Translation makes translating for different audiences easier by providing a simple interface to translate content into more than 100 languages. Vice launched GoogleFish earlier this year to help editors quickly translate existing Vice articles into the language of their market. Once text is auto-translated, an editor can push the translation to a local editor to ensure tone and local slang are accurate. Early translation results are very positive, and Vice is also uncovering new insights around global content sharing that it could not previously identify.
DB Corp, India’s largest newspaper group, publishes 62 editions in four languages and sells about 6 million newspaper copies per day. To serve its growing and diverse readership, reporters use Google Cloud Translation to capture and document interviews and source material for articles, with accuracy rates of 95 percent for Hindi alone.
4. Monetize your audience
So far we’ve primarily outlined ways to improve content creation and engagement with readers; however, monetization is a critical piece for all publishers. Using Cloud Datalab, publishers can identify new subscription opportunities and offerings. The metadata collected from image, video, and content tagging creates an invaluable dataset for advertisers, such as audiences interested in local events or personal finance, or those who watch videos about cars or travel. The Washington Post has seen success with its in-house solution through the ability to target native ads to likely interested readers. Lastly, improved content recommendation drives consumption, ultimately improving the bottom line.
5. Experiment with new formats
The ability to share news quickly and efficiently is a major concern for newsrooms across the world. But today more than ever, readers consume news in different ways across different platforms, and the “one format fits all” method is not always best. TensorFlow’s “textsum” summarization model can help publishers quickly experiment with creating short-form content from longer stories. This helps them quickly test the best way to share their content across different platforms. Reddit recently launched a similar “tl;dr bot” that summarizes long posts into digestible snippets.
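At its simplest, a “tl;dr” works by scoring sentences and keeping the highest-value ones. The frequency-based extractive sketch below shows that idea in a few lines of Python; production summarizers use trained neural sequence-to-sequence models, so treat this purely as an illustration of the concept:

```python
import re
from collections import Counter

def tldr(text, n_sentences=1):
    """Keep the n highest-scoring sentences as an extractive summary.

    A sentence's score is the sum of document-wide word frequencies of
    its words, so sentences about the article's dominant topic win.
    """
    words = re.findall(r"[a-z']+", text.lower())
    freqs = Counter(words)
    sentences = [s.strip() for s in text.split(".") if s.strip()]

    def score(s):
        return sum(freqs[w] for w in re.findall(r"[a-z']+", s.lower()))

    chosen = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    # Re-emit the chosen sentences in their original order.
    return ". ".join(s for s in sentences if s in chosen) + "."

demo = ("The city council approved the bridge. "
        "The bridge plan drew crowds to the council. Rain fell.")
print(tldr(demo))  # → The bridge plan drew crowds to the council.
```

The middle sentence wins because it shares the most vocabulary with the rest of the story—a crude but surprisingly effective proxy for relevance.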
6. Keep your content safe for everyone
The comments section can be a place of both fruitful discussion as well as toxicity. Users who comment are frequently the most highly engaged on the site overall, and while publishers want to keep sharing open, it can frequently spiral out of control into offensive speech and bad language. Jigsaw’s Perspective is an API that uses machine learning to spot harmful comments which can be flagged for moderators. Publishers like the New York Times have leveraged Perspective's technology to improve the way all readers engage with comments. By making the task of moderating conversations at scale easier, this frees up valuable time for editors and improves online discussion.
Example of the New York Times’ moderator dashboard. Each dot represents a negative comment
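The workflow a tool like Perspective enables is simple to sketch: score each comment, auto-approve the clearly fine ones, and route the rest to a human. The Python below is a hypothetical illustration of that triage loop—`toy_score` is an invented stand-in lexicon scorer, while Perspective itself is an HTTP API that returns a toxicity probability per comment:

```python
def triage_comments(comments, score_fn, flag_above=0.8):
    """Split comments into auto-approved and flagged-for-review piles.

    score_fn plays the role of a toxicity model, returning a
    probability of toxicity in [0, 1]; this sketch shows only the
    surrounding moderation workflow, not the model itself.
    """
    approved, flagged = [], []
    for c in comments:
        (flagged if score_fn(c) >= flag_above else approved).append(c)
    return approved, flagged

# Toy scorer standing in for the model.
toy_score = lambda c: 0.9 if "idiot" in c.lower() else 0.1

approved, flagged = triage_comments(
    ["Great reporting!", "You idiot."], toy_score)
print(approved)  # → ['Great reporting!']
print(flagged)   # → ['You idiot.']
```

The value for moderators comes from the split itself: they spend their time only on the flagged pile, which is exactly the time-saving the dashboard above is built around.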
From the printing press to machine learning, technology continues to spur new opportunities for publishers to reach more people, create engaging content and operate efficiently. We're only beginning to scratch the surface of what machine learning can do for publishers. Keep tabs on The Keyword for the latest developments.
Earlier this year, we launched Google Cloud Search, a new G Suite tool that uses machine learning to help organizations find and access information quickly.
Just like in Google Search, which lets you search queries in a natural, intuitive way, we want to make it easy for you to find information in the workplace using everyday language. According to Gartner research, by 2018, 30 percent or more of enterprise search queries will start with a “what,” “who,” “how” or “when.”*
Today, we’re making it possible to use natural language processing (NLP) technology in Cloud Search so you can track down information—like documents, presentations or meeting details—fast.
Find information fast with Cloud Search
If you’re looking for a Google Doc, you’re more likely to remember who shared it with you than the exact name of the file. Now, NLP technology gives you an intuitive way to search and find information quickly in Cloud Search.
Type queries into Cloud Search using natural, everyday language. Ask questions like “Docs shared by Mary,” “Who’s Bob’s manager?” or “What docs need my attention?” and Cloud Search will show you answer cards with relevant information.
Having access to information quicker can help you make better and faster decisions in the workplace. If your organization runs on G Suite Business or Enterprise edition, start using Cloud Search now. If you’re new to Cloud Search, learn more on our website or check out this video to see it in action.
*Gartner, ‘Insight Engines’ Will Power Enterprise Search That is Natural, Total and Proactive, 09 December 2015, refreshed 05 April 2017
A few months back, we announced a new way for you to analyze data in Google Sheets using machine learning. Instead of relying on lengthy formulas to crunch your numbers, now you can use Explore in Sheets to ask questions and quickly gather insights. Check it out.
Quicker data → problems solved
When you have easier access to data—and can figure out what it means quickly—you can solve problems for your business faster. You might use Explore in Sheets to analyze profit from last year, or look for trends in how your customers sign up for your company’s services. Explore in Sheets can help you track down this information, and more importantly, visualize it.
Getting started is easy. Just click the “Explore” button on the bottom right corner of your screen in Sheets. Type in a question about your data in the search box and Explore responds to your query. Here’s an example of how Sheets can build charts for you.
Syncing Sheets with BigQuery for deeper insights
For those of you who want to take data analysis one step further, you can sync Sheets with BigQuery—Google Cloud’s low cost data warehouse for analytics.
We all waste time at work, whether it’s on purpose (brushing up on Wonder Woman's history) or by accident (really should have budgeted more time for internal reviews). Luckily, G Suite can help you accomplish more at work, quicker. Here are four tell-tale signs you’re spending time on the wrong things, and tips on how to avoid these time-sinks.
1. You’ve spent more time emailing co-workers than you have actually working
The average worker spends an estimated 13 hours per week writing emails—nearly two full work days. Luckily, you can cut back on time spent replying to emails with Smart Reply in Gmail. Smart Reply uses machine learning to generate quick, natural language responses for you.
2. You’ve spent the past hour formatting slides for a presentation
Is an image centered? Should you use “Times New Roman” or “Calibri?” Formatting presentations monopolizes too much of our time and takes away from what’s really valuable: sharing insights.
But you can save time polishing your presentations by using Explore in Slides, powered by machine learning. Explore generates design suggestions for your presentation so you don’t have to worry about cropping, resizing or reformatting. You can also use Explore in Docs, which makes it easy to research right within your documents. Explore will recommend related topics to help you learn more or even suggest photos or more content you can add to your document. Check out how to use Explore in Slides and Docs in this episode of the G Suite Show:
Explore feature for Docs and Slides | The G Suite Show
3. You can’t find a file you know you saved in your drive
Where is that pesky file? According to a McKinsey report, employees spend almost two hours every day searching for and gathering information. That’s a lot of time.
Curb time wasted with Quick Access in Drive, which uses machine intelligence to predict and suggest files you need when you need them. Natural Language Processing (NLP) also makes it possible for you to search the way you speak. Say you’re trying to find an important file from 2016. Simply search “spreadsheets I created in 2016” and voilà!
Another way to avoid losing files is by using Team Drives, a central location in Drive that houses shared files. In Team Drives, all team members can access files (or manage individual share permissions), so you don’t have to worry about tracking down a file after someone leaves or granting access to every doc that you create.
4. You’ve fussed with a spreadsheet formula over and over again
According to internal Google data, less than 30 percent of enterprise users feel comfortable manipulating formulas within spreadsheets. “=SUM(A1, B1)” or “=SUM(1, 2)” is easy, but more sophisticated calculations can be challenging.
Bypass remembering formulas and time-consuming analysis and dive straight into finding insights with Explore in Sheets, which uses machine learning to crunch numbers for you. Type in questions (in words, not formulas) in Explore in Sheets on the web to learn more about your data instantly. And now, you can use the same powerful technology to create charts for you within Sheets. Instead of manually building graphs, ask Explore to do it for you by typing the request in words.
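Conceptually, Explore turns a typed question into the aggregation you would otherwise write as a formula. Here’s a toy sketch of that mapping in Python—the real feature relies on trained models, and the two question patterns handled here (“total” and “average”) are invented for the example:

```python
def answer(question, rows):
    """Answer a tiny class of 'total/average of <column>' questions
    over tabular data; a toy stand-in for the models behind Explore
    in Sheets."""
    q = question.lower()
    for col in rows[0]:           # look for a column name in the question
        if col.lower() in q:
            values = [r[col] for r in rows]
            if "average" in q:
                return sum(values) / len(values)
            if "total" in q or "sum" in q:
                return sum(values)
    return None                   # question not understood

rows = [{"profit": 120}, {"profit": 80}, {"profit": 100}]
print(answer("What is the total profit?", rows))    # → 300
print(answer("What is the average profit?", rows))  # → 100.0
```

Instead of recalling `=AVERAGE(...)` syntax, the user just names the column and the operation in plain words—the same interaction Explore offers in Sheets.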
Stop wasting time on menial tasks and focus more on important, strategic work. To learn more about other G Suite apps that can help you save time, visit https://gsuite.google.com/.
Chinese Go Grandmaster and world number one Ke Jie departed from his typical style of play and opened with a “3:3 point” strategy—a highly unusual approach aimed at quickly claiming corner territory at the start of the game. The placement is rare amongst Go players, but it’s a favoured position of our program AlphaGo. Ke Jie was playing AlphaGo at its own game.
Ke Jie’s thoughtful positioning of that single black stone was a fitting motif for the opening match of The Future of Go Summit in Wuzhen, China, an event dedicated to exploring the truth of this beautiful and ancient game. Over the last five days we have been honoured to witness games of the highest calibre.
Ke Jie has a laugh after game two against AlphaGo on May 25, 2017 (Photo credit: Google)
We have always believed in the potential for AI to help society discover new knowledge and benefit from it, and AlphaGo has given us an early glimpse that this may indeed be possible. More than a competitor, AlphaGo has been a tool to inspire Go players to try new strategies and uncover new ideas in this 3,000 year-old game.
The 9 dan player team of (left to right): Shi Yue, Mi Yuting, Tang Weixing, Chen Yaoye, and Zhou Ruiyang strategize their next move during the Team Go game against AlphaGo on May 26, 2017 (Photo credit: Google)
The creative moves it played against the legendary Lee Sedol in Seoul in 2016 brought completely new knowledge to the Go world, while the unofficial online games it played under the moniker Magister (Master) earlier this year have influenced many of Go’s leading professionals—including the genius Ke Jie himself. Events like this week’s Pair Go, in which two of the world’s top players partnered with AlphaGo, showed the great potential for people to use AI systems to generate new insights in complex fields.
This week’s series of thrilling games with the world’s best players, in the country where Go originated, has been the highest possible pinnacle for AlphaGo as a competitive program. For that reason, the Future of Go Summit is our final match event with AlphaGo.
The research team behind AlphaGo will now throw their energy into the next set of grand challenges, developing advanced general algorithms that could one day help scientists as they tackle some of our most complex problems, such as finding new cures for diseases, dramatically reducing energy consumption, or inventing revolutionary new materials. If AI systems prove they are able to unearth significant new knowledge and strategies in these domains too, the breakthroughs could be truly remarkable. We can’t wait to see what comes next.
While AlphaGo is stepping back from competitive play, it’s certainly not the end of our work with the Go community, to which we owe a huge debt of gratitude for their encouragement and motivation over the past few years. We plan to publish one final academic paper later this year that will detail the extensive set of improvements we made to the algorithms’ efficiency and potential to be generalised across a broader set of problems. Just like our first AlphaGo paper, we hope that other developers will pick up the baton, and use these new advances to build their own set of strong Go programs.
We’re also working on a teaching tool—one of the top requests we’ve received throughout this week. The tool will show AlphaGo’s analysis of Go positions, providing an insight into how the program thinks, and hopefully giving all players and fans the opportunity to see the game through the lens of AlphaGo. We’re particularly honoured that our first collaborator in this effort will be the great Ke Jie, who has agreed to work with us on a study of his match with AlphaGo. We’re excited to hear his insights into these amazing games, and to have the chance to share some of AlphaGo’s own analysis too.
Finally, to mark the end of the Future of Go Summit, we wanted to give a special gift to fans of Go around the world. Since our match with Lee Sedol, AlphaGo has become its own teacher, playing millions of high level training games against itself to continually improve. We’re now publishing a special set of 50 AlphaGo vs AlphaGo games, played at full length time controls, which we believe contain many new and interesting ideas and strategies.
We took the opportunity this week in Wuzhen to show some of these games to a handful of top professionals. Shi Yue, 9 Dan Professional and World Champion, said the games were “Like nothing I’ve ever seen before—they’re how I imagine games from far in the future.” Gu Li, 9 Dan Professional and World Champion, said that “AlphaGo’s self-play games are incredible—we can learn many things from them.” We hope that all Go players will now enjoy trying out some of the moves in the set. The first ten games are now available here, and we’ll publish another ten each day until all 50 have been released.
We have been humbled by the Go community’s reaction to AlphaGo, and the way professional and amateur players have embraced its insights about this ancient game. We plan to bring that same excitement and insight to a range of new fields, and try to address some of the most important and urgent scientific challenges of our time. We hope that the story of AlphaGo is just the beginning.
Demis and Ke Jie embrace after the award ceremony on the final day, May 27, 2017 (Photo credit: Google)
Ke Jie opening
Ke Jie makes his second move, part of his 3:3 opening in the final match of the series on May 27, 2017 (Photo credit: Google)
8 dan player Lian Xian and 9 dan player Gu Li pair up with AlphaGo teammates in the Pair Go match on May 26, 2017 (Photo credit: Google)
Go players on stage
DeepMind's Dave Silver and Fan Hui (far left) together on stage with (left to right) Gu Li, Lian Xian, Tang Weixing, Shi Yue, Zhou Ruiyang, Chen Yaoye, and Mi Yuting after the Team and Pair Go games on May 26, 2017 (Photo credit: Google)
Ke Jie and Lian Xian
China's 8 dan and Pair Go player Lian Xian shares a laugh with Ke Jie as they watch the Team Go match on May 26, 2017 (Photo credit: Google)
Ke Jie speaking with press
Ke Jie on stage after game #1 on May 23, 2017 (Photo credit: Google)
San Francisco — What a week! Google Cloud Next ‘17 has come to an end, but really, it’s just the beginning. We welcomed 10,000+ attendees, including customers, partners, developers, IT leaders, engineers, press, analysts, and cloud enthusiasts (and skeptics). Together we engaged in three days of keynotes, 200+ sessions, and four invitation-only summits. It’s hard to believe this was our first show as all of Google Cloud, spanning GCP, G Suite, Chrome, Maps and Education. Thank you to all who were here with us in San Francisco this week, and we hope to see you next year.
If you’re a fan of video highlights, we’ve got you covered. Check out our Day 1 keynote (in less than 4 minutes) and Day 2 keynote (in under 5!).
One of the common refrains from customers and partners throughout the conference was “Wow, you’ve been busy. I can’t believe how many announcements you’ve had at Next!” So we decided to count all the announcements from across Google Cloud and in fact we had 100 (!) announcements this week.
For the list lovers amongst you, we’ve compiled a handy-dandy run-down of our announcements from the past few days:
We’re excited to welcome two new acquisitions to the Google Cloud family this week: Kaggle and AppBridge.
1. Kaggle - Kaggle is one of the world's largest communities of data scientists and machine learning enthusiasts. Kaggle and Google Cloud will continue to support machine learning training and deployment services in addition to offering the community the ability to store and query large datasets.
2. AppBridge - Google Cloud acquired Vancouver-based AppBridge this week, which helps you migrate data from on-prem file servers into G Suite and Google Drive.
Google Cloud brings a suite of new security features to Google Cloud Platform and G Suite designed to help safeguard your company’s assets and prevent disruption to your business:
3. Identity-Aware Proxy (IAP) for Google Cloud Platform (Beta) - Identity-Aware Proxy lets you provide access to applications based on risk, rather than using a VPN. It provides secure application access from anywhere, restricts access by user, identity and group, deploys with integrated phishing-resistant Security Keys and is easier to set up than an end-user VPN.
4. Data Loss Prevention (DLP) for Google Cloud Platform (Beta) - Data Loss Prevention API lets you scan data for 40+ sensitive data types, and is used as part of DLP in Gmail and Drive. You can find and redact sensitive data stored in GCP, invigorate old applications with new sensitive data sensing “smarts” and use predefined detectors as well as customize your own.
7. Vault for Google Drive (GA) - Google Vault is the eDiscovery and archiving solution for G Suite. Vault enables admins to easily manage their G Suite data lifecycle and search, preview and export the G Suite data in their domain. Vault for Drive enables full support for Google Drive content, including Team Drive files.
8. Google-designed security chip, Titan - Google uses Titan to establish a hardware root of trust, allowing us to securely identify and authenticate legitimate access at the hardware level. Titan includes a hardware random number generator, performs cryptographic operations in isolated memory, and has a dedicated on-chip secure processor.
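The Data Loss Prevention API (#4 above) scans text for sensitive data types and can redact them. To make the scan-and-redact idea concrete, here’s a toy Python sketch with two regex “detectors”; the real DLP API ships 40+ trained infoType detectors (the names `EMAIL_ADDRESS` and `US_SSN` simply mirror its naming style) and goes far beyond pattern matching:

```python
import re

# Toy detectors for two common sensitive-data types, keyed by an
# infoType-style name used as the redaction placeholder.
DETECTORS = {
    "EMAIL_ADDRESS": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace each detected sensitive value with its infoType name,
    mimicking the shape of a DLP redaction pass."""
    for info_type, pattern in DETECTORS.items():
        text = pattern.sub(f"[{info_type}]", text)
    return text

print(redact("Mail jane.doe@example.com, SSN 123-45-6789"))
# → Mail [EMAIL_ADDRESS], SSN [US_SSN]
```

Running every detector over the text and substituting a typed placeholder is the same basic contract the API offers, which is why the output stays useful for analytics while the sensitive values are gone.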
New GCP data analytics products and services help organizations solve business problems with data, rather than spending time and resources building, integrating and managing the underlying infrastructure:
9. BigQuery Data Transfer Service (Private Beta) - BigQuery Data Transfer Service makes it easy for users to quickly get value from all their Google-managed advertising datasets. With just a few clicks, marketing analysts can schedule data imports from Google AdWords, DoubleClick Campaign Manager, DoubleClick for Publishers and YouTube Content and Channel Owner reports.
10. Cloud Dataprep (Private Beta) - Cloud Dataprep is a new managed data service, built in collaboration with Trifacta, that makes it faster and easier for BigQuery end-users to visually explore and prepare data for analysis without the need for dedicated data engineer resources.
11. New Commercial Datasets - Businesses often look for datasets (public or commercial) outside their organizational boundaries. Commercial datasets offered include financial market data from Xignite, residential real-estate valuations (historical and projected) from HouseCanary, predictions for when a house will go on sale from Remine, historical weather data from AccuWeather, and news archives from Dow Jones, all immediately ready for use in BigQuery (with more to come as new partners join the program).
12. Python for Google Cloud Dataflow in GA - Cloud Dataflow is a fully managed data processing service supporting both batch and stream execution of pipelines. Until recently, these benefits have been available solely to Java developers. Now there’s a Python SDK for Cloud Dataflow in GA.
14. Google Cloud Datalab in GA - This interactive data science workflow tool makes it easy to do iterative model and data analysis in a Jupyter notebook-based environment using standard SQL, Python and shell commands.
15. Cloud Dataproc updates - Our fully managed service for running Apache Spark, Flink and Hadoop pipelines now supports restarting failed jobs (including automatic restart as needed) in beta, single-node clusters for lightweight sandbox development (also in beta), and GPUs; the cloud labels feature, for more flexibility managing your Dataproc resources, is now GA.
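The programming model behind Dataflow — a pipeline of transforms over a collection — can be sketched in plain Python. This is only an illustration of the model, not the Beam SDK itself; real pipelines are written against apache_beam and executed by the Dataflow service in batch or streaming mode.

```python
# A miniature batch pipeline: read -> flatten -> filter -> aggregate.
# Real Dataflow pipelines express each step as a Beam PTransform.
lines = ["cloud dataflow", "cloud dataprep", "bigquery"]

words = (word for line in lines for word in line.split())  # flatten
kept = (w for w in words if w != "cloud")                  # filter

counts = {}
for w in kept:                                             # aggregate (word count)
    counts[w] = counts.get(w, 0) + 1

print(counts)
```

Word count is the canonical example because it exercises the same element-wise and grouping primitives that larger pipelines compose.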
New GCP databases and database features round out a platform on which developers can build great applications across a spectrum of use cases:
16. Cloud SQL for PostgreSQL (Beta) - Cloud SQL for PostgreSQL implements the same design principles currently reflected in Cloud SQL for MySQL, namely, the ability to securely store and connect to your relational data via open standards.
18. Cloud SQL for MySQL improvements - Increased performance for demanding workloads via 32-core instances with up to 208GB of RAM, and central management of resources via Identity and Access Management (IAM) controls.
19. Cloud Spanner - Launched a month ago, but still, it would be remiss not to mention it because, hello, it’s Cloud Spanner! The industry’s first horizontally scalable, globally consistent, relational database service.
21. Federated query on Cloud Bigtable - We’ve extended BigQuery’s reach to query data inside Cloud Bigtable, the NoSQL database service for massive analytic or operational workloads that require low latency and high throughput (particularly common in Financial Services and IoT use cases).
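Federated queries read Bigtable's native layout, so its data model is worth a quick sketch: rows sorted lexicographically by key, each holding a sparse set of columns, which makes row-key prefix scans the natural access pattern. A local stand-in, assuming nothing about the client libraries:

```python
# Minimal stand-in for Bigtable's sparse, key-sorted row/column model.
# Row keys here encode entity and date, a common IoT keying scheme.
table = {
    "sensor#001#20170309": {"temp": 21.5, "humidity": 0.40},
    "sensor#001#20170310": {"temp": 22.1},
    "sensor#002#20170309": {"temp": 19.8},
}

def prefix_scan(table, prefix):
    """Return (key, columns) pairs whose key starts with prefix, in key order.
    In the real service this is efficient because rows are stored sorted."""
    return [(k, table[k]) for k in sorted(table) if k.startswith(prefix)]

print(prefix_scan(table, "sensor#001#"))
```

Designing the row key around the dominant query (here, "all readings for one sensor") is the central schema decision in Bigtable, and it carries over to what a federated BigQuery query can scan cheaply.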
New GCP Cloud Machine Learning services bolster our efforts to make machine learning accessible to organizations of all sizes and sophistication:
22. Cloud Machine Learning Engine (GA) - Cloud ML Engine, now generally available, is for organizations that want to train and deploy their own models into production in the cloud.
23. Cloud Video Intelligence API (Private Beta) - A first of its kind, Cloud Video Intelligence API lets developers easily search and discover video content by providing information about entities (nouns such as “dog,” “flower,” or “human,” or verbs such as “run,” “swim,” or “fly”) inside video content.
24. Cloud Vision API (GA) - Cloud Vision API reaches GA and offers new capabilities for enterprises and partners to classify a more diverse set of images. The API can now recognize millions of entities from Google’s Knowledge Graph and offers enhanced OCR capabilities that can extract text from scans of text-heavy documents such as legal contracts, research papers or books.
26. Cloud Jobs API - A powerful aid to job search and discovery, Cloud Jobs API now has new features such as Commute Search, which will return relevant jobs based on desired commute time and preferred mode of transportation.
27. Machine Learning Startup Competition - We announced a Machine Learning Startup Competition in collaboration with venture capital firms Data Collective and Emergence Capital, and with additional support from a16z, Greylock Partners, GV, Kleiner Perkins Caufield & Byers and Sequoia Capital.
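The train-then-serve workflow that Cloud ML Engine manages can be shown in miniature. The toy nearest-centroid model below is purely illustrative — the service itself trains and serves TensorFlow models at scale — but the two phases map directly: `train` stands in for a training job, `predict` for an online prediction endpoint.

```python
# Toy "training job": compute one centroid per class from labeled points.
def train(examples):
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            s[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in s] for label, s in sums.items()}

# Toy "prediction service": return the class whose centroid is closest.
def predict(model, features):
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: sq_dist(model[label]))

model = train([([0, 0], "a"), ([1, 1], "a"), ([9, 9], "b"), ([10, 10], "b")])
print(predict(model, [8, 8]))  # nearest centroid is class "b"
```

The value of a managed service is that the "model" artifact produced by training is versioned and served behind an API without you running the infrastructure in between.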
New GCP pricing continues our intention to create customer-friendly pricing that’s as smart as our products; and support services that are geared towards meeting our customers where they are:
28. Compute Engine price cuts - Continuing our history of pricing leadership, we’ve cut Google Compute Engine prices by up to 8%.
29. Committed Use Discounts - With Committed Use Discounts, customers can receive a discount of up to 57% off our list price, in exchange for a one or three year purchase commitment paid monthly, with no upfront costs.
30. Free trial extended to 12 months - We’ve extended our free trial from 60 days to 12 months, allowing you to use your $300 credit across all GCP services and APIs, at your own pace and schedule. Plus, we’ve introduced new Always Free products -- non-expiring usage limits that you can use to test and develop applications at no cost. Visit the Google Cloud Platform Free Tier page for details.
31. Engineering Support - Our new Engineering Support offering is a role-based subscription model that allows us to match engineer to engineer, to meet you where your business is, no matter what stage of development you’re in. It has 3 tiers:
Development engineering support - ideal for developers or QA engineers who can manage with a response within four to eight business hours, priced at $100/user per month.
Production engineering support provides a one-hour response time for critical issues at $250/user per month.
On-call engineering support pages a Google engineer and delivers a 15-minute response time 24x7 for critical issues at $1,500/user per month.
32. Cloud.google.com/community site - Google Cloud Platform Community is a new site to learn, connect and share with other people like you, who are interested in GCP. You can follow along with tutorials or submit one yourself, find meetups in your area, and learn about community resources for GCP support, open source projects and more.
New GCP developer platforms and tools reinforce our commitment to openness and choice and giving you what you need to move fast and focus on great code.
33. Google App Engine Flex (GA) - We announced a major expansion of our popular App Engine platform to new developer communities that emphasizes openness, developer choice, and application portability.
34. Cloud Functions (Beta) - Google Cloud Functions has launched into public beta. It is a serverless environment for creating event-driven applications and microservices, letting you build and connect cloud services with code.
35. Firebase integration with GCP (GA) - Firebase Storage is now Google Cloud Storage for Firebase and adds support for multiple buckets, support for linking to existing buckets, and integrates with Google Cloud Functions.
36. Cloud Container Builder - Cloud Container Builder is a standalone tool that lets you build your Docker containers on GCP regardless of deployment environment. It’s a fast, reliable, and consistent way to package your software into containers as part of an automated workflow.
37. Community Tutorials (Beta) - With community tutorials, anyone can now submit or request a technical how-to for Google Cloud Platform.
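The event-driven model behind Cloud Functions — small functions bound to events, invoked only when a matching event arrives — can be sketched locally. The dispatcher below is hypothetical and is not the Cloud Functions runtime; the event type string is modeled loosely on a storage-upload trigger.

```python
# Hypothetical local sketch of event-driven functions: handlers register
# for an event type and run only when a matching event is dispatched.
_handlers = {}

def on(event_type):
    """Decorator binding a function to one event type."""
    def register(fn):
        _handlers.setdefault(event_type, []).append(fn)
        return fn
    return register

@on("storage.object.finalize")
def make_thumbnail(event):
    return f"made thumbnail for {event['name']}"

def dispatch(event):
    """Invoke every handler registered for the event's type."""
    return [fn(event) for fn in _handlers.get(event["type"], [])]

print(dispatch({"type": "storage.object.finalize", "name": "cat.png"}))
```

In the managed service, the registration and dispatch halves are handled for you; you deploy only the handler and pay per invocation.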
Secure, global and high-performance, we’ve built our cloud for the long haul. This week we announced a slew of new infrastructure updates.
38. New data center region: California - This new GCP region delivers lower latency for customers on the West Coast of the U.S. and adjacent geographic areas. Like other Google Cloud regions, it will feature a minimum of three zones, benefit from Google’s global, private fiber network, and offer a complement of GCP services.
39. New data center region: Montreal - This new GCP region delivers lower latency for customers in Canada and adjacent geographic areas. Like other Google Cloud regions, it will feature a minimum of three zones, benefit from Google’s global, private fiber network, and offer a complement of GCP services.
40. New data center region: Netherlands - This new GCP region delivers lower latency for customers in Western Europe and adjacent geographic areas. Like other Google Cloud regions, it will feature a minimum of three zones, benefit from Google’s global, private fiber network, and offer a complement of GCP services.
41. Google Container Engine - Managed Nodes - Google Container Engine (GKE) has added Automated Monitoring and Repair of your GKE nodes, letting you focus on your applications while Google ensures your cluster is available and up-to-date.
42. 64-core machines + more memory - We’ve doubled the number of vCPUs you can run in an instance from 32 to 64, and now offer up to 416GB of memory per instance.
43. Internal Load balancing (GA) - Internal Load Balancing, now GA, lets you run and scale your services behind a private load balancing IP address which is accessible only to your internal instances, not the internet.
44. Cross-Project Networking (Beta) - Cross-Project Networking (XPN), now in beta, is a virtual network that provides a common network across several Google Cloud Platform projects, enabling simple multi-tenant deployments.
In the past year, we’ve launched 300+ features and updates for G Suite and this week we announced our next generation of collaboration and communication tools.
46. Drive File Stream (EAP) - Drive File Stream is a way to quickly stream files directly from the cloud to your computer. With Drive File Stream, company data can be accessed directly from your laptop, even if you don’t have much space on your hard drive.
48. Quick Access in Team Drives (GA) - Powered by Google’s machine intelligence, Quick Access helps to surface the right information for employees at the right time within Google Drive. Quick Access now works with Team Drives on iOS and Android devices, and is coming soon to the web.
49. Hangouts Meet (GA to existing customers) - Hangouts Meet is a new video meeting experience built on Hangouts that can run 30-person video conferences without accounts, plugins or downloads. For G Suite Enterprise customers, each call comes with a dedicated dial-in phone number so that team members on the road can join meetings without wifi or data issues.
50. Hangouts Chat (EAP) - Hangouts Chat is an intelligent communication app in Hangouts with dedicated, virtual rooms that connect cross-functional enterprise teams. Hangouts Chat integrates with G Suite apps like Drive and Docs, as well as photos, videos and other third-party enterprise apps.
51. @meet - @meet is an intelligent bot built on top of the Hangouts platform that uses natural language processing and machine learning to automatically schedule meetings for your team with Hangouts Meet and Google Calendar.
52. Gmail Add-ons for G Suite (Developer Preview) - Gmail Add-ons provide a way to surface the functionality of your app or service directly in Gmail. With Add-ons, developers only build their integration once, and it runs natively in Gmail on web, Android and iOS.
53. Edit Opportunities in Google Sheets - With Edit Opportunities in Google Sheets, sales reps can sync a Salesforce Opportunity List View to Sheets to bulk-edit data; changes are synced automatically to Salesforce, no upload required.
Ever since I can remember, music has been a huge part of who I am. Growing up, my parents formed a traditional Mexican trio band and their music filled the rooms of my childhood home. I’ve always felt deeply moved by music, and I’m fascinated by the emotions music brings out in people.
When I attended community college and took my first physics course, I was introduced to the science of music—how it’s a complex assembly of overlapping sound waves that we sense from the resulting vibrations created in our eardrums. Though my parents had always taken an artistic approach to playing with soundwaves, I took a scientific one. Studying acoustics opened up all kinds of doors I never thought were possible, from pursuing a career in electrical engineering to studying whale calls using machine learning.
Daniel with his family during move in day for his first quarter at Cal Poly.
I applied to the Monterey Bay Aquarium Research Institute (MBARI) summer internship program, where I learned about John Ryan and Danelle Cline’s research using machine learning (ML) to monitor whale sounds. Once again, I found myself fascinated by sound, this time by analyzing the sounds of endangered blue and fin whales to further understand their ecology. By identifying and tracking the whales’ calls and changing migration patterns, scientists hope to gain insight into the broader impacts of climate change on ocean ecology, and how human influence negatively impacts marine life.
MBARI had already collected thousands of hours of audio, but it would have been too cumbersome a task to sift through all of that data to find whale calls by hand. That’s what led Danelle to introduce me to machine learning. ML enables us to pick out patterns from very large data sets like MBARI’s audio recordings. By training the model using TensorFlow, we can efficiently sift through the data and track these whales with 98 percent accuracy. This tracking system can tell us how many calls were made in any given amount of time near the Monterey Bay and will enable scientists at MBARI to track their changing migration behavior, and advance their research on whale ecology and how human influence above water negatively impacts marine life below.
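Blue and fin whale calls sit at very low frequencies, so frequency-domain features are the natural input for a classifier. Here is a tiny stdlib sketch of that first step — a discrete Fourier transform picking out the dominant frequency of a synthetic tone. It is only an illustration of the idea; the actual MBARI pipeline feeds spectrogram-like features into a TensorFlow model.

```python
import cmath
import math

def dominant_bin(samples):
    """Return the DFT bin (0..N/2) with the largest magnitude."""
    n = len(samples)
    mags = []
    for k in range(n // 2 + 1):
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s))
    return mags.index(max(mags))

# Synthetic "call": a pure tone completing 5 cycles over the window,
# standing in for a low-frequency whale vocalization.
n = 64
tone = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
print(dominant_bin(tone))  # the tone's energy concentrates in bin 5
```

A classifier then learns which patterns of energy across such bins, over time, look like a blue whale call versus background ocean noise.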
What started as a passion for music ended in a love of engineering thanks to the opportunity at MBARI. Before community college I had no idea what an engineer even did, and I certainly never imagined my music background would be relevant in using TensorFlow to identify and classify whale calls within a sea of ocean audio data. But I soon learned there’s more than one way to pursue a passion, and I’m excited for what the future holds—for marine life, for machine learning, and for myself. Following the whales on their journey has led me to begin mine.
Editor’s Note: AI is behind many of Google’s products and is a big priority for us as a company (as you may have heard at Google I/O yesterday). So we’re sharing highlights on how AI already affects your life in ways you might not know, and how people from all over the world have used AI to build their own technology.
Machine learning is at the core of many of Google’s own products, but TensorFlow—our open source machine learning framework—has also been an essential component of the work of scientists, researchers and even high school students around the world. At Google I/O, we’re hearing from some of these people, who are solving big (we mean, big) problems—the origin of the universe, that sort of stuff. Here are some of the interesting ways they’re using TensorFlow to aid their work.
Ari Silburt, a Ph.D. student at Penn State University, wants to uncover the origins of our solar system. In order to do this, he has to map craters in the solar system, which helps him figure out where matter has existed in various places (and at various times) in the solar system. You with us? Historically, this process has been done by hand and is both time consuming and subjective, but Ari and his team turned to TensorFlow to automate it. They’ve trained the machine learning model using existing photos of the moon, and have identified more than 6,000 new craters.
On the left is a picture of the moon; it’s hard to tell where the heck those craters are. On the right is an accurate depiction of crater distribution, thanks to TensorFlow.
Switching from outer space to the rainforests of Brazil: Topher White (founder of Rainforest Connection) invented “The Guardian” device to prevent illegal deforestation in the Amazon. The devices—which are upcycled cell phones running on TensorFlow—are installed in trees throughout the forest, recognize the sound of chainsaws and logging trucks, and alert the rangers who police the area. Without these devices, the land must be policed by people, which is nearly impossible given the massive area it covers.
Topher installs Guardian devices in the tall trees of the Amazon
Diabetic retinopathy (DR) is the fastest growing cause of blindness, with nearly 415 million diabetic patients at risk worldwide. If caught early, the disease can be treated; if not, it can lead to irreversible blindness. In 2016, we announced that machine learning was being used to aid diagnostic efforts in the area of DR, by analyzing a patient’s fundus image (photo of the back of the eye) with higher accuracy. Now we’re taking those fundus images to the next level with TensorFlow. Dr. Jorge Cuadros, an optometrist in Oakland, CA, is able to determine a patient’s risk of cardiovascular disease by analyzing their fundus image with a deep learning model.
Fundus image of an eye with sight-threatening retinal disease. With machine learning this image will tell doctors much more than eye health.
Good news for green thumbs of the world: Shaza Mehdi and Nile Ravenell are high school students who developed PlantMD, an app that lets you figure out if your plant is diseased. The machine learning model runs on TensorFlow, and Shaza and Nile used data from plantvillage.com and a few university databases to train the model to recognize diseased plants. Shaza also built another app that uses a similar approach to diagnose skin disease.
PlantMD in action
To learn more about how AI can bring benefits to everyone, check out ai.google.