Analytics Pros was recently confirmed as a Data Analytics Premier Partner specialist in the Google Cloud Partner Program. Per Google, “Specializations match a customer’s need with a partner’s expertise in a specific service or solution area. By achieving specialization [we] signal to the market that [we’ve] gone through rigorous technical assessment, employ certified technical professionals, and have demonstrated customer success in [our] area of specialization.”

“Partners that have achieved this Specialization have demonstrated success from ingestion to data preparation, store, and analysis.”

This Specialization is a testament to our team of skilled analysts and data scientists who’ve developed their talents for several years to better serve our clients and partners. We’re prepared—and, now, certified!—to offer our clients the absolute best service in the Google Cloud Platform.

As a marketer, it’s very common not to have the time or resources to keep your GA and GTM implementations clean. Too often, we save a bunch of housekeeping tasks for a rainy day. Compare it to your own home: it doesn’t matter who in the household you are, whether you’re single or have a family of five, or whether you live in a tiny pop-out camper or a mansion with a pool. There are things that have to be done in your home for you to have a safe and productive life there. The same goes for your digital analytics world. Google provides some incredible FREE tools such as Google Analytics and Google Tag Manager. They are very easy to use, but they are not “set it and forget it” tools if you want to use the information successfully. It’s time for you to clean house or find someone to do so.

I have five topics that may seem very basic, but having moved from the client side to the agency side, I feel they are very important for us to take seriously. Take action now if you’re not doing so already.

1 – Audit your Google Tags and Google Analytics Reports

Think of this as your once-a-year spring cleaning. Although, if you have the resources, this could be done quarterly.

I recommend that all Google Analytics users check for the following:

  • Personally Identifiable Information (PII) – Check all of your data in Google Analytics for any instances of PII. Google takes violations very seriously.  https://support.google.com/analytics/answer/6366371?hl=en
    • The quickest check I like to perform is to log into the Site Content section in Google Analytics and look at the page names.

You can add a filter where page contains an “@” symbol to quickly find any pages where an email address is part of the page name.  A better and more advanced filter is to apply this regex to the filter:

([a-zA-Z0-9_\.-]+)@([\da-zA-Z\.-]+)\.([a-zA-Z\.]{2,6})

Note: When looking for the existence of PII, you need to look at an unfiltered view.  Thus, you are reviewing ALL of your data that is collected by GA.
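
If you’d rather scan an exported list of page paths programmatically, here’s a quick sketch that reuses the same regex (the page paths below are made up for illustration):

// Hypothetical sketch: flag exported page paths that contain an email address,
// using the same pattern as the GA report filter above.
var piiPattern = /([a-zA-Z0-9_\.-]+)@([\da-zA-Z\.-]+)\.([a-zA-Z\.]{2,6})/;

var pagePaths = [
  '/thank-you?email=janedoe@gmail.com', // made-up example
  '/products/tents'
];

pagePaths.forEach(function (path) {
  if (piiPattern.test(path)) {
    console.log('Possible PII found in: ' + path);
  }
});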

  • “Not Set” – Some “not set” is unavoidable and even expected. However, “not set” should not be your number one landing page. Review all of the out-of-the-box reports for anything showing up as “not set”.
    • Again, I will send you first to the Site Content section but this time to check your Landing Pages report.

  • Goals – Do your “Goals” in Google Analytics match your current-year business goals? I wish I had a dollar for every time I heard a Marketing Manager look at their GA setup and say, “We don’t really use those goals anymore. They are outdated.” Having goals set up outside of your event reporting helps you see critical information. One clear advantage of using Goals in GA is that Goals are available in certain reports where you cannot report on events.
  • Here at Analytics Pros, we have seen clients ask for just a “mini audit” because they thought their GTM and GA were set up perfectly. After taking a look, we have found issues such as the following:
    • events (such as scroll tracking) firing BEFORE the pageview, causing “not set” values to show up in the top 10 entry pages
    • multiple pageviews firing on the same page, drastically inflating pageview counts
    • hard-coded tags still firing alongside GTM and sending data to an older property and view, or Google Tag Manager (GTM) not being used at all
    • Client ID not set as a user-scoped custom dimension
    • the site’s own domain missing from the referral exclusion list
    • query parameter fragmentation
    • poor or missing event categorization
    • inconsistent attribution, such as using UTM parameters on internal links
    • auto-tagging not enabled for external search campaigns
  • Account Setup – Please review the list of users who have access to your Google Analytics account. Are there Gmail addresses on your list belonging to people who are no longer employed at your company?

To delete a user:

  1. Sign in to Google Analytics.
  2. Click Admin, and navigate to the desired account.
  3. In the Account, Property, or View column, click User Management.
  4. Use the search box at the top of the list to find the user you want. Enter a full or partial address (e.g., janedoe@gmail.com or janedoe).
  5. Select the check box for each user you want to delete, then click REMOVE.

More information on adding and editing user permissions can be found here.

2 – Google Analytics Custom Alerts

Back to the housekeeping analogy, this is like a quarterly window wash where you use a little glass cleaner as needed. The best part is that it’s automated. You can set up alerts and receive emails on the frequency that you desire!

At the very least, you should set three alerts in GA. These should be KPIs that someone other than the manager is also responsible or accountable for. I will speak to this a little later.

The steps to set up an alert are as follows:

  1. Sign in to Google Analytics.
  2. Navigate to your view.
  3. Open Reports.
  4. Click CUSTOMIZATION > Custom Alerts.
  5. Click Manage custom alerts.
  6. Click + NEW ALERT.
  7. Alert name: Enter a name for the alert.

Q: Why would you want an alert on something like bounce rate?

A: If I were on the client side, I would want to know as soon as possible that my bounce rate for the previous day was higher than “normal.” If and when this happens, it tells me one of the following about the traffic coming to our site:

  • Visitors were not engaged with the site based on what they saw on the landing page (increasing quantity doesn’t always increase quality)
  • We possibly had site issues causing users not to view a second page

Trend out your bounce rate over about a fiscal quarter. What is your average? If it floats somewhere around 45%, set up your alert to check the previous day’s data for a bounce rate higher than 50%. When it hits 50%, GA will automatically alert you via email or text. Then, when you get the alert, log into GA and look at your acquisition reports. Check your traffic and bounce rate for each high-level marketing program.

Another example is checking for outliers. You can set up an alert on a KPI to notify you if/when your sessions are more than 5% higher or lower than a previous time period. The percentage and time period you select depend on your site and what you would consider an outlier.

Here is more information from Google Support if you want to read more.

3 – Google Alerts on your competitors

What is a good way to find out the latest news about your competitors? Google can tell you in an automated email, and they’ll do this for FREE! You can also use Google Alerts to find out what is being said about your own company or brand. You may be asking, “How do these alerts help with my digital housekeeping?” These alerts will save you time researching competitors and what is happening in the industry. If you read that a competitor just met quarterly earnings and used X, Y, and Z as their KPIs, it will spark a reminder to compare those same KPIs and make sure your tags are performing as expected.

How to do this:

Go to the Google Alerts page: https://www.google.com/alerts

Type in your company name or the name of a brand that you consider a competitor. In my example below, I type in my company name, Analytics Pros.

You can set up these alerts to email you as they happen, once per day, or once per week. If you click on the pencil/edit icon, you can see the details.

4 – RACI

Who are the housekeepers in your Digital Analytics house? Is the RACI acronym overused at your company? In my opinion, it can’t be overused.

  • Your company has to define who is Responsible for the tags. Who is responsible for looking at specific reports in GA on a regular basis?
  • Your company has to define who is Accountable for your tags and GA data. If this ownership has changed hands multiple times over the last year or two, it may be time to hire some help. If the owner is not technical or is not using the GA data on a daily basis, it is time to find someone who can assist with that. In a perfect world, every company would have at least one person closely monitoring the KPIs in GA. You could also assign a report owner for each section of GA. When companies are not happy with their Google Analytics implementation, more often than not they can’t identify the person or people who are 100% responsible or accountable for the tools.
  • Responsible: the person who performs an activity or does the work.
  • Accountable: the person who is ultimately accountable and has Yes/No/Veto power.
  • Consulted: the person who needs to give feedback and contribute to the activity.
  • Informed: the person who needs to know of the decision or action.

5 – Hire a Housekeeper

Last but not least…get assistance if you’re responsible for your GA implementation but don’t have time to keep up with the latest and greatest in Google Analytics. At Analytics Pros, we can provide a full Health Check Audit for you. The value that our full audit provides is an investment worth making. The ROI of preventing PII violations alone can be substantial. If you’re interested in an audit of your GA Account, please contact gethelp@analyticspros.com.  

Charles Farina, director of growth and development; Rick Reinard, vice president and chief of analytics; and Tamara Asselta, analytics manager, accepted the Google Platform Case Study Award on behalf of Analytics Pros on November 1st in San Francisco at the Google Partner Summit.

We are delighted to share that Analytics Pros was recognized with a Google Partner Platform Award for its work with the World Surf League at this year’s Google Partner Summit.

Analytics Pros was recognized for its work with World Surf League (WSL), a media company that broadcasts surfing events via streaming live video to television partners and social media.

To better understand fan behavior, WSL began using Google Analytics 360 to gather data from a variety of customer touchpoints. WSL worked with Analytics Pros to accelerate time to value and keep data quality high.

Working with Analytics Pros and moving to Google Cloud has helped the media company double the efficiency of the campaigns that drive fan engagement and content viewership, increase web/mobile sessions by 31 percent, improve the responsiveness and speed of its analytics team by up to 80 percent, and reduce infrastructure costs by 70 percent during the off-season.

Rich Robinson, SVP Product for WSL, recently stated, “Moving our infrastructure to Google Cloud Platform has allowed us to scale our use of resources to match traffic to our website, mobile, and connected device apps. During our off season, we reduced infrastructure costs by 70% by scaling our resources down.”

We are proud of our partnership with Google and the dedication of the team at Analytics Pros. Working with data-driven marketers allows us to analyze and understand billions of digital interactions from hundreds of millions of users globally and deliver insights that have a real impact. Thanks to the team at World Surf League for allowing us to share your story and success!

To read the World Surf League case study click here.

It’s possible to use React Native and install Firebase GTM. This is great because no documentation on this topic exists at the moment. Below is a summary. Our goal is to provide a bit of structure around the installation, because we’re combining several different technologies and it’s easy to become disorganized.

Step 1: Install Firebase
To accomplish this, we’ll use the below integration. This integration allows us to use JavaScript to make analytics calls instead of having to write native code twice.
This is a good starting place for installing the integration. The basic outline is below.
At this point you should have Firebase installed.
Step 2: Install Firebase GTM
You’ll need to install an individual container for both iOS and Android. We’ll need to access the native level of the React Native app. Because this part is native, it’s identical to the standard Firebase GTM installation.
  • iOS
    • Install CocoaPods
    • Add the below to the Podfile.
      • pod 'GoogleTagManager', '~> 6.0'
    • Run 'pod install'
    • Create a GTM container and download the default container.
    • Place the default container into a created 'container' folder.
      • <PROJECT_ROOT>/container/GTM-XXXXXX.json
      • Note: This has to be an actual folder with the exact name—otherwise GTM will not be installed correctly.
  • Android
    • Add the below under 'dependencies' within the app Gradle file.
      • compile 'com.google.android.gms:play-services-tagmanager:11.0.4'
    • Create a GTM container and download the default container.
    • Place the default container into a created 'containers' folder.
      • app/main/assets/containers
      • Note: This has to be an actual folder with the exact name—otherwise GTM will not be installed correctly.
Step 3: Add Analytics Code

We can now add JavaScript code to our React Native app that will send data to Firebase GTM. We do this by calling the below object:

firebase.analytics()

We can send data to Firebase GTM with the below:

firebase.analytics().logEvent('eventName', {'event': 'parameters'});

Here’s the reference for more info.

This information is sent to Firebase GTM and, therefore, Firebase Analytics automatically. We should see this information in the Firebase Analytics StreamView. Once we have Google Analytics tags setup in Firebase GTM, we should see this information appear in Google Analytics Real-Time reports as well.
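
For reference, here’s a minimal sketch of a few other calls you might make from JavaScript, assuming the react-native-firebase integration referenced above (the event, property, and screen names are made up for illustration):

import firebase from 'react-native-firebase';

// Hypothetical examples: a user property, a screen view, and a custom event.
firebase.analytics().setUserProperty('favorite_sport', 'surfing');
firebase.analytics().setCurrentScreen('HomeScreen');
firebase.analytics().logEvent('video_play', { video_id: 'heat_1_replay' });

Any Google Analytics tags configured in Firebase GTM can then react to these calls just like the logEvent example above.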

It’s always exciting to read about advancements and predictions for technological trends. Since I work with analytics, I tend to pay closer attention to what’s coming for the field of data science and analysis. After doing some research, I saw that many new trends involve artificial intelligence and the use of machine learning algorithms. There was one trend that caught my eye, though, and Gartner calls it augmented analytics.

What is Augmented Analytics?   

One of Gartner’s definitions:

Augmented analytics is a next-generation data and analytics paradigm that uses machine learning to automate data preparation, insight discovery and insight sharing for a broad range of business users, operational workers and citizen data scientists.

The tools of augmented analytics will be able to automatically go through data, clean it up, identify data patterns and trends, present them as a visualization or in natural-language narratives such as “50% of web traffic is middle-aged women from US”, and then convert these insights into actionable steps without professional supervision.

The CEO of one company that develops algorithms to help draw insights from analytics tools, including Google Analytics, compared augmented analytics to the evolution of the car. Nearly everyone in the United States knows how to drive a car despite its very intricate design. That complexity has been hidden by technological advancement, and drivers just need to press the pedals and steer the wheel to make decisions while driving.

Now, the progress has gone further with the advent of smart cars making the task of “driving” unnecessary—we can simply focus on more important things such as efficiently and safely moving from point A to B. Frankly, we’re not as advanced in the data field as in the automobile industry. According to Gartner’s 2017 ITScore assessments, while organizations realize the importance of digital analytics and want to be data-driven, they still predominantly accumulate large amounts of data without driving actionable insights.

The image below shows that only 34% of companies can confidently say they’ve adopted diagnostic analytics and can answer questions such as “Why is one product selling better than another?”, “Why are expenses higher this month?”, or “Why did this patient respond better to a particular treatment?”. An even smaller number of companies are able to fully take on predictive and prescriptive analytics.

Source: Gartner (July 2017)

A Mixture of Pros and Cons

When machine learning tools become powerful enough to prepare data and drive business insights, a question arises: Will it replace the need for businesses to hire analysts and data scientists? I believe that augmented analytics will definitely have a positive effect on small business owners who have no means of getting services of experienced professionals but desire to use data to help them grow, know their customer, and stay on top of competition.

However, the same business owner who has access to all these great machine-generated insights will face the responsibility of choosing whether or not to apply them to their business. There is always a human factor present in decision making, regardless of having amazing tools that automate some aspects of data analysis.

The data scientist’s help will still be needed for the more mature and intricately structured businesses because it’s necessary to implement a correct data model before any analysis can be done by augmented analytics tools. At the same time, business data is becoming more complex and its analysis can take a lot of time. That’s where augmented analytics will step in to provide faster time to insight.

Even though automation in the data field is inevitable, we will still seek new opportunities to be more innovative and create jobs that never existed before. History proves this, from controlling fire to inventing the wheel to discovering electricity and beyond.

What is your outlook on augmented analytics? Do you think it will replace a need for human analysis?

Resources:
https://www.humanlytics.co/analytics-for-humans/
https://www.gartner.com/doc/reprints?id=1-4IWRUXA&ct=171020&st=sb

What is a Rollup Property in Google Analytics?

A rollup property is a GA 360-only feature, unique in that it lets you aggregate multiple source properties into a single property and see that data together in the same reports. Rollup properties allow you to consolidate enterprise data into one place without disrupting how other teams access and use the same data. There are a few considerations to be aware of as you begin thinking about creating a rollup property.

When is a Rollup Property Needed?

Let’s say a company owns three different domains and has them on three separate properties. One site is an online store that sells camping equipment, one is for looking up nearby campsites and making reservations, and the last is a blog where people can share their adventure stories. These are separate sites, but you want to see and report on data from all of them combined to answer questions such as, “How many sessions happened on all of my sites last year?”, “Which campaigns drove the most traffic to each of my sites?”, and “Which blog posts lead to the most transactions and reservations?” Creating a rollup property will allow you to answer questions like these and analyze data across multiple properties in one centralized location. If a company owns just one domain, there’s no need to create a rollup property because there’s nothing else to compare that one domain to.

Mobile Applications and OTT Data:

You also have the ability to include data from mobile applications and OTT devices (Amazon Alexa, Google Home, etc.) in a rollup property. Mobile apps and OTT devices are tracked differently than browser content, even if they appear the same and deliver the same content. It is best practice to keep web, mobile, and OTT data in separate properties in GA, with separate properties for iOS and Android as well. Rollup considerations are particularly important here. Firebase cannot be added to a rollup property, which makes sense because it is a completely different platform (used to build mobile applications). The next step is to decide whether or not you want to include your mobile and OTT properties in a rollup. If you find that the data and content are similar or the same, it is definitely worth including them. However, if you find there’s little in common between the properties and you would gain little for analysis, it’s best to keep them separate.

How to create a Rollup Property?

Unfortunately, you cannot create a rollup property yourself; you’ll need to reach out to your Google Account Manager or reseller, who can assist you with getting one set up. The following information will need to be provided in order for Google to create the new property:

    1. Account Number
    2. Time Zone
    3. Default Website URL
    4. View Type (Web or App)
    5. Number of Rollup Properties

Once you have your Rollup Property:

When you have confirmation from Google that the rollup property has been created, it will show up as just another property in your current list.

The most important thing to do in a new rollup property is to add all necessary source properties from which you want to collect data. This can be done by going to Admin > Property > Roll-Up Management in the rollup property. Remember, when you reach out to Google to create a rollup property, you don’t specify which properties you want included in it, so it’s crucial to add them yourself.

There are also a number of considerations to be aware of with rollup properties.

  1. Data Limits: Each hit in a rollup property is counted as 0.5 of a hit rather than 1 as it applies toward your monthly billable hit volume (e.g., 1 million hits sent to a rollup property add 500,000 hits to your billable volume).
  2. Session Merging: You need to have the same Client ID across source properties in order to merge sessions; otherwise a visit will be counted as two separate sessions. If you have a logged-in function across multiple sites and have User-ID enabled, sessions will be unified and de-duplicated in the rollup property.
  3. Data populates in a rollup property on a go-forward basis; no historical data is added.

Once the source properties have been added, all reports in the rollup will report on the aggregated data. Under ‘Audience’, there is a new section called ‘Roll-Up Reporting’ which allows you to see high level data (sessions, users, bounce rate, etc.) broken out by the individual property.

In conclusion, if a company has multiple domains with data in separate properties, or mobile application and OTT data that they would like to analyze together, rollup properties are the answer. A rollup is no different than a normal property, except that you can feed multiple source properties into it and view the aggregated data from all of them. If a company has only one domain, there is no need to create a rollup since there’s no other data to compare it to. Although you cannot create one yourself and must instead go through Google, it’s a very quick and simple process, and before you know it you’ll be able to tackle and answer even bigger questions.

If you need any help implementing a Rollup Property, feel free to contact us!

So, you’re a large company that owns multiple domains and wants to report on company-wide performance and understand attribution better? Well, look no further! Luckily, this isn’t the first time we’ve solved this problem.

In this article, we’ll talk about:

  • Why cross-domain tracking is important
  • How to implement cross-domain tracking (in GTM, hard-coded GA, or gtag.js)
  • Implications of cross-domain tracking in GA reports

Note: this article explains an implementation for when the root domains a user travels across are different. It does not address a user moving across subdomains of the same root domain.

Why cross-domain tracking is important

Let’s consider the following path.

Tanya goes to Google.com, searches for “domain 1”, and clicks on a paid search ad to get to domain1.com.

Tanya then clicks on a link from domain1.com to domain2.com.

For a case where each domain is sending to a different tracking ID, but the data in both source properties are forwarded to a 360 rollup property, what happens?

How many sessions and users show up in the rollup property? What sources and mediums will be reported?

If you guessed “2 sessions and 2 users with 1 session coming from google / cpc and 1 session coming from domain1.com / referral,” then you’re absolutely right!

Google Analytics determines “users” based on a first-party cookie (_ga) that is created the first time a user visits your site. There are many things to note about a first-party cookie (e.g., it is browser-specific and device-specific), but for the purposes of cross-domain tracking, the important call-out is that the “user” is set at the root domain. When Tanya navigated to domain2.com, a new _ga cookie was created, creating a new user that started a new session with new attribution.

Well, that’s no good. How do we know this is the same person and give credit to the paid search campaign that drove Tanya to both of these brands?

Implementing Cross-Domain Tracking

To do this, there are 3 things we need to accomplish. We’ll talk through what that looks like in a GTM implementation using analytics.js, hard-coded implementation using analytics.js, and a gtag.js implementation. In a nutshell:

  1. Tell Google Analytics to allow domains to be linked to one another
  2. Explicitly tell Google Analytics which domains need to be linked
  3. Conduct Quality Assurance to ensure this works
  1. Tell Google Analytics to allow domains to be linked to each other
  • With analytics.js via GTM
    • In your Universal Analytics pageview tag, add a Fields to Set entry with Field Name allowLinker and Value true. This needs to be done in both containers.

  • With hard-coded analytics.js
    • Update your tracking snippet like below. This needs to be done for both snippets on both domains.
<!-- Google Analytics -->
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');


ga('create', 'UA-XXXXX-Y', 'auto', {'allowLinker': true});
ga('require', 'linker');
ga('send', 'pageview');
</script>
<!-- End Google Analytics -->
  • With Gtag.js
    • Update your tracking snippet like below. This needs to be done for both snippets on both domains.
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=GA_TRACKING_ID"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());

gtag('config', 'GA_TRACKING_ID', {
  'linker': {
    'accept_incoming': true
  }
});

</script>
  2. Explicitly tell Google Analytics which domains need to be linked
  • With analytics.js via GTM
    • Option 1: In the Auto-link Domains field, provide the root domains in a comma-separated list for both GTM containers for the respective domains to be linked.

    • Option 2: If there is a long list of domains, consider creating a Custom JavaScript variable that stores the list of domains in an array. Reference that variable in the Auto-link Domains field.

  • With hard-coded analytics.js
    • Update your tracking snippet like below. This needs to be done for both snippets on both domains for the ones that need to be linked.
<!-- Google Analytics -->
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');

ga('create', 'UA-XXXXX-Y', 'auto', {'allowLinker': true});
ga('require', 'linker');
ga('linker:autoLink', ['domain2.com', 'domain3.com']);
ga('send', 'pageview');
</script>

<!-- End Google Analytics -->
  • With gtag.js
    • Update your tracking snippet like below. This needs to be done for both snippets on both domains for the ones that need to be linked.
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=GA_TRACKING_ID"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());

gtag('config', 'GA_TRACKING_ID', {
  'linker': {
    'domains': ['domain2.com', 'domain3.com'],
    'accept_incoming': true
  }
});

</script>
  3. Conduct Quality Assurance to ensure this works

With either GTM Preview turned on or in a staging environment where this is implemented:

  1. Start recording your session with Tag Assistant.
  2. Navigate to domain1.com and decorate your URL with custom UTM parameters for testing (e.g. ?utm_source=catsmeow&utm_medium=catspurr)
  3. Check the value of your _ga cookie (either in Dev Tools > Application > Cookies or with a Chrome plugin like EditThisCookie).
  4. Click on a link that goes to a different domain.
  5. Check your URL. Did the _ga query parameter get appended to it?
  6. Check your _ga cookie. Did it stay the same? (A console sketch for this check follows this list.)
  7. End your recording and view your results in the “Google Analytics Report” tab.
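
If you prefer the console to a browser plugin for the cookie checks, here’s a quick sketch you can paste into DevTools on each domain (it simply prints any cookie whose name starts with _ga):

// Log the _ga cookie(s) so you can confirm the client ID matches across domains.
document.cookie
  .split('; ')
  .filter(function (cookie) { return cookie.indexOf('_ga') === 0; })
  .forEach(function (cookie) { console.log(cookie); });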

If your results look like below, with 1 session and 1 user across 2 pages coming from 1 source/medium, then congratulations, cross-domain tracking has successfully been implemented!

Implications of cross-domain tracking in GA reports

Conceptually, implementing cross-domain tracking is a no-brainer. However, if cross-domain tracking wasn’t a part of the implementation since the beginning, it makes the decision to do this a little harder as it does impact what your data looks like.

  • If data for domain1.com and domain2.com are sending to the same property, sessions and users will go down.
    • This is a good thing since the numbers will be more accurate. However, since the numbers go down, an explanation to the wider team is often required.
  • If data for domain1.com and domain2.com are sending to the same property, referral traffic will go down.
    • Again, this is good because we’ll be able to see which channels are really driving traffic. However, some stakeholders may be used to measuring cross-site traffic using referral traffic so thorough communication and education need to occur when updating the implementation. We recommend implementing a session-scoped custom dimension of “Full Referring Hostname” to send to GA to support this type of cross-site measurement reporting.
  • If data for domain1.com and domain2.com are sent to different properties, then this implementation doesn’t change those reports. If we exported data from each domain into a CSV and combined them, the result would still be 2 sessions and 2 users, with 1 session coming from google / cpc and 1 coming from domain1.com / referral, because the data lives separately.
    • However, if these separate properties are configured to feed a 360 rollup property, then this implementation does matter, and it delivers the unified sessions and true attribution described above.
For more resources:

Google Analytics 360 has recently launched an exciting addition to its toolbox of analytics tools, called Advanced Analysis. With it comes the ability to take the reporting of your Google Analytics data even further by digging in and exploring your data rather than just reporting on your KPIs. In addition to deeper analysis options, it also makes taking action on your data easy and straightforward. Through great analysis and actionability, Advanced Analysis should be a welcome addition to your Google Analytics workflow!

Before diving into the different analysis techniques within Advanced Analysis, let’s talk about some of the general capabilities of this tool. Each technique uses segments, dimensions, and metrics that you add to your analysis by simply dragging and dropping, which makes running an analysis quick and user-friendly; any user can begin working with the tool very quickly. Advanced Analysis also lets you add multiple tabs to your analysis, allowing for multiple techniques within a single analysis report. Each technique gives you the option to build a segment or audience from a subset of the users in your report, or to go straight to the User Explorer report in Google Analytics to take a closer look at those users.

Exploration

The Exploration technique lets you utilize a few different visualization options in order to explore your data within Advanced Analysis. You can view your data with a table, a doughnut chart, or a time series graph.

The table exploration tool allows you to place your dimensions on both columns and rows, with your “values” (metrics) laid out upon the table for you to see. For your metrics, you can choose to show them one of three ways: plain text, bar chart, or heat map. In addition to your dimensions and metrics, you can layer your segments on top of these columns and rows by choosing how you want your segments to pivot upon your data.

You can use the doughnut chart to visualize the percentage breakdown of your users and their relevant metrics. With the doughnut chart, you can place a metric and a breakdown dimension within the visualization. You can also add segments to see if the data varies per segment of users.

Like the doughnut chart, the time series visualization allows you to choose a breakdown dimension and a metric to see your data trended over time. Again, like the doughnut chart, the time series visualization displays your two segments side by side to better understand the comparison of your segments.

Segment Overlap

Advanced Analysis’ Segment Overlap tool allows you to see users from different segments of your audience as they intersect with each other. You can view up to three segments at a time to see the relationship between different user sets. In addition to being able to see the relationship of the users to each other, you can add metrics to the table below your visualization to understand the value of each user set (as well as the overlap of these user sets). This provides you with a great understanding of your users and the various segments they’re part of.

Funnel

The Funnel analysis technique allows us to understand, retroactively, how users progress through a funnel that we define. A funnel can have up to 10 steps. Not only are you able to see the segments of your users and how they continue through or abandon your funnel, the funnel analysis also lets you add a breakdown dimension. This breakdown dimension allows you to understand the rate at which users complete or exit each funnel step.

It’s important to remember at this point that Advanced Analysis allows you to not only analyze your data but also to act. For example, in the Funnel analysis technique, you can take a subset of your users at a given stage and either build a segment or audience of those users.

This addition is important because the value of analysis is not simply understanding our users better, but understanding them in a way that leads to action. The option to build a segment of your users that you can place back within GA, or to build an audience which can be sent to Google AdWords, DoubleClick, or Google Optimize, allows you to take the understanding you’ve gained through Advanced Analysis and retarget or test against those specific users to see greater results from your efforts!

Hopefully this introduction to Google’s new Advanced Analysis has spurred you to take hold of the data available  in Google Analytics to further understand your users and turn that understanding into action and results!

Google announced the BigQuery ML service at Google Cloud NEXT 2018 in San Francisco. They have published wonderful help articles and guides to go along with the product release, which you should read here.

What is BigQuery ML?

Hint: it makes machine learning accessible to all (SQL practitioners)! Google touts the new product as democratizing machine learning by giving data analysts, and folks familiar with SQL, the ability to train and evaluate predictive models without the need for Python or R data processing. As an example of how impactful that goal is: at Analytics Pros, our team is mostly comprised of data analysts, with a much smaller subset of folks who have “Machine Learning Engineering” in their job title. Our ratio of data practitioners to data scientists is large enough to call this a “game changer” for our organization.

Using BigQuery ML, you can easily create predictive models using supervised machine learning methods. The predictive modeling tools at our disposal are linear regression (predicting the value of something) and binary logistic regression (predicting the type/class of something). We’re able to write a query that includes a “label” that will train our model using any number of data features. Data features are numerical or categorical columns that are available to make a prediction when a label is absent. The label in our training data is what makes this supervised machine learning (as opposed to unsupervised learning, in which we don’t have access to labeled data).

Build a predictive model using SQL and BigQuery

Our goal in this article is to predict the number of trips NYC taxi drivers will need to make in order to meet demand. This is a demand forecasting problem, not unlike problems you might encounter in your respective business vertical. Using a classic example written by the Googler, Valliappa Lakshmanan, we’ll convert his Python+TensorFlow machine learning demonstration into the new BigQuery ML syntax. Be sure to follow along using the Google Colaboratory published on Github.

Our input data (features) will consist of weather data in the NYC area over the course of three years. Those data features will be trained to predict a target variable coming from the NYC Taxi dataset. The target variable is the number of taxi trips that occurred over the same three-year period as our weather data. Our data features combined with the target variable give us everything we need in order to train a machine learning model to make predictions about the future based on observations from the past. The combined dataset of weather + taxi trips gives us labeled data, including features (weather data) and a target variable, i.e., a label (taxi trips).

Using BigQuery ML, we will train a linear regression model with all of the heavy lifting done for us automatically, saving time without losing any predictive power. Some of the auto-magical work done for us includes splitting training and testing data, feature normalization and standardization, categorical feature encoding, and finally the very time consuming process of hyper-parameter tuning. Having this type of automation saves us a non-trivial amount of time, even for experienced ML engineers!

BigQuery ML gives data analysts who are skilled with SQL, but less familiar with Python ML frameworks like TensorFlow or scikit-learn, the ability to generate predictive models that can be used in production applications or to aid advanced data analysis. We encourage anyone wanting to use the BigQuery ML service to familiarize themselves with the underlying concepts of machine learning, but it is important to note that the days of needing a PhD as a prerequisite for machine learning are coming to an end. This service helps to bridge the skills divide and democratize machine learning data processing.

In this article, using the new BigQuery ML syntax, we will:

  1. Create a linear regression model using SQL code syntax
  2. Train and evaluate the model using data in BigQuery public dataset
  3. Inspect the predictive model weights and training metrics
  4. Make predictions by feeding new data into the model
Count Taxi Trips per Day

First we collect the number of taxi trips by day in NYC. We do this by querying the BigQuery public data for New York City. This SQL will give us the data we need to label our prediction model.

WITH trips AS (
   SELECT 
      EXTRACT (YEAR FROM pickup_datetime) AS year, 
      EXTRACT (DAYOFYEAR FROM pickup_datetime) AS daynumber 
   FROM `bigquery-public-data.new_york.tlc_yellow_trips_*`
   WHERE _table_suffix BETWEEN '2014' AND '2016'
)
SELECT year, daynumber, COUNT(1) AS numtrips FROM trips
GROUP BY year, daynumber ORDER BY year, daynumber
Fig. 1 – NYC Taxi Trips

Taking a look at the NYC Taxi Trips Data

Within the trips WITH-clause, we use EXTRACT to generate a date key using the date parts YEAR and DAYOFYEAR. Our example uses the years 2014 through 2016 because the schema is consistent for those periods. The query then uses COUNT() to compute numtrips for each year and daynumber in the data.

In Fig. 1 we see the number of NYC taxi trips on January 1st, 2016 was 345,037. That was a Friday, in a predictably cold week in New York City. If you look at the data, you’ll see the weekly pattern reveal itself, with peak demand on Fridays and Saturdays and a sharp decline on Sunday and Monday.

Learning how to export your BigQuery data directly to Data Studio to explore it deserves a blog article of its own (stay tuned!). In the meantime, I will walk through some of the exploratory analysis steps performed in the Python Colaboratory Notebook supporting this article.

Exploratory Data Analysis

We have a hunch that there is correlation between the number of taxi trips in New York City and the weather. We feel so strongly about this that we’re willing to build a machine learning model to mechanize this insight. Before we do this, we want to look at the data and get a feel for whether or not it will be useful in building a linear regression model.

For linear regression models to work properly, we generally need values with some degree of correlation, but not so tight as to throw off the model accuracy. A simple way to check for correlation is to plot two metrics against each other. In our case we examine the maxtemp field on the X-axis and numtrips on the Y-axis in Fig. 2:

Fig. 2 – Looking at all the data we see a slight correlation between trips and temperature.

The line isn’t flat, so we’re saying there is a chance! The expectation was that temperature would have some influence on the prediction model. In this case, we see that as the temperature increases, the number of taxi trips decreases. More importantly, the inverse is true: as the temperature decreases, the demand for taxis goes up. That is our first insight. However, the insight isn’t as valuable as we’d expect; the loss rate on the above line is very high. We want to see if we can find a way to minimize the loss.

We already recognized a weekly seasonal pattern in the data when we looked at the number of trips in the first 10 days of January. If a seasonal pattern exists, then our linear regression model will improve by factoring it in. Adding dayofweek as a categorical variable improves the model accuracy because the loss rate is minimized compared to a linear average over all entities. Below we see the weekly seasonal pattern when we plot dayofweek against numtrips in Fig. 3:

Fig. 3 – We see a pattern in the weekly seasonal data; Saturday is a peak and Monday a low.

Our intuition is validated: by isolating the data by dayofweek, we are able to increase the correlation between the variables. In this case, as we input a higher temperature, the prediction decreases. It is slight, but it is an improvement. That improvement, and others like it, optimizes the results of the model output, increasing accuracy (by reducing the loss rate). Fig. 4 shows the NYC taxi demand with maxtemp on the X-axis and numtrips on the Y-axis. The correlation increases when we partition the data by dayofweek; in this example we isolate Sunday trips:

Fig. 4 – As the temperature decreases the demand for NYC taxis goes up

Creating a Linear Regression Model using BQML

Now the fun part! We’re going to create a linear regression model using the new BigQuery ML SQL syntax. This new syntax gives us an API that can build and configure a model, evaluate that model, and even make predictions using new data. The model lives on the globally distributed big data machine that is BigQuery. I’m envisioning very interesting applications built on this service, but most importantly I’m seeing a huge breakthrough for data analysts who are skilled in SQL but less so with Python or R. This is a wonderfully democratizing step toward putting machine learning data processing within a much broader reach.

First we need labeled data:
-- Taxi Demand, aka [QUERY]

-- Weather Data
WITH wd AS (
   SELECT 
      cast(year as STRING) as year,
      EXTRACT (DAYOFYEAR FROM CAST(CONCAT(year,'-',mo,'-',da) AS TIMESTAMP)) AS daynumber, 
      MIN(EXTRACT (DAYOFWEEK FROM CAST(CONCAT(year,'-',mo,'-',da) AS TIMESTAMP))) dayofweek,
      MIN(min) mintemp, MAX(max) maxtemp, MAX(IF(prcp=99.99,0,prcp)) rain
   FROM `bigquery-public-data.noaa_gsod.gsod*`
   WHERE stn='725030' AND _TABLE_SUFFIX between '2014' and '2016'
   GROUP BY 1,2 
), 

-- Taxi Data
td AS (
   WITH trips AS (
      SELECT 
         EXTRACT (YEAR from pickup_datetime) AS year, 
         EXTRACT (DAYOFYEAR from pickup_datetime) AS daynumber 
      FROM `bigquery-public-data.new_york.tlc_yellow_trips_*`
      WHERE _TABLE_SUFFIX BETWEEN '2014' AND '2016'
   )
   SELECT CAST(year AS STRING) AS year, daynumber, COUNT(1) AS numtrips FROM trips
   GROUP BY year, daynumber 
)

-- Join Taxi and Weather Data
SELECT 
   CAST(wd.dayofweek AS STRING) AS dayofweek, 
   wd.mintemp, 
   wd.maxtemp, 
   wd.rain,
   td.numtrips / MAX(td.numtrips) OVER () AS label
FROM wd, td
WHERE wd.year = td.year AND wd.daynumber = td.daynumber
GROUP BY dayofweek, mintemp, maxtemp, rain, numtrips

You’ll see magic numbers like 99.99 and 725030 in the above SQL. The stn value is the station ID for LaGuardia, and 99.99 was found during EDA to be invalid precipitation input.

Here are a handful of results from the above query, each row represents a day in the 3 year dataset:

dayofweek  mintemp  maxtemp  rain  label
4          37       46       0     0.565787687
6          66       81       0.52  0.716719754
2          55       82.9     0     0.549772858
7          63       79       0     0.729906184
4          39       45       0     0.652754425

Creating a model

Now that we have labeled data it is time to train a machine learning model. BQML has a simple, and familiar, syntax to do this.

CREATE MODEL yourdataset.your_model_name 
OPTIONS (model_type='linear_reg') as [QUERY]

It is that simple. You’d replace [QUERY] with the SQL query we used to generate our labeled data. After only a few moments we have a regression model, trained and evaluated using three years of taxi and weather data.

Calculating a baseline score for evaluation

Fig. 5 – Demonstrating a model, data (entities) and the loss

We are already familiar with an effective linear model: it’s called the “average.” We could very easily spend time building a machine learning model that could be beaten by simply predicting the average of the data. In order to prove that our machine learning model is better than the classic linear model (the average), we’re going to need a score metric.

Enter the MAE, otherwise referred to as the Mean Absolute Error. The MAE aggregates the loss rate by averaging the absolute distance between each prediction and the actual value. When you think of aggregating the loss rate, you’re summing all of the loss distances shown in Fig. 5 above.

Using Python, we would calculate the MAE using the Google Cloud Python BigQuery API as follows:

from google.cloud import bigquery
client = bigquery.Client(project=[BQ PROJECT ID])
df = client.query([QUERY]).to_dataframe()
print('Average trips={0} with a MAE of {1}'.format(
   int(df.label.mean()),
   int(df.label.mad())  # Mean Absolute Error = Mean Absolute Deviation
))

Output: Average trips=403642 with a MAE of 50419

The Moment of Truth: Evaluating your Model

There comes a time in every data scientist’s life when you’re going to have to evaluate your model. We do this by setting aside a set of data for which we know the labels, acting as if we don’t, and making predictions using our new model. We then score ourselves based on how well we guessed the correct values. BigQuery ML handles this step automatically and provides a simple function to evaluate our model and get back error and loss metrics. Remember, our number to beat is an MAE of 50,419:

SELECT * FROM ML.EVALUATE(MODEL yourdataset.your_model_name, ([QUERY]))

metric                  value
mean_absolute_error     43800
mean_squared_error      0.009846
mean_squared_log_error  0.003598
median_absolute_error   0.064709
r2_score                0.200022
explained_variance      0.20064

Our Mean Absolute Error is 43,800. The good news is we were able to beat the baseline error rate of 50,419 found in the previous step. By beating the MAE of the entire dataset, we’re able to say that our model has more predictive power than a standard linear average. This is mostly because our model factors in the day of the week, which we found to be an important signal in the data. In the accompanying Colaboratory Notebook we inspect the weights of the different input features in more detail.

Note regarding ML.EVALUATE(): BigQuery ML prints out different score metrics depending on whether you use a linear or logistic regression model; logistic regression adds classification metrics such as Precision, Recall, and F1 Score.

You can access the scores of the many training runs using ML.TRAINING_INFO():

SELECT * FROM ML.TRAINING_INFO(MODEL yourdataset.your_model_name)

Fig. 6 – Print and visualize the metrics collected during the regression model training

We see that our model only took a few iterations to achieve a low loss rate. In fact, 90% of the loss was eliminated between the first and second runs. In this case our model didn’t have to work very hard to reach convergence. By default, BigQuery ML will attempt 20 training + evaluation iterations before stopping. You can set this number higher in the OPTIONS when creating your model using the max_iterations flag. Additionally, by default, training will stop once it sees that no progress is being made; this can be overridden using the boolean early_stop flag. Finally, you’ll notice that the learning_rate fluctuated between training runs; this is an example of the hyper-parameter tuning that BigQuery ML performs automatically. This is a special, time-saving gift to the data science practitioner, like manna from heaven.
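
If you want to adjust those defaults, here’s a minimal sketch of what the OPTIONS could look like (the dataset, model, and table names are placeholders, not from the example above):

-- Sketch: override the default training behavior described above.
CREATE OR REPLACE MODEL yourdataset.your_model_name
OPTIONS (
  model_type = 'linear_reg',
  max_iterations = 50,  -- allow more than the default 20 iterations
  early_stop = false    -- keep iterating even when loss improvement stalls
) AS
-- replace this SELECT with the labeled-data [QUERY] from earlier
SELECT * FROM yourdataset.your_labeled_training_data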

Making Predictions Against the Model

We’re now ready to make predictions against our model! To do this, we need to feed it new data. The ML.PREDICT() function will return a prediction for every row of input in the query. The ML.PREDICT() function accepts a MODEL to evaluate against, and the input comes from a TABLE or QUERY. Your query needs to be wrapped in parentheses if that is the route you choose.

In the example below we use a QUERY that includes only a single row of input. In other cases you would want to feed in multiple values at once or point the entire input function to a TABLE.

SELECT * 
FROM ML.PREDICT(
   MODEL yourdataset.your_model_name, (
      SELECT 
         '4' AS dayofweek, 
         60  AS mintemp, 
         80  AS maxtemp, 
         .98   AS rain
   )
)

Using Google’s Colaboratory, you can map sliders to the input variables. Each adjustment to the sliders will result in a new prediction being generated by the model. Click this link to fork your own Colaboratory; you will be able to build a model and then make predictions against it in no time at all! (shown in Fig. 7)

Google Cloud Next is an annual conference focused on Google Cloud Platform (GCP), where Google presents all of the latest features that are coming to the cloud. We get announcements on many new features, updates on existing ones, and even new public betas that are ready for use. There are also hundreds of sessions, panels, and bootcamps to attend. One of our favorite parts of the conference: there are Googlers everywhere! You can directly connect with the product teams and there are endless opportunities for interactive demos, discussions, and networking.

This year’s conference focused on three main topics: Machine Learning & Artificial Intelligence (AI), Data Analytics, and Application Development. We spent three days attending and are excited to share with you our highlights primarily focused around Machine Learning & AI. Here is the top five list of our favorite announcements from Google Cloud Next.

Number 1: BigQuery ML (Machine Learning)

BigQuery just got a huge update! We now have BigQuery ML which is a way for users to create and execute machine learning models directly in BigQuery using standard SQL. We’ve been using BigQuery ML for a few months and it’s awesome! Now that it’s in public beta, you can use it, too!

This is important because Google has made it very easy to train your machine learning models inside BigQuery with just a few simple SQL-like statements. This means no more exporting data back and forth, building out separate TensorFlow models in Python, or trying to run off sample data on your local computer. We now have a quick and easy way for anyone who has a basic understanding of SQL and machine learning to execute quickly. With everything staying inside BigQuery, tasks that were tedious in the past, like retraining, prediction, and result analysis, become that much simpler.

If you’re getting started, we want to highlight that you’re currently limited to two basic machine learning algorithms: linear regression and logistic regression. So, for any complex custom models, TensorFlow and Cloud ML Engine are still the way to go.
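
As a rough sketch of the second model type, a logistic regression against the public Google Analytics sample dataset might look something like this (the model name and feature choices are our own, not from Google’s announcement):

-- Sketch: a binary classifier predicting whether a session converts.
CREATE OR REPLACE MODEL yourdataset.conversion_model
OPTIONS (model_type = 'logistic_reg') AS
SELECT
  device.deviceCategory AS device_category,
  trafficSource.medium AS medium,
  IF(totals.transactions IS NOT NULL, 1, 0) AS label
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_*`
WHERE _TABLE_SUFFIX BETWEEN '20170101' AND '20170331'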

For a more in-depth look at BigQuery ML, here’s the link to the documentation.

Number 2: BigQuery Clustering

The name of the feature is clustering, which may be a bit confusing because it suggests the data-science technique of the same name, but it actually has little to do with that.

If you regularly use BigQuery, you know that you can partition tables either via the ingestion-time method or based on a partition column. Partitioning data is nice because you can query data on those time-based partitions, which saves cost and has performance benefits.

BigQuery clustering is an extension of that idea, except now you can also cluster by additional columns that you frequently query. BigQuery will sort data internally based on those columns and store it accordingly. So, at query time, there is no need to do a full column scan of the data; only the cluster you want to read from is scanned. This adds big performance and cost benefits, which is a win for all!

Below is a comparison of the two features:

                      PARTITIONING   CLUSTERING
Cardinality           Less than 10k  Unlimited
Dry Run Pricing       Available      Not available
Query Pricing         Exact          Best effort
Performance Overhead  Small          None
Data Management       Like a table   Use DML

Let’s consider an example using stock data. Every stock is essentially a big time series, which is great because we can time-partition stocks based on the timestamp. Imagine we have a table called “stock” that houses that data; for simplicity, the columns might be “timestamp”, “stock_name”, and “price”. Column “timestamp” is our partition column, and using it we can efficiently navigate date ranges. Usually we’d want to query data for one single stock, so we’d add a WHERE clause where “stock_name” equals “GOOGL”.

That’s OK, but BigQuery will do a full column scan of the “stock_name” column (and any other column in the SELECT clause), reading a lot of stock names we don’t want and then filtering down to the rows we actually want.

With clustering we can cluster based on the “stock_name” column and, behind the scenes, BigQuery will store data in a way that when we run the same query again we’ll only read from the “GOOGL” cluster, reading only the data we actually want to read and thus avoiding full column scans of the columns in the SELECT clause.
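
Here’s a minimal sketch of that setup in standard SQL (the dataset and table names are hypothetical):

-- Sketch: create a date-partitioned table clustered by stock_name.
CREATE TABLE yourdataset.stock
PARTITION BY DATE(timestamp)
CLUSTER BY stock_name AS
SELECT timestamp, stock_name, price
FROM yourdataset.raw_stock_quotes;

-- Queries that filter on the partition and cluster columns only scan the
-- matching partitions and clusters instead of the full table.
SELECT timestamp, price
FROM yourdataset.stock
WHERE DATE(timestamp) BETWEEN '2018-01-01' AND '2018-03-31'
  AND stock_name = 'GOOGL'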

You can learn more about BigQuery clustering which is now in beta here.

Number 3: Training and online prediction through scikit-learn and XGBoost in Cloud ML Engine

While we love TensorFlow here at AP, we welcome the new additions to Cloud ML Engine. We use scikit-learn a lot, especially in the first stages of the ML cycle, before transitioning to TensorFlow models. But TensorFlow models aren’t always needed, so we can now easily deploy scikit-learn models and continue our dive into more complex TensorFlow models later.

With the new additions, there’s a quicker way to transition to Cloud ML Engine, as you’re no longer constrained to TensorFlow only. For us, we hope that means delivering production models faster as well as iterating and improving them more efficiently.

There’s not much else to say here except this is now generally available.

More information is available here.

Number 4: AutoML

Google is bringing ML even closer to developers. Our first highlight above, BigQuery ML, is primarily meant for data analysts who know SQL and for data scientists, but AutoML goes a step further and is meant for developers who don’t have to know anything about machine learning. There are three versions in beta: Vision, Natural Language, and Translation.

Vision

Let’s say you have a lot of images of home interiors and you want to be able to say whether an image shows the kitchen, living room, yard, bedroom, etc.

What you need to do is pair those labels (kitchen, living room, etc.) with your images and show them to the AutoML Vision algorithm.

AutoML Vision will train on your specific data and get back to you with a fully trained model.

Natural Language

The idea is similar to Vision with the exception that it’s for text, not images.

Let’s say you have a lot of articles and your labels are article categories (politics, sports, etc.).

Again, you label your articles with your categories and expose the data to AutoML Natural Language, which will train on your data and return a model for you to make predictions on previously unseen articles.

Translation

Google already has a Translation API, so the value of AutoML Translation doesn’t show immediately. The custom models that AutoML Translation can provide are most beneficial with jargon-heavy text where the usual Translation API might not perform as well.

All of the versions provide you with a scalable REST API prediction endpoint, so you can easily integrate it with your code and start making predictions.

More on AutoML is available here.

Number 5: New BigQuery UI and Data Studio Explorer

BigQuery UI got a makeover and, yes, you guessed it, standard SQL is now the default. Yes!!!

The UI, for now, has pretty much the same functionality as the old one, but the look and feel now aligns with the rest of GCP and it brings back many of the little features we missed. For example, project search is faster and easier: you can now text search for the project you want instead of scrolling down a list.

Creating and, especially, updating views has also become easier to do. One nice addition is a deeper integration with Data Studio, where you can visualize your data with the click of a button.

New UI is available here.

Number 6: New App Engine runtimes and Cloud Functions

This may not be a very exciting announcement to some, but we use App Engine for all sorts of things and we also use Cloud Functions in many scenarios. It was nice to see Cloud Functions finally move out of beta into general availability, as well as a new Python 3.7 standard App Engine environment being introduced.

More on this and other serverless additions can be found here.

Final thoughts

That’s it. We promised our top 5, but you got 6 instead. There was a huge list of additional announcements; Google published 100 of them, which you can read here. We can’t wait for next year!
