
Things are moving quickly in the Google product ecosystem. One change that’s flown under the radar is the switch to parallel tracking on October 30, 2018. If you’ve been procrastinating on prepping for the change, don’t worry! We’ve got your back.

What Is Parallel Tracking?

Announced way back in 2017, parallel tracking is Google’s next advertising tool to speed up performance in the “mobile-first world,” alongside features such as Accelerated Mobile Pages (AMP) landing pages and the Mobile Speed Scorecard. With a one-second delay in load time decreasing conversions by up to 20%, every moment counts.

Parallel tracking changes the way click measurement happens when using 3rd-party tracking software that relies on redirects (Marin, Kenshoo, etc.) outside the Google ecosystem. If you don’t use one of those, you don’t need to do anything. Go optimize your accounts!

When using a tracking URL with linear or sequential tracking, a user clicks on an ad, is sent to an intermediate landing page, then redirected to the final destination. While this often happens so quickly most people can’t perceive it, Google insists that it can negatively impact conversion rates.

Sequential tracking

With parallel tracking, the user is taken directly to the final URL while the tracking URL is loaded in the background, similar to asynchronously loading scripts. The idea is that the user is taken directly (and quickly) to their ultimate destination, improving the user experience and increasing conversion rates.

Parallel Tracking

This is currently an optional feature for Search and Shopping campaigns, but starting October 30, 2018, advertisers will be automatically switched over to parallel tracking for all campaign types. You can see if you have parallel tracking turned on by going to All Campaigns > Settings > Account Settings and looking under “Tracking.”

What Do I Need to Do to Prepare?

Ultimately, you’ll need to talk with your 3rd-party vendor to ensure your tracking templates are compatible. Google has been working with the major players, so you’re likely covered.

Marin has already announced support for parallel tracking and an upcoming webinar to help users get prepped. And according to Kenshoo’s Twitter account, they’ve been working with Google since the change was announced, although they don’t have public documentation on what that looks like.

Here are some general best practices to keep in mind and get ahead of the switch:


Ensure the tracking server supports HTTPS. Also, make sure to include the HTTPS protocol in the tracking template URL to keep everything consistent and clean.


Make sure the tracking redirects use server-side redirects as opposed to on-page redirection through JavaScript. The tracking sequence will stop otherwise.

Ad Changes

If you’re making changes to the ads themselves, the review process will be initiated and ad delivery will be paused until the process is complete. If the changes are at a higher level (ad group, campaign, account), the review process will be initiated in the background, allowing the ads to still serve unless flagged.

AdWords Editor

If using the AdWords Editor tool, double-check that you have the latest version. If you don't, any uploaded changes may delete previously made URL or parallel tracking changes.


Make sure the GCLID (Google Click Identifier) is still being added. We would recommend using auto-tagging to accomplish this.

Testing Campaigns Ahead of the Switch

Test individual campaigns (without opting into parallel tracking for the whole account) to make sure the templates are compatible before rolling the change out to the entire account. This setting can be found under Campaigns > Settings > Campaign URL options.

Under the Campaign URL options setting (you may have to expand Additional settings), create a custom parameter called {_beacon} and assign it the value true. You don’t need to use this custom parameter in any of your URLs.

If Google finds the presence of this parameter, all clicks under that campaign will get parallel tracking treatment.

Verify parallel tracking is enabled by looking for &gb=1 in your tracking calls (this indicates a background call from the browser).
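As a quick sanity check over exported click logs, a helper like this (our own naming, not a Google API) can flag which tracking-call URLs carried the background marker:

```javascript
// Check whether a logged tracking-call URL carries the gb=1 parameter
// that indicates a background (parallel) call from the browser.
function isParallelTrackingCall(url) {
  return /[?&]gb=1(&|$)/.test(url);
}
```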

Additional Resources

Google has also provided several resources for partners to refer to, including an implementation checklist we recommend reviewing before you make the switch.

The post Google Ads Parallel Tracking Guide appeared first on LunaMetrics.


We’ve written about this topic before, but it still seems to haunt us … sampling. If you’re using Google Analytics Standard, we feel your pain. Let’s talk about why sampling happens, and how to know if your reports are affected.

When Does Sampling Happen?

Sampling can happen when you apply segments, secondary dimensions, or filter your Google Analytics reports. Whenever we modify our standard reports or create custom reports, Google has to query our data, which can lead to sampling. Basically, Google estimates the data based on a percentage of sessions. Jonathan wrote an awesome post that gets into the nitty-gritty of how accurate sampling is in Google Analytics.

How Accurate is Sampling in Google Analytics?
By: Jonathan Weber
Published: March 3, 2016

The threshold for sampling is 500k sessions at the property level for Google Analytics Standard or 100M sessions at the view level for Analytics 360 for your date range. Sampling at the property level means that you can’t use filters on your views to limit the number of sessions.
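Those thresholds can be turned into a quick pre-check before building a heavily segmented report; the numbers below simply restate the tiers above:

```javascript
// Rough pre-check (illustrative only) of whether an ad-hoc query over a
// date range is likely to be sampled, using the thresholds above:
// 500k sessions for Google Analytics Standard, 100M for Analytics 360.
function likelySampled(sessionsInDateRange, isAnalytics360) {
  const threshold = isAnalytics360 ? 100000000 : 500000;
  return sessionsInDateRange > threshold;
}
```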

How Do I Know If My Reports Are Being Sampled?

Google is constantly updating and improving the Google Analytics interface, and that includes sampling warnings. In the past, you may have noticed a warning banner in your reports that your data is based on a sampled number of sessions that are a percentage of total sessions.

The sampling warning has since received a makeover. We now have the sampling shield that changes color when our reports are being sampled. You can find the shield icon at the very top of your report, next to the title. If your shield is green, you're good! No sampling here.

But if your shield is yellow, hover over it to see your sampling levels.

Flow Reports

The Flow reports (User Flow, Behavior Flow, Events Flow & Goal Flow) are almost always heavily sampled, even if you’re using Google Analytics 360. The sampling for these flow reports works differently than our other reports. The flow reports are always sampled at 100k sessions at the property level. You’ll find this warning banner at the top right of your Flow visualizations, under the date range.

Data Studio

If your Google Analytics reports are being sampled, you’ll see this sampling in Data Studio, too. You can check for sampling in Data Studio under the bottom left-hand corner of your report. Normally, you will see a small footnote about when the report was run and a link to the Privacy Policy. If your report is being sampled, you’ll also see a link that says “Show Sampling.”

Clicking on this link will show you which widgets are being sampled and the sampling levels.

What Can I Do About Sampling?

Sampling can be a huge pain because it sometimes paints an inaccurate portrait of your data. If your sampling levels are a small percentage of your overall sessions, the data can be almost unusable.

If you're looking for a quick fix, shorten your date range to fit within the sampling thresholds, or try using standard reports. If your data is still being sampled after shortening your date range, your company should consider Analytics 360.

You might also be able to find the data you’re looking for by using some of the built-in reports like Mobile Overview rather than applying segments. Check out our full list of ways to solve sampling.

Even Google Analytics 360 customers will still see sampling, but there are a number of 360-only options available to help address sampling, even when connecting to other platforms. Check out Unsampled Reporting with Google Analytics 360 for more details.

The post Are My Google Analytics Reports Being Sampled? appeared first on LunaMetrics.


Data quality issues can plague a Google Analytics account, making analysis difficult, time-consuming, or at its worst, leading to incorrect conclusions. There are many ways to affect the data that you collect and use in Google Analytics, and with recent improvements to Google Tag Manager, it’s even easier to have clean, readable data.

For each data challenge, you can typically address the problem either on the “sending” side or on the “reporting” side, with generally the same final result. Which one you choose is often a question of internal team dynamics – who has the technical know-how, the right access, and the time and focus to address these problems.

Why Should We Format & Filter?

Let’s take a step back to talk about why scrubbing, formatting, and filtering our data is so important in the first place.

Reason #1: Data Consistency

Data consistency is one of the most important components of successful, accurate analysis. Consistent capitalization, page URL structure, and data symmetry allow comparisons to be drawn and business objectives to be measured and examined. Formatting and filtering enable data consistency and should be among our closest allies in the quest for analytics accuracy!

Reason #2: Data Relevancy

Making the data make sense to more people is a worthwhile exercise. Where undefined or (not set) might be accurate information within certain Google Analytics reports, it’s not easy to understand why that value might appear. Cleaning these values up and putting in human-readable labels can help provide valuable information for future analysis.

Reason #3: Automation

We love automation. Really, we can’t get enough – it makes life so simple! In the past, addressing data quality might require chaining together variables in Google Tag Manager, complicated filters in Google Analytics, or advanced ETL processes in a third-party visualization. Cleaning and storing information correctly, using the tools available to us, can save us time and improve efficiency.

Reason #4: It Doesn’t Require A Developer

I considered titling reason #4 “Because you can!” — because it’s true! Filtering and formatting ultimately allows us to clean up messy data that would otherwise skew our reports – and we can sometimes complete these steps without touching a single piece of code or enlisting the help of our trusty development team. Hooray for that!

“Sending” Changes vs “Reporting” Changes

Over the years, we’ve written more than a few posts on the benefits and wizardry of Google Analytics view filters. These live at the “reporting” step and are the last chance option to sift through the data sent to your Analytics account, retrieve the pieces you need, and ultimately ensure data quality for analysis and reporting.

Until now, view filters were one of the most automated ways to format data to lowercase values, remove those pesky undefined values, and clean up messy URLs or other data. In the past, most formatting & filtering work was completed within the Google Analytics admin window. Filters also help control what data is included or excluded from a particular GA View, though we’ve also made the case that this too can happen at the “sending” step, and be handled partly from Google Tag Manager. Check out our post: A Better Alternative To Exclude Filters in Google Analytics.

Until recently, cleaning up data in Google Tag Manager was a relatively challenging process – requiring nesting custom JavaScript variables or lookup tables to perform repetitive cleanup functions. Then, along came the Format Values option for User-Defined Variables in Google Tag Manager, which gives us greater control over cleaning our data before sending information to Google Analytics or other tools. Check out Simo Ahava’s post here: #GTMTips: Format Value Option In Google Tag Manager Variables.

It should be noted that good data doesn't require Google Tag Manager; on-page changes can also help classic JavaScript or gtag.js implementations of Google Analytics. When possible, using naming conventions and standardization at the page level (e.g. classes, data attributes, or data layer variables) will help everything that follows. However, this isn't always an option, especially if relying on user input or anticipating eventual human error.

Fixing the Data First

When possible, fixing the data at its source, or as close to the source as we can get, is preferred. For us, that might mean using the feature available in Google Tag Manager to clean up data before it ever gets processed by Google Analytics. While this potentially shifts the burden to a more technical point of contact, there are a few scenarios where this is especially useful.

While one of the most common uses for Google Tag Manager is to send data to Google Analytics, let’s not forget about the other places that we send data. Often, we have pieces of information that are sent to Google Analytics as well as third-party tags. These might include product/transaction info that is shared with third-party conversion tags, page or section level information shared with retargeting or recommendation engines, or copies of data sent to other analytics, CMS, or CRM platforms.

Additionally, with Google Analytics, data is collected at the Property level and then flows down into the various Views underneath that property. When cleanup occurs at the end of the collection process, at the View level, there can be issues with consistent usage of filters. New Views will have no filters applied, and it’s up to the individual to remember to add the existing data cleanup filters to the correct views.

These scenarios should lead you to prefer a “sending” side solution, when possible, fixing the data in Google Tag Manager or on the page so that it’s consistent in Google Analytics and across other platforms.

Fixing the Data Last

On the other hand, with the ease of Google Analytics view filters, it’s entirely reasonable to handle many of the cleanup items inside of Google Analytics. This can be especially helpful when you don’t have the access or resources to use Google Tag Manager.

Other scenarios may help sway your decision towards GA as well. Consider scenarios where many different data sources are flowing into a Google Analytics property. Perhaps you have multiple sites, apps, or offline data that is being sent to the same Google Analytics property. In this case, it may be easier to add one filter in GA instead of tracking down the implementation across many sites/technologies.

Events are a common area where you may want to encourage some standardization – lowercasing all the event dimensions with a view filter is an easy and quick change, and works across any event from any source. Compare that to the effort of remembering to lowercase all variables for all events set up through Google Tag Manager.

If you've followed best practices in Google Analytics and have a Test View created, it's also a fairly easy and standard process to test new filters on the test view, letting them run for a period of time before moving them to your main reporting view. This testing process can be trickier to handle in Google Tag Manager, or may require a greater level of testing sophistication.

Tools For Cleaning Up Data

With those considerations in mind, let’s talk through the various ways to clean up data, using both Google Analytics and Google Tag Manager.

“Format Values” on Variables in Google Tag Manager

For issues where you need to uppercase or lowercase a variable’s value, this is a great new feature. As you create new user-defined variables in Google Tag Manager, look below for the Format Values option and use the built-in features to standardize the format before using it in Tags. This is great for any text fields, whether you’re pulling them from the data layer, form fields, or elsewhere.

You can also use this feature to cleanup unwanted “undefined” or “not sets” – helping to make sure that missing data doesn’t muddy your reports or worse, block a hit from sending to Google Analytics. Consider a custom dimension for an Author field on a content website. If we’re missing that specific piece of data, consider replacing that with an appropriate replacement like “other” or “Author Not Set.”
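GTM's Format Values options can handle this without code, but the equivalent logic, written as a custom JavaScript variable, makes the behavior explicit. In real GTM the function body would reference a variable like {{DL - author}} (a hypothetical name); here the value is passed as an argument so the logic stands alone:

```javascript
// Equivalent of the "convert undefined to..." format option, written as
// plain JavaScript. In GTM this would be an argument-less custom
// JavaScript variable reading {{DL - author}} (hypothetical name).
var cleanAuthor = function (author) {
  // Replace missing values with a human-readable label instead of
  // letting "undefined" leak into reports; lowercase real values
  // for consistency.
  if (author === undefined || author === null || author === '') {
    return 'Author Not Set';
  }
  return String(author).toLowerCase();
};
```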

Filters in Google Analytics

The Google Analytics filters are great and cover a number of cleanup needs – like lowercasing and uppercasing – and also give us Search and Replace and Advanced filters to fix common misspellings, pull information out, and move things around.

Basic Google Analytics Filters for Every Site
By: Zee Drakhshandeh
Published: December 10, 2015

Check out our section on Data Consistency for tips on how to handle:

  • Prepend Hostname to Request URI
  • Lowercase Hostname
  • Lowercase Request URI
  • Lowercase Search Term
  • Lowercase Campaign Dimensions
  • Remove Query String
  • Append Slash to Request URI
  • Search and Replace Filter
View Settings in Google Analytics

In addition to the common filters we can add, the View Settings in Google Analytics can also help us with a few common issues, such as extra query parameters, the default page name, or site search.

Again, with a challenge like query parameters, you may find this will work best in Google Analytics or in Google Tag Manager. We covered this debate as well in an earlier post.

Methods to Strip Queries from URLS in Google Analytics
By: Samantha Barnes
Published: April 17, 2015
When Should I Use GTM vs GA?

There's no one-size-fits-all answer for variable filtering and adjusting, but generally our rule of thumb is: create a strategy, document it, disseminate it among your team, and stick to it! For example, if you decide to use Google Tag Manager for lowercasing variables, make this your official process. Ensure that all users with access to GTM are trained on the tool, and add variable filters to your publishing checklist.

The post Cleaning Your Data with Google Tag Manager and Google Analytics appeared first on LunaMetrics.


The enterprise products in the Google Marketing Platform are constantly evolving, with new features added frequently based on feedback from users, partners like LunaMetrics, and an internal drive to make tools that provide value.

Some tools are in the spotlight more frequently than others; we love talking about Google Analytics 360 and its benefits, but we haven't spent as much time profiling the differentiators of other 360 products, like Google Tag Manager 360. In the Google Marketing Platform, Google Tag Manager is the worker tool: behind the scenes, technical, and too often overlooked. The benefits of GTM 360 go hand-in-hand with the enterprise processes and people needed to support a scalable tagging solution across many teams and/or sites.

Before we get into GTM 360’s specific differentiators, let’s review Greg’s post 10 reasons to start using Google Tag Manager:

  • Tag Implementation Speed – tags can be updated and added without code changes and pushes to production
  • Flexibility – From newbies to experienced developers, everyone is covered by Google Tag Manager's extensive tag offerings
  • Version Control – Pushed a change that broke tracking? Easily rollback with built-in versioning
  • Workspaces – When more than one person is creating tags, each can work in their own space (limited to only 3 in standard GTM)

To help clarify the differences we decided to put together a handy comparison one-pager. Download the one-pager or read through the high-level overview below.


Zones

Do you need to easily grant different permissions to departments or teams? Are you working with agencies who need to be able to add a limited selection of tags to your sites? Do you have nearly identical sites with slight differences that force you to maintain a separate container for each?

The addition of zones to GTM 360 has been a game changer for added security and ease of implementing tags. Zones help to eliminate several issues that can be frustrating to enterprise clients.

It’s not every day that you are able to go to your security team with a solution that can make both their job and your job easier. Zones allow you to grant granular access for different teams or agencies, managed from one larger GTM Container. As an example, you can create a separate container (called a ‘zone’ that is loaded from within a container) that allows a partner like LunaMetrics to only add or update Google Analytics tags so that you can be sure that your marketing tags are not accidentally impacted or blocked.

Zones are also great for minimizing copy-and-paste when creating containers for similar but slightly different template sites. You can create one master container for all shared information, such as pageview and shared event tags. Each site can then have its own linked container for any of those bothersome differences from the master template that used to require a separate container or super-strict triggers that were hard to maintain.

What Can Google Tag Manager Zones Do For Me?
By: Logan Gordon
Published: March 14, 2018
Workspaces For Teams

Do you find that you or other people on your team keep overriding each other’s changes? Do you have more than 3 simultaneous projects ongoing within your container that make it hard to keep track of changes? Do you require a tagging environment that is more similar to a developer interface, with branching and merging capabilities?

Workspaces help eliminate these pain points and make collaboration across teams and agencies easier. Each project or team member can have their own instance of a container, without risking overwriting another person's or project's changes within the container. When you're ready to publish a workspace, GTM makes it easy to merge each team member's workspace changes together.

With the standard GTM, you can use up to three workspaces. This might not be enough, however, if you have one container spanning several sites or you have several tagging projects ongoing. In today’s fast-paced world, how many times does a project — say a site redesign — get put on the back burner, taking up one of those three valuable spots?

Honestly, workspaces are one of my favorite features of GTM, and unlocking unlimited workspaces can be worth it on its own. Read more about the benefits and how to start utilizing workspaces.

Approval Queue

Are you getting pushback from your IT or security team about implementing a tag manager due to risk? Or has your team accidentally pushed changes that break tracking altogether?

The approval queue is the perfect solution for companies with multiple teams and/or different agencies – basically, anywhere several people all need access to make changes in GTM.

The GTM 360 Approval Queue features enable you to build an internal process around who can and should be able to make changes to your site through Google Tag Manager. With GTM 360 you are able to specify who has publish permission while allowing individuals to make their updates and test changes to tags before submitting for approval. The designated publisher is then able to test prior to releasing, minimizing the risk of breaking the live site.


Implementation and Support

Last but not least: when you purchase Google Tag Manager 360 through a Google Marketing Platform Sales Partner, we work with you to get the most out of your implementation. As expert trainers, we bring continually evolving best practices, a mountain of resources, and super-helpful recipes.

At a minimum, you get a contractual service-level and uptime guarantee as well as a support analyst to call or email with questions and problems.

Getting Google Tag Manager 360

Google has added some amazing features to Google Tag Manager 360 to support enterprises that need better coordination of ownership, control, and implementation. Typically, Google Tag Manager 360 is bundled with Google Analytics 360, but these features and future additions are likely to entice organizations that may have a lower amount of traffic but require features that enable a higher degree of coordination and process.

The post What Features Are Available in Google Tag Manager 360? appeared first on LunaMetrics.


“Advertising for everyone,” Sridhar Ramaswamy professed at Google Marketing Live. Though perhaps what he should have added was “…but mostly for small businesses.”

Google’s annual Marketing Live Conference was held this year in San Jose from July 9-11. Our team attended and had a great time learning about Google’s new products, its focus on machine learning and new functionality. (Not to mention the free lattes.)

By now, you’ve read the conference highlights, and learned about new features from Store Visits to Smart Bidding. What we’d like to explore is the implications behind these announcements.

Google’s focus at the conference was on machine learning, automation and simplification.

They rolled out new functionality that will allow Google Ads campaigns to be:

  • Created without keywords
  • Fueled by user intent
  • Dynamic
  • Easy to set up
  • Quick to manage

This sounds like the exact opposite of every campaign an agency has ever created. Adjectives like “easy,” “simple” or “quick” would never be included in a campaign build.

Until now.

Smart Advertising

Google's Smart products allow the novice advertiser to take advantage of machine learning to optimize campaigns and increase conversions at scale. In Google's own words, its innovations "are helping marketers meet the heightened expectations of today's consumers" by delivering "better results, simpler experiences, and stronger collaboration."

The products below are some of the highlights that combine machine learning and paid search. They might not all apply to your current advertising needs, but consider how you might take advantage of the new features and which ones you'd like to prioritize for testing.

Smart Campaigns

This is Google’s new product “for small businesses specifically.” These campaigns incorporate machine learning from Google Ads into an easy-to-use interface to help small business owners drive real tangible results. This new automated account management will eventually replace AdWords Express.

Smart Bidding for Store Visits

Advertisers can set store visits as a conversion type in their Search & Shopping campaigns. Target ROAS will allow advertisers to work towards an omni-channel return, which was previously difficult to optimize for.

This technology is all about creating advertising that matches a user’s context. Smart bidding for store visits allows advertisers to optimize against total store conversions or the total conversion value. For businesses whose primary goal is to drive in-store visits, this product can be used to optimize for store visits only with a cost-per-visit goal.

Smart Shopping Campaigns

Previously known as Universal Shopping Campaigns, this new format will “drive simplicity, performance and reach” by using machine learning to optimize towards a retailer’s business goals. Smart Shopping Campaigns consolidate Shopping and Display Remarketing campaigns into a single feed for easier management.

Responsive Search Ads

A unique ad format that simplifies the creation and management of search ads, responsive search ads let advertisers provide multiple headlines and descriptions; machine learning then generates a single ad to show the right message at the right moment. By embracing the new format, advertisers can show a text ad with up to three headlines and two descriptions. This format changes advertising as we know it.

Local Campaigns

This is a cool new automated campaign type that leverages Smart Bidding. It’s designed to optimize for offline visits (store visits) and sales. These ads appear across Google’s sites and networks like Search, Maps, GDN, & YouTube. Can you say small business win? Expect to see these released in the upcoming months.

Maximize Lift Bidding Strategy

A new bidding strategy for TrueView in-stream and bumper ads on YouTube, it measures the effectiveness and strength of a brand through lift bidding. In short, you can use these ads to determine how users feel about and respond to messages about your brand. Maximize Lift is being released this year.

Our Experience with Smart Products

Our consultancy has a lot of experience with Google products. After all, we are a Google Marketing Platform Partner.

We’ve experimented with all of their Smart solutions like Smart Goals in Google Analytics and Smart bidding solutions in Google Ads like Maximize Clicks and Maximize Conversions, smart ad rotation and ad formats like responsive display. Some have worked very well. Some, not so much. But you should always test!

Even though those tests don't always deliver the best outcomes, we're going to continue experimenting with new products.

Google’s AI and Machine Learning capabilities are always improving and we’re excited to see them develop.


While you might have struggled with some of Google’s smart products in the past, we encourage you to keep experimenting.

Nothing will ever replace your brain power and advertising experience, but it’s important to remember that machines can be smarter than we are and they have more time. So if a machine can help you do your job better and stress less in the meantime, let it help you. Machine learning doesn’t mean less control, it means more opportunity.

Create a Campaign Experiment to test against your own campaigns and see which performs better. Once you know it works, you can continue to use that smart bidding strategy, ad or campaign type without worry. This is especially important for small businesses. Your budgets are small, and that’s okay. But these experiments will help you stretch your marketing dollar.

Google is Pushing Limits

Google Marketing Live 2018 showed advertisers that Google is undoubtedly moving toward a machine-learning world. They’re testing everything and using the results to push the limits of advertising while maintaining results, transparency and customer loyalty. Their dedication to improvement is helping to create an effective set of tools for advertisers from all companies, big and small.

Advertising for everyone, right?

The post How Advertisers Can Use Google’s Smart Products appeared first on LunaMetrics.


Companies should be making constant tweaks to their businesses to get better, whether it’s a new marketing campaign or website redesign. And ultimately, we want to know whether or not these changes have an impact on their goals – does the new landing page drive more email signups? How about articles read? Has the PPC campaign increased transactions?

Answering these types of questions is not quite as simple as pulling numbers. We can’t just look at a snapshot in time; we have to put our data into context if we want to learn about the causal relationship between the changes and the numbers.

What Is Causality?

Causality is a phenomenon through which one thing (the cause), under certain conditions, gives rise to something else (the effect). However, there is a difference between "after this" and "because of this." Thus it is important that when we study causality, we actually try to measure the influence of changes analytically.

How do we know that our new campaign is responsible for an increase in new customers? Can we be sure that our new checkout process redesign is responsible for more funnel dropouts? We've all heard the phrase "correlation is not causation" enough times that we don't need to be reminded with an example. Rather, I'd like to discuss some methods available to actually measure the impact of our marketing or design efforts.

Why Is It so Hard to Track?

When making changes in the real world, it is hard to account for all the different variables. Confounding variables such as seasonality, selection bias, geography, and many more, can make numbers imply one thing when the reality is different.

There are also other consequences of a new campaign or promotion, such as cannibalization or halo effects. Cannibalization is when an increase in one "thing" decreases a related "thing." For example, your PPC ad campaign may be increasing paid traffic while decreasing (cannibalizing) your organic traffic. This situation is sub-optimal because you are now paying for traffic that you were already getting organically.

On the other hand, a halo effect is when an increase in one thing increases a related thing. For example, a promotion on shampoo can also increase the sales of (non-discounted) conditioner. There are usually intricate relationships at play, and we want to control as many of them as possible to get clear answers.

What Can We Do?

Whenever a change is made, we want to think about how we are going to see the effects. It’s important to treat these changes like experiments. Ideally, we would love to have a control group. A control group is a group in an experiment that does not receive “treatment,” and is then used as a benchmark to measure how the test subjects (the group that received the treatment) do. We would have only one change made (for example, change the wording on a CTA button or change the colors on a banner), and have a measurable goal in mind. Do we want to see increased purchases? Do we want a higher average time on page?

Essentially, we need to determine what our response variable is going to be.

Controlled Experiments

In a controlled experiment, we have a random sample of participants that see or experience a new version that we want to test, and everyone else experiences the old version with no changes. The participants who did not see the change are considered the control group, and serve as a baseline in order for us to make a comparison.

Google Optimize is a great tool to use in these cases. Read the following blog post for more on why you should run A/B tests:

Why Should I Run A/B Tests on My Website?
By: Becky West
Published: April 29, 2016

Google Optimize is a WYSIWYG editor that lets us run experiments on our websites and integrates natively with Google Analytics. Using Google Optimize, we get to try out design changes on our website and select a percentage of users who come to our website on whom to test the change. It then gives us feedback on which A/B variation is performing better with respect to a pre-selected goal.

Once enough data is collected, Google will determine which A/B version is the winner.

Since the Optimize change is the only thing that is different between the experiment and control group, we can say scientifically that any change to our goals was caused by that change. Note that you will have to run the experiment for as long as it takes to get enough data, in order for a result to be statistically significant.
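Google Optimize handles the statistics internally, but the significance check it performs is conceptually similar to a classical test for a difference in conversion rates. As a minimal sketch (the session and signup counts below are hypothetical, and Optimize's actual Bayesian methodology is more sophisticated than this frequentist test), here is a two-proportion z-test in Python using only the standard library:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates
    between a control (A) and a variant (B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical results: control gets 120 signups from 2400 sessions,
# the variant gets 156 signups from 2400 sessions
z, p = two_proportion_z_test(120, 2400, 156, 2400)
```

With these made-up numbers, the p-value comes out below 0.05, so we would call the lift significant; with a smaller sample, the same conversion rates might not clear that bar, which is why the experiment must run until enough data has accumulated.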

With Optimize 360, we can target specific audiences and try multivariate tests instead of just A/B tests. It's important that our test and control groups come from the same population of users, to avoid selection bias or other confounding factors.

Data Science Solutions

Sometimes, a randomized controlled experiment is not an option. Maybe we’ve changed our entire checkout process. Maybe we’ve added pictures to some subset of our product detail pages, and we want to see the effect they’ve had on total transactions. Or maybe we’ve made a redesign to one of our international websites. With a measurable goal in mind, we want to see how these changes have impacted the goal. In such cases, we need to dive into some data science solutions to do a causal analysis.

The basic idea behind these models is that we use historical data as training data, forecast (predict) what we would expect to see if no change had occurred, and then compare it to what actually occurred with the change.

We can use a data science tool such as R or Python for these analyses and import our data either from BigQuery or Google Analytics (read more about data imports and exports here).

Getting Started with R and Google Analytics
By: Becky West
Published: June 2, 2016

Forecasting should be used when we have no control or comparison group. It is a good option for site-wide changes. It uses time series data, that is, data taken at regular intervals (daily, weekly, monthly, etc.), to run the analysis. For single-variable time series data, we could import daily total transactions, and then use R's "forecast" package to help analyze the data from before the change occurred.

Once the data is analyzed, it gives a prediction, within a confidence interval (usually 95%), of what it would expect the future data points to be. Plotting the actual observed data on top of the predicted interval shows whether or not the change has had a significant impact.
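The post points to R's "forecast" package for this; as a language-agnostic illustration of the underlying idea, here is a deliberately naive Python sketch (standard library only, hypothetical transaction numbers). It assumes the future resembles the historical distribution, with no trend or seasonality, builds a 95% prediction interval, and flags post-change observations that fall outside it. A real forecasting model would account for trend and seasonality:

```python
from statistics import mean, stdev, NormalDist

def prediction_interval(history, confidence=0.95):
    """Naive forecast: assume future points are drawn from the same
    normal distribution as the history, and return the bounds of a
    prediction interval at the given confidence level."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # ~1.96 for 95%
    mu, sigma = mean(history), stdev(history)
    return mu - z * sigma, mu + z * sigma

# Hypothetical daily transactions before the site change
pre_change = [102, 98, 110, 95, 105, 99, 108, 101, 97, 104]
low, high = prediction_interval(pre_change)

# Observations after the change; points outside the interval
# suggest the change had a measurable impact
post_change = [130, 128, 135]
outside = [x for x in post_change if not (low <= x <= high)]
```

Here all three post-change days land above the interval, which is the kind of pattern that would show up as observed data escaping the shaded prediction band in a forecast plot.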

The gray area represents the 95% confidence interval. Notice that as we get further away from the historical data, the range of the confidence interval grows.

Causal Impact

This method is good for measuring changes that only a subset of our users encounter. Causal Impact considers the difference in historic patterns between the test and control groups and uses that difference to determine whether or not a measurable effect has been observed with the change. This implies that the control group needs to be somewhat similar or correlated to your test group. For example, if we’ve redesigned our Canadian website, but not our US or Mexican websites, then the web data from the Canadian site would be the test data, and the US and Mexican website data would be our control. (Notice how this is different from a controlled experiment!)

In a causal impact analysis, we define a predictive variable (our control group), a response variable (our test group), a pre-period (the historical data before the change occurred), and a post-period (starting at the time your change occurred). The CausalImpact R package can then produce a few graphs and a summary of the analysis. See this article on the CausalImpact R package. An analysis produced with example data is shown below, where the change occurred at 70 on the x-axis.

The first panel shows the data and a counterfactual prediction for the post-treatment period. The second panel shows the difference between observed data and counterfactual predictions. This is the pointwise causal effect, as estimated by the model.

The third panel adds up the pointwise contributions from the second panel, resulting in a plot of the cumulative effect of the intervention. There are many R and Python packages for forecasting and causal impact modeling.
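The CausalImpact R package fits a Bayesian structural time-series model under the hood; as a much simpler illustration of the same idea (a sketch with synthetic data, not a substitute for the real package), we can regress the response series on the control series over the pre-period, predict the post-period counterfactual, and take the pointwise and cumulative differences:

```python
import numpy as np

def simple_causal_impact(control, response, change_point):
    """Minimal sketch of the causal impact idea: fit an ordinary
    least-squares relationship between response and control in the
    pre-period, predict the post-period counterfactual, and return
    the pointwise and cumulative effects of the intervention."""
    control = np.asarray(control, dtype=float)
    response = np.asarray(response, dtype=float)
    # Fit response ~ intercept + slope * control on pre-period data
    X_pre = np.column_stack([np.ones(change_point), control[:change_point]])
    beta, *_ = np.linalg.lstsq(X_pre, response[:change_point], rcond=None)
    # Predict what the post-period would have looked like with no change
    n_post = len(control) - change_point
    X_post = np.column_stack([np.ones(n_post), control[change_point:]])
    counterfactual = X_post @ beta
    pointwise = response[change_point:] - counterfactual
    return pointwise, pointwise.cumsum()

# Synthetic example: the response tracks the control series,
# then gains a true effect of +5 after time step 70
rng = np.random.default_rng(0)
control = 50 + rng.normal(0, 1, 100)
response = 0.8 * control + 10 + rng.normal(0, 0.5, 100)
response[70:] += 5
pointwise, cumulative = simple_causal_impact(control, response, 70)
```

In this synthetic run, the average pointwise effect recovers the injected +5, and the cumulative series grows steadily after the intervention, mirroring the second and third panels of a CausalImpact plot.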

Using a large amount of historical data is important because it can catch seasonal and other long-term trends, and take those into account when doing the statistical analysis. However, it is more important that we are feeding our model reliable data, and that our goal has been tracked consistently throughout that time.

These methods should help us analyze any changes or experiments that have been made and also help us think about how we can test any future changes that we make to our websites.

The post Seeing Causality in Google Analytics Data appeared first on LunaMetrics.
