Tuesday saw Google’s Senior Vice President of Ads, Sridhar Ramaswamy, and his team take to the stage to talk us through the latest innovations coming to Google’s Marketing Platform in 2018. The keynote covered everything from the recently announced rebrand of Google AdWords to the highly anticipated cross-device reporting in Google Analytics.

In case you missed it, here are our key takeaways from the keynote:

Goodbye Google AdWords…hello, Google Ads

Initially announced at the end of June, the rebrand of Google AdWords came as no surprise during the keynote. Effective from the 24th July, Google AdWords is becoming Google Ads – which will encompass all paid search, display and video products.

There’ll be no immediate impact for anyone who’s adopted the new AdWords (or should I say Ads?) interface. If you’ve been putting it off, though, I’d suggest making the leap now: come the 24th July, it’s likely the old interface will be inaccessible – especially as Google have warned us the switch date is coming this month.

DoubleClick is also seeing a rebrand to be part of the new Google Marketing Platform, alongside the Analytics 360 Suite. This becomes a single port of call to plan, buy, measure, and optimise your digital marketing activities. DoubleClick Search is becoming Search Ads 360, while Display & Video 360 will bring together the features from DoubleClick Bid Manager, Campaign Manager, Studio, and Audience Centre.

Google Marketing Platform will also see a new integration centre to better connect the range of products within the suite.

New targeting comes to YouTube ads

Nicky Rettke walked us through the latest products coming to YouTube’s advertising range – focusing on three new options:

  • TrueView for Reach: bringing the simplicity of impression-based buying to YouTube – this is ideal for driving awareness to a broad customer set.
  • TrueView for Action: optimised for driving website conversions, TrueView for Action will see video advertising paired with a prominent call to action linking direct to your website. We will also see Form Ads coming later in the year for lead generation goals.
  • Maximise Lift Bidding: leverages machine learning to help reach people who are more likely to consider your brand after seeing an ad.

Supercharge your ad copy

Machine learning was the main talking point of the search ads section. Google’s push towards machine learning has been apparent over the last year, with increased smart bidding and optimisation already available. Google are now extending this to ad copy, with the help of responsive search ads.

These have been rolling out since beta testing ended in June and are already available to many advertisers. Responsive search ads bring a level of multivariate testing to your ad copy optimisation.

Not only are Google giving us a helping hand with machine learning, they’re also rewarding our adoption with prime real estate – responsive search ads display up to three 30-character headlines and two 90-character description lines per ad.

By inputting up to 15 headlines and four description lines into the ad, the machine learning algorithm tests multiple variations to find the optimal configuration based on the user’s search.

Cross Device reporting and remarketing

Clearly a product many have been waiting for, Anthony Chavez’s introduction of the new cross device reporting saw a cheer ripple through the audience.

The growth in mobile device usage has been rapid in recent years, making it harder to decide where to focus your marketing efforts. The new Cross Device reports available within Google Analytics seamlessly combine data from people who visit your site across multiple devices, giving a concise view of how users are interacting with your site and brand.

This new way of reporting will allow cross device remarketing audiences to be built and used across Google Ads. Privacy was touched on briefly: only users who have opted in will be shown within the report, and no first-party data will be passed over.

The new Cross Device reporting will give more insights into users’ habits, enabling us to link moments together to gain a better understanding of our customers and where to focus our marketing efforts. For me, this was the highlight of the keynote and I look forward to unlocking its potential in the future.

Automated product feeds & smart shopping

Maintaining a product feed for shopping ads can be a manual and time-consuming process, so Google are bringing their automation to shopping campaigns. Automated feeds will launch later this year, crawling your website to create its own feed. This will open the door to advertisers who have been put off previously by the daunting prospect of setting up and maintaining their first shopping feed.

Smart shopping campaigns launched in May; these take automated feeds one step further by optimising your shopping campaigns towards the goal you select, taking the manual optimisation out of shopping.

Third party integration with eCommerce platforms will also be coming to shopping campaigns later in the year, alongside new business goals to drive local store conversions and new customer acquisition.

New local campaigns coming to Google Ads

Aimed at improving offline performance, the new local campaigns help drive in-store visits by linking directly to your Google My Business account to build effortless campaigns. With minimal input, machine learning does the rest of the work, optimising your campaigns to drive footfall and offline conversions.

Google delivered an interesting keynote this year. Machine learning was again a common theme running through all of the new products announced, making adoption of machine learning and automation almost inevitable. If you haven’t already, it’s time to start taking advantage of machine learning and watch your campaigns grow!


OK, the title is clickbaity – Google Data Studio is an awesome tool and many of you are probably already using it on some level. For those of you who don’t know Google Data Studio, it’s time to get on board. After all, it’s free and allows you to simplify the visualisation and accessibility of your data.

There are a multitude of systems it can connect to by default – don’t forget it is a Google product, so its connections are biased towards the Google ecosystem.

Being able to visualise the data from multiple systems in one place simplifies and streamlines reporting. For example, surfacing organic Analytics data alongside Google Search Console clicks and impressions, or connecting it with Google Sheets to pull in aggregated search data that’s updated live.

There are some gripes I have with it though, and it’s a good idea to be mindful of the platform’s shortcomings before you dive straight in.

So let’s start with seven pitfalls you need to be mindful of when using Google Data Studio.

  1. Aggregating metrics from different sources

This is probably my number one gripe. You want to add metrics from AdWords, Bing, Facebook etc. to create an aggregated total and calculate ROAS, but there is currently no direct way to aggregate metrics from multiple data sources in Google Data Studio. The messy solution, as in the example below, is to use Google Sheets to aggregate the sources and then connect it back up to Google Data Studio to display the results. This isn’t ideal (in fact, it’s awful). For agencies and in-house analysts who need aggregated metrics, this is probably the number one reason you’ll be looking at other providers.

  2. Limited data connectors

It’s a Google product, so connections to other systems are limited. There are third party connectors available to download and use, but beware: not all are maintained. We use the Supermetrics suite of connectors extensively and the support these guys offer deserves a special mention.

  3. Metrics labelling

Let’s say, for example, you display the Site CTR metric from Google Search Console. If you relabel the heading to CTR, it also relabels the metric – this is not cool. Debugging this involves going into the metric and removing the name to see which metric is actually being used to pull in the data. They really should include a display name field.

  4. Calculated metrics

One of the cool features of Data Studio is the ability to use calculated metrics.

For example, if you have three goals in Analytics that you need to sum to give an aggregated view of engagement, you can do something like this in Data Studio – a calculated field along the lines of Goal 1 Completions + Goal 2 Completions + Goal 3 Completions. Don’t forget though, you can’t aggregate this over different sources.

You create a calculated metric by hitting ADD A FIELD when editing the source connection.

Calculated metrics appear in the source mapping appended with ‘fx’. Below is an example of an ROI calculated metric.

The problem is, custom metric definitions are not saved when you change to a different source, e.g. if you change the source from Joe Bloggs’ AdWords to Mr Smith’s AdWords, you have to create every single custom metric again for each source – there is no way to save this into a template.

  5. Exports

The only way to export a Data Studio dashboard in the interface is to export to PDF.

This can mean trying to work it into another deck is a bit problematic – I’d really like to see some form of integration with Google Slides in future.

  6. Comparison arrows in tables

For actionable data, you’ll need comparison metrics. The problem is, you can’t select how you want to display the up and down arrows per table column in Data Studio – changing down to be green would change the whole table. In the below example, an increasing Avg. CPC is probably not a reason to celebrate.

  7. Comparison metrics precision

Similar to the above, you can’t change the precision of the comparison metrics in tables. They are always shown to a single decimal place.

I’m sure there are other gripes you have with Data Studio – feel free to tweet me @_AlanNg and share your frustrations!

On the flip side, it’s not all doom and gloom. Keep your eyes out for a new post on all the reasons we do like Data Studio.


Day one of SEO training and you come away having learned one thing: Google owns search… But what if it doesn’t? Well, at least when it comes to eCommerce. I’m here to show you how Amazon rose to overtake Google, and share how to use Amazon search optimisation to rank higher on Amazon.

Who the hell is this guy trying to challenge the all-powerful Google?

Hey! I was just as surprised as you, but figures 1 and 2 bring us into a strange new world, where Google is second best.

Figure 1: Where shoppers start their search, 2015

Figure 2: Where shoppers start their search, 2016

9 out of 10 retail customers go to Amazon even after they’ve found the product on your site. 45% of UK shoppers search on Amazon before they use Google, and in the US it’s 52%. This, coupled with the rise of voice search – where Amazon are again killing it – means we as SEOs need to be prepared.

How is Amazon killing it in voice?
  • They have sold more than 5.1 million smart speakers in the US since launch in 2014.
  • Amazon account for circa 90% of all voice shopping spend.
  • Over 70% of people who did a voice search last year used Alexa.

To give you some perspective, Wil Reynolds recently announced on LinkedIn that less than 1% of his clients’ enquiries start with “Ok Google”.

We need to get on board with Amazon search optimisation now, because by 2020 there will be 24.1 million smart speakers in the US alone and by 2019 the voice recognition market will be worth $601 million.

Here’s what you need to know about Amazon SEO.

The top 20% of retailers make 80% of the revenue, and Amazon’s A9 and A10 algorithms use conversion rate, relevance and customer satisfaction to rank products… Sound familiar?

Do Amazon care that their algorithms are massively behind Google’s? Not in the slightest. Why? Because Amazon’s main priority is targeting visitors with products that will most likely result in a purchase. They push the best performing products to the top to increase their exposure, enabling more sales. This leads me on to a little known fact about Amazon… Amazon holds no loyalty to its sellers. How do I know? Try putting Asics into Amazon. They’ve been a long-term customer and they supply directly to Amazon, but what’s the first thing that appears at the top of the search? A New Balance running shoe. Looking for batteries? You’d expect to see Duracell or Energizer up there. Nope, just Amazon’s own brand, Amazon Basics. I imagine if you’re in retail and not on Amazon, you may be sweating just a little bit.

This is where I come in to help.

How to rank higher on Amazon

Now, if you came late to SEO, you’ll have no idea what Google was like before Penguin or Panda. I’m here to show you that Amazon optimisation breaks down into four key factors, and those factors will help you rank higher on Amazon:

  • Organic
  • Paid
  • CRO
  • …but what about links?

In terms of organic, content is king again and what you really need to focus on is your product – the devil is in the detail when it comes to your product title and your product’s description.

How to structure your title
  • Don’t stack keywords, use 2-3 and place them early in the title.
  • Use characters to turn your title into phrases.
  • Try to optimise your title readability by varying your title lengths for mobile and desktop (See below).
  • See Figure 3.

Figure 3: The organic top results for iPhone chargers, 23/05/18

Title Text Lengths
  • For desktop organic results use around 115-144 characters.
  • For paid ads stick to around 30-33 characters.
  • For mobile titles keep them between 55-63 characters.

Mobile, like everything else in SEO, is very important – Amazon users switch between searching on mobile and desktop all the time. Earlier this year, Amazon shoppers used mostly desktop, but now mobile searches are about 100,000 ahead.

Back end search terms are by far the most important tool for organic ranking on Amazon. This is a great place to enter all the buyer keywords you may have not been able to fit in your product listing. It’s not visible to customers but is indexed by Amazon, so now is the time to stack keywords!

  • There are five fields to fill in – remember only 250 characters are indexed – try not to exceed this. If you do, make sure you start each field with the best keywords to ensure they are counted.
  • Make sure you include common misspellings but there’s no need to duplicate.
  • You don’t need to worry about punctuation, repetition, or singular vs plural words, as Amazon has already got your back on this.

Amazon Keyword Research

For keyword research use the Amazon Keyword Tool. It’s free to use and gives you long tail keywords based on Amazon suggestions and can be used for organic and paid keywords as well.

Amazon PPC

This is the easiest way to drive your sales up on Amazon (surprise, surprise: Amazon is just like Google – you give it money and it helps you out). Not sure where to start? Start by creating manual PPC campaigns to target your keywords, then pump up the bids to drive exposure. Again, just like Google, running your paid activity for a short period at cost, or even at a loss, will be worthwhile to increase your organic ranking.

Figure 4: The paid top results for iPhone chargers, 23/05/18

See how much clearer the paid ads look compared to the organic top results? Nice, short, to-the-point ads. Creating a campaign is really simple as well.

  • You type in the campaign name, so you can monitor the results.
  • Add your daily budget (minimum £1).
  • Pick the dates you want your campaign to run.
  • Finally, choose whether you want Amazon to target your ads based on its data, or configure this manually. I’d recommend the latter because nobody knows your customer as well as you do.

Now it’s time to set the keywords you want to target – again, you have the choice to do this manually or automatically through Amazon. If you’ve used the keyword research tool from earlier, it’s time to use it again, but now, as in AdWords, you can decide whether you want the keywords to match exactly, broadly or as a phrase. A mixture of all three is the best way forward.

Now it’s time to see what the CPCs for your keywords are and decide how much you’re willing to spend. What’s the best way to get conversions on your ads? Offer a deal.

You should always let your campaigns run for at least four weeks to get the best data you can from Amazon. After those four weeks, review what’s working and what’s not, and amend your campaigns accordingly.

Now for CRO
  • Customer reviews: The higher your reviews, the higher Amazon is going to rank you. Like Google, you need to get a few reviews before Amazon starts to rank you against the bigger competitors. No one really can agree on the magic number but from what I’ve seen you need a minimum of 50.
  • Answered questions: These are essentially what blog posts are to your standard SEO content strategy. The best way to get these is to ask your customers what questions they have after buying your product (it’s also a good way to try and get a review).
  • Image size and quality: I can’t stress this enough – these must be 1000 x 1000 pixels so your buyer can zoom in, and you need photos of your product from every angle.
  • Price: Because Amazon values buyers over its sellers, it will always push the lowest price to the top, so you need to be auditing your competitors regularly to see what they are charging.
  • Exit rates: If someone looks at your product then immediately exits the Amazon site, Amazon is not going to be happy and will knock you down. Make sure you are in no way misleading buyers to avoid this.
  • Bounce rate: Same as the above, if someone lands on your product then leaves quickly to look at someone else’s, Amazon is going to see you as a hindrance to its buyers and penalise you.

But if Amazon is so much like Google, then where do links play a role? In short, they don’t – they’ve been replaced with something that Amazon holds more dearly than any other factor. The quickest and easiest way to win at Amazon SEO is a sale.

Sales are Amazon’s equivalent to links. The more sales you have, the higher you’ll rank, and higher rankings lead to more sales – just like downloads for App Store optimisation. Outsell the competitors that outrank you for your keywords and you will shoot to the top. DO NOT be tempted to put through fake sales and fake reviews. Just like Google, Amazon hits you with penalties, and these are a lot harder to get past. You can’t even contact your buyers to offer them discounts directly without Amazon picking this up.

Figure 5: The scope of search in 2018

Now, some of you may have seen the above chart and think it undercuts everything I’ve just said, but this chart looks at search as a whole rather than focusing on shoppers looking to buy products. When you take that into consideration, you get charts like figures 1 and 2.

In all honesty when it comes to eCommerce, Google is a lot like Abe Simpson here; a ton of knowledge and lots to shout about but no teeth, because what’s the point of Google if it’s not to make money?

Final thoughts

What to expect from Amazon’s A11:

  • Amazon will probably link with PayPal to cut down on black hat sales and reviews.
  • Content will probably be more precise, so stick to shorter and clearer titles.
  • We’ll probably see improved ranking for stores that have their own eCommerce sites but still use Amazon Pay.
  • CPC is expected to be controlled by relevance score and much more.

Keep a weather eye on the horizon because Amazon is making big moves and I guarantee it won’t be long before you can do your monthly grocery shop, your banking, buy a car and book your holidays on Amazon.


Previously, we’ve seen how R can be used to retrieve data from APIs such as Google Analytics. Often with data, you’ll conduct the same type of analysis repeatedly, using the same kind of code, for various projects. To make life easier, wouldn’t you want to build a front-end, so you can just plug in your numbers and get your analysis out? With something called Shiny, you can!

What is Shiny?

Shiny is an R package built by RStudio. A Shiny application has two parts: the backend and the frontend. The backend is the server part of the code – the backbone of your work, where your normal repetitive analysis takes place. The frontend is the UI part of the code, where you build the visuals used to run those repetitive analyses. However, it is possible to combine these parts and build an application in one single file, or just build the UI into your backend.

Why should you use Shiny?

Consider this scenario: you’ve built some code that lets you regularly interact with APIs, but you always change one aspect of the code so it’s specific to the project you are working on, such as changing the Google Analytics View ID to get the correct view. With Shiny, you could build an application that calls the API automatically, gets the View ID you need, and then retrieves the data required at the press of a button.

A more explicit example is using AWR. You can access their API to get your list of projects and ranking dates, then display them in your UI. This way, you can load up your application, choose the project and date that you want, and get the keyword rankings in the format you need.

If that doesn’t convince you, how about SEO forecasting? You could build a front-end onto your forecasting code so that when it is required, you can just plug in your data, press Go, and speed up your analysis. Or, even better, build the application so one of your colleagues can do the forecasting for themselves using the same methodology that you use. To go a step further, put your application on a server so it can easily be accessed without them having to use R.

The first advantage of this is that it makes the whole process more efficient: you don’t have to open your code up and edit it. You’ve done it before, why do it again? A second advantage is that it brings reproducible and robust analyses to people who don’t need to build, or even understand, R code to be able to conduct an analysis in R. Third, you now have a process and methodology in place that can easily be referred to.
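To make this concrete, here is a minimal sketch of what the Google Analytics scenario above could look like as a Shiny app. The data retrieval is stubbed out with a placeholder data frame – in practice you would drop your existing API code in where indicated – and the input and output names are purely illustrative assumptions:

library(shiny)

ui <- fluidPage(
  titlePanel("GA report puller (sketch)"),
  textInput("view_id", "Google Analytics View ID"),
  actionButton("go", "Go"),
  tableOutput("results")
)

server <- function(input, output) {
  # Only runs when the Go button is pressed
  pulled <- eventReactive(input$go, {
    # Placeholder: your existing API code would use input$view_id here
    data.frame(view_id = input$view_id, sessions = NA)
  })
  output$results <- renderTable(pulled())
}

shinyApp(ui = ui, server = server)

Running this gives your colleague a text box and a Go button instead of a script to edit – the methodology stays fixed, only the View ID changes.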

There are so many possibilities with R that make it so powerful. We’ve previously seen that we can integrate R into our data extraction process using APIs, so that we can combine data extraction with data analysis. What Shiny brings to the table is that we can bring the data extraction and analysis process to people who don’t know statistics and data. It also makes the process of reaching an actionable decision, whether by someone who knows R or not, much more efficient.

Summary:
  • Use Shiny for extracting keyword rankings
  • Use Shiny for SEO forecasting
  • Use Shiny to perform data analysis
  • Use Shiny to do anything!

Last week Instagram introduced the “next generation of video” by launching IGTV. The IGTV app allows creators to post longer videos within its platform or directly via the new app, putting a new spin on the mobile video experience.

The app appears to have been created to compete better with the likes of YouTube and Snapchat, allowing creators to post much longer videos (there’s currently a 60 second limit on standard Instagram posts). This allows for better storytelling for users and the full screen, vertical video style makes it easy for users to watch.

Introducing IGTV, a new app for watching long-form, vertical video from your favorite Instagram creators. https://t.co/7aSMmmAsyB pic.twitter.com/Py8QI1rS23

— Instagram (@instagram) June 20, 2018

Why has IGTV been created?

Instagram reported that we’re all watching less TV and more digital video[1], and that younger audiences are looking for more homemade, authentic content in comparison to professional video[2]. IGTV appears to be Instagram’s response to this research, allowing users to follow the type of content they want.

Unlike Facebook’s failed video attempt, “Facebook Watch”, IGTV hasn’t paid any creators to start using the app; however, well-known influencers have already jumped on the new feature. This suggests the app’s more “authentic” approach could be a winner, building a genuine user base.

The longer video capability, permanent recordings and full screen mode make storytelling easier for creators (great news for brands and influencers), and though there are no adverts yet, Instagram will look to monetise it in the future.

In retaliation, YouTube have been rolling out new tools to try and keep creators on their site, mainly involving creators being able to charge monthly subscription fees and selling merchandise directly from the channel.

Where can I find IGTV?

To start using the feature you will need to download the IGTV app. You’ll then be able to create and watch videos directly through the app (for no “distractions”) or alternatively, click the IGTV button in the top right-hand corner of your normal Instagram homepage, just above stories – this will take you through to the IGTV area.

It will be interesting to see the pick up this gets over the coming months, but with such an engaged following already, Instagram are in a strong position to make this a success.

Images via techcrunch.com

1 – Emarketer, 2014-2019 Reports: Time Spent With Media

2 – BCG Study 2016, The Future Of Television: The Impact Of OTT On Video Production Around The World


If you use a Google Tag Manager variable whose type is “number” to send data to Google Analytics, you may notice that some of your data comes through as (not set), rather than the value you wanted.

The example above shows an event label which is pulled in from the score on a quiz with 5 questions. This is pulled when the quiz is completed. There is one score missing – 0.

Using the Google Tag Manager debugger, I can tell that the variable fires from the data layer as 0. It doesn’t appear as (not set) or undefined.

Above: score – Data Layer Variable – number – 0

As well as this, if I look at the event firing, the label it is pulling in is 0.

Above: Label – 0

From this information, it seems the event label in Google Analytics should be 0. However, somewhere along the way 0 is transformed to (not set). This is not the first time I’ve noticed the Google Tag Manager debugger reading unreliably.

TIP: When debugging, make sure to check the real-time data in Google Analytics after your tag is live, to ensure the data appears as expected.

I managed to rewrite (not set) to 0 by creating a new lookup table variable which maps 0 to itself. Because all the other scores were working, I set the default value to the original score so they would remain unchanged. I then used this new variable as the label for the event tag.

What this new variable does is transform 0 from a number to a string. For some reason Google Analytics parses the number 0 as (not set) but the string ‘0’ as 0.

Reviewing what the Google Tag Manager debugger shows, you can see the slight difference between using score (number) and score string (string) as a variable.

In Google Analytics we can now read the event label as 0 instead of (not set).

There are some other solutions that I haven’t personally tried, such as ensuring the data layer variable is coded as a string to begin with (unless you’re using this variable as a metric or for some other calculations) or using a custom JavaScript variable to transform numbers to strings. There may be other scenarios where this occurs, but the number 0 is the only instance I’ve noticed this occurring.


Napoleon famously didn’t see the British as a threat because, in his words, we are a “nation of shopkeepers”. And it’s safe to say that we do love shopping.

So why are big brands like Toys-R-Us, Maplin, BHS, and New Look being forced to close stores?

The media seems to consistently blame one major culprit: Millennials. However, millennials are not to blame for the fall of these big retail brands. And here’s how I know.

Millennials shop in actual stores

Millennials prefer real life stores! I thought the opposite when I started my research for this piece but it’s true! Crazy right?

Not when you think about it.

Millennials grew up with Amazon, Facebook, and Instagram, so they have had ample opportunity to shop online. But as we can see from the Forbes study cited above, their behaviour is not much different from that of other, older consumers.

It seems millennials like the way stores allow us to try before we buy, pick up a bargain, browse similar items, or, scarily enough, just buy something outright!

Figure 1: Struggling UK retailers vs Amazon.co.uk

Why the high street is getting quiet

Only 3 of the top 20 retailers in the UK are not bricks and mortar stores and, if you don’t include Amazon, the other two store-less retailers don’t appear until the teens.

So why are big high street brands disappearing?

Amazon is one of the key reasons that physical shops are on the decline (see figure 1).

Between 2010 and 2017, Amazon’s sales increased from £11.3bn to £56bn. Half of all US households are now on Amazon Prime, and Amazon’s UK sales smashed the £6 billion mark last year!

But in truth, there’s more to the story than just Amazon’s dominance.

Online shopping is extremely convenient for consumers, and traditional retailers can’t keep up. One thing in the economy rings true: to succeed in business, you must adapt – or you die! Sadly, it seems many companies are not willing to adapt (See Figure 2).

Figure 2: Graph showing the visibility of the UK’s top 4 retailers

Why does online have the edge on store fronts?

Whether we’re shopping for cupcakes, flowers, or a honeymoon in Greece, the Internet has changed how we decide what to buy. There are many factors at play that have made online shopping a more favourable route for consumers:

  • Location, location, location? Not for ecommerce

Shops usually do the bulk of their business in the area where the store is located, but this isn’t the case with e-commerce. Customers in one corner of the world can easily order almost anything from anywhere with a few clicks, making it easy and convenient to shop from your screen.

  • Gain new customers with SEO

Stores rely on traditional advertising, branding, and repeat visits to build a customer base. While online stores also rely on these aspects, a huge amount of their traffic comes from search engines.

While it doesn’t look good for business if you’re standing outside your store trying to herd people in, that’s pretty much what Google does for e-commerce. Consumers start by Googling what they want and, based on how high up in the SERPs it is, clicking on a site that they might eventually make a purchase from.

  • Lower costs

In most cases, running an e-commerce website is cheaper than operating a physical store, and these savings can be (and often are) passed on to customers. An e-commerce site doesn’t need in-store staff, it doesn’t need to own and manage a physical store front, and marketing is mainly dependent on SEO, pay-per-click, and social media traffic, which is cheaper than the traditional advertising that physical stores often also have to invest in.

  • Knowledge is power

There’s a limit to how much merchandise you can display in store and how much information employees can retain for customer enquiries. But there’s virtually no limit to what e-commerce websites can stock and say about their products! An online platform allows for endless possibilities in terms of what (and how) products are sold.

  • Targeted marketing

Using customers’ data, e-commerce sites can access a lot of information and use it to connect with customers. For example, if you are searching for something on Amazon, you will see listings of similar products, and they’ll also email you to keep you updated on the latest products that you may like.

It’s great, but it’s not perfect: The challenges of e-commerce

If done right, e-commerce will help your business to grow. But if you mess it up, it can have a negative effect. These are the top 3 mistakes companies make on their e-commerce websites:

  1. Low quality content

When Google released the Panda algorithm, quality content became a top priority for any site. This is easy to fix – for example, product pages need to be more than pages with a photo and blurbs about a product.

Amazon, for example, features customer product reviews, photos, and videos. You also can’t just use the same, generic descriptions provided by the manufacturer. To increase your SEO rankings, you need to have custom product descriptions – Vue do this extremely well. (Yes, this is one of our case studies, but can you think of a better example?!).

  2. Poor technical SEO

I have clients coming to me all the time saying “Dan, we need you to build links.” “Dan, we need content.” “Dan, we need PPC.”

First question: Do you clean up your house before inviting people over… Of course you do! So why don’t you make sure your site is in a fit state before you start outreaching with digital marketing?

If you don’t get your technical SEO right, none of your hard work will index and you’ve just flushed away all that budget spent on outreach. Make some nice clean URLs, add detailed breadcrumbs, make the journey from viewing to purchasing as simple and as straightforward as possible, speed up your page loading times, and, for the love of Rick, make sure you’re on HTTPS!

  3. No links + No trust = No customers

Now this is hard, but it’s 100% worth your investment. Whether it’s through content, influencer marketing, discount sites, or review sites, you need links. Why? Because links increase site traffic, give your site legitimacy, and provide a great place for your customers to access content that they can’t find anywhere else!

Marrying e-commerce and the high street

In short, high street stores are dying off because people have no reason to go to them. How do you fix this? Give people a reason to go!

We can’t dictate the way the world works, and the world wants e-commerce. However, you can focus on transforming your in-store experience so that it’s worth the journey.

Make your store more engaging, and you’ll increase footfall and sales. Inject digital into the physical shopping experience, and you’re on your way to getting ahead of the curve – check out Adobe Summit 2016 for inspiration.

Apple is one brand that has a famously popular in-store experience – here’s why:

  • It brings the digital world into its stores

Apple knows that most of the world is online and most of that population shops online. By having features such as the ability to make Genius Bar appointments online, Apple is blending the in-store experience with digital capabilities.

  • It offers a seamless customer experience

Whether you’re shopping online or in store, you know what an Apple product looks like, and Apple stores look like Apple products! Apple is so good at branding that their storefronts don’t even need a logo.

  • It knows that knowledge is power

You can’t buy an Apple product from an Apple Store while uninformed (believe me, I’ve tried). You basically have no choice but to try out products in store, and you benefit from the human element as the store specialists are armed with iPads and computers everywhere you turn. Apple makes sure you know how good the product you’re buying is!

To conclude: Don’t shy away from digital

It’s not just Apple who have adapted to the modern consumer – other brands are taking notice. At Yext’s Explorer conference, Jaguar Land Rover said it is using Land Rover experience days to sell Land Rovers, and that each Jaguar showroom is going to have a classic car for its customers to see and feel. Argos, Sainsbury’s, and Tesco are finally starting to think about in store experience as well!

If there’s one key takeaway from this post, it’s don’t treat digital commerce as an enemy – treat it as a support structure that helps you drive sales on all channels.


How to factor trust into your link building strategy (without ripping it up and starting again)

Back in April, Bill Slawski covered a patent apparently updating PageRank, the calculation Google originally used to rank its results (and still a major feature of the algorithm as far as we know). Bill notes that the update bears a strong resemblance to the TrustRank patent granted to Yahoo! in 2004… as did the previous continuation patent, granted in 2006.

The MozTrust metric did a great job of illustrating what Yahoo! tried to do with TrustRank:

We determine MozTrust by calculating link “distance” between a given page and a “seed” site — a specific, known trust source (website) on the Internet. Think of this like six degrees of separation: The closer you are linked to a trusted website, the more trust you have.

Using both MozTrust and Majestic’s equivalent TrustFlow metric to assess offsite factors we recently found that trust generally correlated better with ranking improvements than relevance – though we’ve had a strong indication of this for a while and it seems likely that Google has been using some variant of “TrustRank” to value pages for at least a decade (again, see Bill’s previous post). Moz’ Domain Authority metric is effectively a combination of MozRank (a proxy for PageRank) and MozTrust…so to use that metric to illustrate what seems to be happening: MozRank (authority) should be a slightly less significant aspect of Domain Authority, with MozTrust taking greater importance. We’ve always been more fond of the MozTrust metric than Domain Authority because it’s harder to game – we’re taking Moz’ interpretation of which sites are considered trustworthy, but the crawler can easily determine whether a site has links from those sites or not (and trusted sites are much less likely to be added to a disavow file, which skews Domain Authority massively).

The patent Bill wrote about, titled “Producing a ranking for pages using distances in a web-link graph”, references many of the central components of MozTrust and shows how the distance (i.e. number of linking “hops”) between a seed site and your website will be factored in when Google calculates search rankings. As with most patents it isn’t clear whether the exact techniques outlined are in use (or when they came into use), but there are strong indications that the “TrustRank” patent is, or should be.

Why using trusted “seed” sites makes sense

There’s a reason it’s called the web: linking chains aren’t linear and no two websites’ link profiles look the same.

How the link graph looks when trusted seed pages are separated from non-seed pages.

PageRank “flows” through paid and earned links equally. Anchor text also seems to be equally applicable to paid and earned links. It’s likely that devalued links – backlinks from sites Google has determined to be selling links or spammy in some other way – don’t pass less PageRank, and the effects of anchor text aren’t “switched off”; rather, Google simply decides not to trust those sites anymore.

The Beginner’s Guide to SEO encourages us to think about linking in the context of voting for sites we approve of: in this scenario nobody is prevented from casting their ballot but votes that have been paid for are thrown out for fraud and never counted. This will sometimes go undiscovered for months (even years), but once found out, that site’s vote will never be counted again…regardless of whether future votes are unbiased with money.

I’d strongly recommend considering (and writing down) your policies around building links on sites that you suspect of selling links – this will save you time in the long run. Consider adding some steps to your QA process:

  1. Use Bing’s linkfromdomain: search operator (e.g. linkfromdomain:moz.com) to see every website your link target already links to. Are there brands linked to that don’t seem to fit the theme of the website? A few quick site: searches will show you how natural those posts actually are.
  2. Read the 10 most recent posts at a minimum. You’re looking for anchor text that doesn’t seem like it should be there; mentions of brands where a competitor could easily be substituted (e.g. credit card providers like company name); and disclaimers (sponsored post, guest post, advertorial post, in collaboration/partnership with). Put the site on a list not to contact in future if any of these show up.
  3. Keep your do not contact list in a separate tab to your hitlist and use VLOOKUP to make sure you’re not contacting someone you previously decided not to (an equivalent check is sketched after this list). We’ve turned this into a Chrome extension to save time: when we visit a website we know if we’ve previously worked with a journalist or blogger using the site, who contacted them (and therefore who already has a relationship) and what the content was about – or if we’ve decided not to work with the site (or added it to a client’s disavow file).
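If you keep the hitlist and do-not-contact list as CSV exports rather than spreadsheet tabs, the same VLOOKUP-style check can be done in R. This is just a minimal sketch under that assumption – the file names and the domain column are illustrative, not part of the workflow described above:

# Hedged sketch: drop any prospect whose domain appears on the do-not-contact
# list (the same job the VLOOKUP in step 3 does). File names and the "domain"
# column are assumptions for illustration.
hitlist <- read.csv("hitlist.csv", stringsAsFactors = FALSE)
do_not_contact <- read.csv("do_not_contact.csv", stringsAsFactors = FALSE)

# Keep only prospects whose domain is not on the do-not-contact list
clean_hitlist <- hitlist[!(tolower(hitlist$domain) %in% tolower(do_not_contact$domain)), ]

write.csv(clean_hitlist, "hitlist_clean.csv", row.names = FALSE)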

If you’ve made contact with a site owner for the first time; they’ve shown interest in the brand and/or content you’d like them to talk about; but now they’re asking for payment…should you walk away? Not necessarily. This call should be made on a case by case basis because – assuming you followed a QA process to identify the site and it appeared natural enough in the first place – simply because you don’t want to waste time. Again, this is where a written process or policy comes in handy: your reply should say that it’s against your company’s policy to buy links (and therefore you couldn’t do it even if you wanted to) but also highlight Google’s policy on link selling. No brand ever wants to be outed for buying links since that often leads to manual actions; but no blogger wants to be outed for selling them either because that will ultimately destroy their source of income (that doesn’t mean your reply should sound threatening!). In my experience around 1 link in 5 gets placed for free where payment was initially asked for – I’ll walk away from the other four and place the site on a do not contact list.

Private Blog Networks (PBNs) are gaining in popularity once again because they work – but it’s no coincidence that most of the rhetoric still concerns how to keep them hidden. Between 2012 and 2014 it was pretty usual that links from a PBN to your website uncovered by Google’s Webspam team would result in a manual action or algorithmic penalty…it’s since become more likely that those links will just stop counting (become “devalued”) and your rankings will fall over time. That’s not to say that manual actions are unheard of in 2018 but it’s 100% clear that some brands are getting away with it.

Although Google has previously claimed that it doesn’t use data from disavow files uploaded to Google Search Console to determine rankings, it would provide a way to crowdsource large numbers of seed sites. From the patent’s abstract:

Generally, it is desirable to use a large number of seed pages to accommodate the different languages and a wide range of fields which are contained in the fast growing web contents. Unfortunately, this variation of PageRank requires solving the entire system for each seed separately. Hence, as the number of seed pages increases, the complexity of computation increases linearly, thereby limiting the number of seeds that can be practically used.

Through the disavow links tool Google has access to lists of websites that SEOs don’t think are trustworthy (often because they paid for links from those sites). As Marie Haynes pointed out last year, Penguin is not a machine learning algorithm – but it’s not beyond the realms of possibility that Google is using machine learning to determine which sites can be trusted from disavow file data.

How this changes our link acquisition tactics

Any updates Google makes to its algorithms shouldn’t affect the actual process of outreach: we’re still talking to humans and link building is about relationships and mutual benefit (and a completely different kind of trust).

What should change is how we choose our targets.

Links that benefit us most come from seed sites (or sites with links from seed sites). We obviously don’t really know which websites Google trusts but there’s a good chance that websites with a high MozTrust score are trusted by Google too – and they’re more likely to be linked to from trusted seed sites, even if they don’t fall into that category themselves.

Metrics aside, it should be pretty obvious which websites are the most trusted on the web – they tend to be universities and government organisations (and the SEO industry has been clear that .gov and .edu links are more valuable for a long time); plus national and international press and sites with extremely high readerships…that doesn’t mean that highly trafficked sites with contributors frequently offering to sell links (Forbes, Huffington Post etc.) are well trusted – they probably aren’t.

Other link targets that should definitely be at the top of your priority list are any publishing sites which compete with your own website in search results – tech companies absolutely want links from techcrunch.com, not just because they’re relevant, but because they’re clearly trusted enough to rank competitively for terms we also want to rank for. The “Producing a ranking for pages using distances in a web-link graph” patent also references the importance of diversity in the themes covered by seed sites – links from trusted sites outside of your industry or country are still likely to pass a significant amount of trust and therefore benefit your search rankings.

Ultimately we want to evidence the trust these seed sites have in our websites. Though there’s still a strong argument for a diverse link profile, we’ve found results have been best when we keep returning to the well and getting multiple links from a site we’re confident is trusted. This has always made sense from an efficiency point of view: if we’ve built a relationship with a journalist who writes for TechCrunch, for example, it’s more likely that she will continue to cover our brand/feature our content – it’s usually easier to get a second, third or fourth link from techcrunch.com than it is to get the first.

In reality there’s no way to know whether Google trusts a website – and how much. What’s clear now more than ever is that trust has a significant bearing on whether a site will rank or not. Considering this in our strategies using the information available has been paying dividends for a while.

I’d love to know what you think – is trust a core component in your link acquisition? Tweet me.


R has many uses, a particular highlight being its simple integration with various APIs, such as the Google Analytics API. These API integrations usually rely on packages, and integration with the Google AdWords API is no different. We have built our own R package for interacting with the AdWords API: adwordsR.

How does the adwordsR package work?

The current version of the adwordsR package (0.3.1) has three basic components: Google authentication via OAuth2, reporting, and SOAP requests.

  • Google Authentication

This effectively has two parts: generating the access token and refreshing it. All the user has to do is try to load their token – this will either generate the access token if it does not exist in their working directory, or load the existing access token.

Users must have their client ID, client secret, and AdWords developer token handy and enter them when prompted.

  • Reporting

This accesses the reporting service from the AdWords API. It works by building the AWQL query and then sending it to the API. The response is a CSV text file, which is cleaned into the requested format (a short sketch of what this looks like follows below).

  • SOAP Requests

This is for accessing all parts of the API that are not reporting. The package allows you to choose the service that you want to access and then builds an XML file for you. The XML file is sent to the API and an XML response is returned.

Unfortunately, the XML response is not cleaned on return; however, we highly recommend the XML package, as it can parse an XML file into a list. We plan to integrate XML parsing functionality into future versions.
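As a rough illustration of the reporting flow, here is what an AWQL query might look like before it is sent off. The wrapper call at the end is a hypothetical name used purely for illustration, not the package’s documented API – check the package documentation for the actual function:

library(adwordsR)

# Build the AWQL query that the reporting component sends to the AdWords API
awql_query <- paste(
  "SELECT CampaignName, Impressions, Clicks, Cost",
  "FROM CAMPAIGN_PERFORMANCE_REPORT",
  "DURING LAST_30_DAYS"
)

# report_data <- get_adwords_report(awql_query, clientCustomerId = "123-456-7890")  # hypothetical wrapper name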

A diagram of the workflow of retrieving Adwords data using adwordsR is found below.

Why did we build the adwordsR package?

Before the adwordsR package, we had raw R code in our processes and applications, which didn’t make the code particularly elegant, and we needed to copy the functions from elsewhere whenever we wanted to reuse them.

Yes, we could have stored the functions in a file and sourced that file, but this is basically how a package works – so why not build it properly?

The advantage of building the functions into a package is that we now have clean code that is much easier to maintain, which saves us a world of time when we want to upgrade any of our toolkit.

Will it make my life easier?

If you use R and Adwords, this package will make it much easier to get data from the API through a robust process, rather than just exporting files and data from the Adwords interface.

If you’ve already used the RAdwords package, the main points of interest are the parts of the AdWords API that are not easily accessible using R. You can easily get a list of all the client accounts that sit under your MCC account using the ManagedCustomerService, or search volumes using the TargetingIdeaService, as we see below.

Why did we not want to rely on the RAdwords package?

The current RAdwords package is great – we’ve learnt a great deal from it and incorporated a lot of its concepts into our own package. Unfortunately, though, RAdwords was lacking for our requirements: it did not offer the ability to interact with other sections of the AdWords API, such as the ManagedCustomerService, which was a necessity for us.

Our package aims to integrate multiple services within the Adwords API into one package. Currently, the package is limited to Reporting, the Managed Customer Service, and the Targeting Idea Service. However, we plan to integrate more services into the package.

Why did we not build on the RAdwords package instead of building an entirely new package?

Before building the package, we had our R code hardcoded into our processes. The sections of the code that did not rely on the RAdwords package were hardcoded and the functions that were built were declared within the session.

They did not come from the RAdwords package, and for ease of use we decided to build these into a package. We then decided to incorporate the workflow of the RAdwords package into our own package so that two packages did not have to be loaded within the process.

Taking this approach also gave us an opportunity to revamp both our existing adwordsR code and the RAdwords code. As such, we recognise the author of the RAdwords package, Johannes Burkhardt, as a large contributor to the adwordsR package, as this package would not exist without RAdwords.

Where can I access and install the package?

There are multiple places that you can get the package, and all it takes is a line of R code, depending on your preferred method.

I present two methods of acquiring the package here: CRAN and GitHub. All you need to do is copy and paste either of the following pieces of code and run it on your installed R version.

  1. CRAN:

install.packages("adwordsR")

  2. GitHub:

devtools::install_github("cran/adwordsR")

Alternatively, contact us for a tarball of the package.

Let us know how you’ve been using the adwordsR package!


What is SearchLeeds?

SearchLeeds is the biggest digital marketing conference in the north of England. This year more than 1,500 people joined us at Leeds first direct arena on the 14th June to watch 36 speakers deliver talks about paid and organic search, content marketing, data and digital.

What we learned this year

Shout out to some #SearchLeeds speakers…

We have 5 first time speakers and almost a third of our speakers have done less than 5 talks: you’re on the bill because 1. We know you’ll be absolutely awesome and 2. You wanted to talk about something that we think will blow our minds pic.twitter.com/e0rekBzfae

— Stephen Kenwright (@stekenwright) June 14, 2018

We’re hugely grateful to everyone who gave up their time to speak at SearchLeeds. We received nearly 4x as many speaking pitches as we had slots to fill (and we approached some of our eventual speakers ourselves).

5 speakers delivered their first talks at SearchLeeds this year – including Mayflex’s Luke Carthy, who stepped up 3 days before the event – and we want to make sure we get similar numbers of new speakers next year. A third of our speakers had delivered fewer than 5 talks – we put two of them on the main arena stage and they got some of the best feedback. So we know to programme the tracks on the strength of the talks rather than the experience of the speakers.

This year was the first we had specific themes for each stage: the Search Laboratory Stage returned yet again but exclusively as a paid media and data themed track; this year we also had the SISTRIX Technical SEO Stage. Last year the least attended talks were the PPC focused talks on the main arena stage; this year the PPC talks on the Search Laboratory stage were totally full, with people sitting on the floor (sorry!)

We did plan for an increased attendance (50% bigger third stage, 300% bigger second stage) but they were still both full. We’re looking at potentially taking even bigger spaces in the arena for next year.

It’s pretty clear to us now that the best pitches translate into being the best attended talks and the more detail speakers can give us, the more people end up in their sessions. We’re looking at additional labelling of talks next year so attendees know how advanced a session is and what they can expect to get out of it.

We recorded the talks on the main arena stage – we’re looking into the possibility of live streaming all the talks next year because even though we got attendees from 8 countries we know not everyone can make it.

The slides

Stage One – main arena

St. Ives Chief Digital Officer J Schwan

The Future Doesn’t Exist in Silos

Purna Virji – Bing

Intelligent Search and Intelligent Assistants: Exploring the AI-era of Search

Rob McGowan – Edit

Useless Projects: Where AI Meets Human Creativity

Kirsty Hulse – Manyminds

Content Marketing Tips That Won’t Break the Bank (Or Your Spirit)

Hannah Smith – Verve Search

What Happens When a Werewolf Bites a Goldfish?

Jon Myers – DeepCrawl

The Mobile-First Index: What, Why and Most Importantly When

Kristal Ireland – Virgin Trains East Coast

Will Robots Destroy Us All? Putting the Ethical Debate Back into the Narrative Around the Future of AI

Jasper Bell – AmazeRealise

Retailers – Stop Thinking Store, Start Thinking Story

Lexi Mills – Shift6

Advanced Integrated Influence Strategies and Tactics

Kelvin Newman – Rough Agenda / BrightonSEO

Three Practical (and Inventive) Ways of Pinching Keyword Insight from Your Competitors

Branded3 Strategy Director Stephen Kenwright

Customer-Centric Search: Serving People Better for Competitive Advantage

SISTRIX Technical SEO Stage

Bastian Grimm – Peak Ace

Super Speed Around the Globe

Gerry White – Just Eat

Past, Present and the Future of Mobile

Steve Chambers – Stickyeyes

How Not to F*ck Up a Migration

Dawn Anderson – Move It Marketing

Power from What Lies Beneath: The Iceberg Approach to SEO

Dave Freeman – Treatwell

Creating Knockout On-site Content by Simply Understanding Your Customers

Luke Carthy – Mayflex

How to Optimise the S*** out of Your On-site Search

Rachel Costello – DeepCrawl

Stop Confusing Search Engines with Conflicting Signals

Julia Logan – Irish Wonder

How to Audit Your Site for Security

Oliver Brett – Screaming Frog

Why SEO Wizards Need User Testing Hobbits

Craig Campbell

How to Fix the Most Common SEO Issues Using SEMrush

Search Laboratory Stage

Jill Quick – The Coloring In Department

Track Your Campaigns Like a Bloodhound: Making Your Marketing Work Harder

Branded3 Senior Insights & Analytics Queen Emma Barnes

Analytics Tracking: Or How I Learned to Stop Worrying and Love Google Tag Manager

Andraz Stalec – Red Orbit

5 False Assumptions About Your Traffic

Hannah McKie – Missguided

PLAs: Small Company or Large, Everyone Has to Start Somewhere

Chris Rowett – Journey Further

Supercharging Google Shopping

Angus Hamilton – Search Laboratory

Enterprise Attribution

Holly Ellwood – Receptional

What’s New in PPC

Matt Holmes – Distrelec

The International Paid Search Playbook

Anu Adegbola – MindSwan

AdWords Script Automation & Pitfalls

John Rowley – Ferrero

Creating a Data-Driven Customer Journey with Personas and Smarter Investment

Elizabeth Clark – Dream Agility

The Future of Shopping

Branded3 & Edit Media Directors Jon Greenhalgh & Sam Wright

How to Deliver Growth in the Most Efficient Way Possible

Please send any feedback about the event, positive or otherwise – we’re looking for suggestions to make next year even better.
Do you want to speak next year? Let us know. The more detail you can give us around your talk, the more likely people are to like it and the more likely we are to put you on stage!