
Something you may not have heard of, or may not even understand, is now possibly going to impact your web pages’ rankings in Google results in May! We’re doomed, I tell you.

What Are Core Web Vitals?

Google introduced its “Core Web Vitals” report last year, and later confirmed that Core Web Vitals will become ranking signals for search results in May 2021.

“What are Core Web Vitals?” is a question many online businesses may be asking, or “Isn’t it too late to do anything now?”.

The answers are:

  • They are part of Google’s user experience metrics, which use real-world user data (from Chrome and other sources) to measure whether a site provides a good user experience in terms of:
    • Page Load Time (Largest Contentful Paint) – How long does it take to load the largest element on the page?
    • Page Visual Stability (Cumulative Layout Shift) – Is the page stable as it loads, or does it shift/jump around?
    • Page Interactivity (First Input Delay) – How long does it take before a user can interact with the page (scroll, click, fill in forms, etc.)?
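If you want to check these numbers for yourself in the browser, Google publishes an open-source web-vitals JavaScript library. A minimal sketch (assuming you can add a small script to your pages and are using the library’s original getCLS/getFID/getLCP API) looks like this:

    // Logs each Core Web Vitals metric for the current visit to the console
    import { getCLS, getFID, getLCP } from 'web-vitals';

    getCLS(metric => console.log('CLS (visual stability):', metric.value));
    getFID(metric => console.log('FID (interactivity):', metric.value));
    getLCP(metric => console.log('LCP (load of largest element):', metric.value));

In practice you would send these values to your analytics platform rather than the console, but even this is enough to sanity-check what Search Console is reporting.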

While Google gives reports on both mobile and desktop, your focus should be mobile-first.

Do Core Web Vitals Impact Your Website?

The first thing you need to do is check whether your site is impacted by these metrics by having a look in the Google Search Console account for your website. (You don’t know what Google Search Console is? OK, maybe you need to do some reading before you tackle Core Web Vitals.)

The Core Web Vitals report was introduced into Search Console in May 2020, and there is no getting away from the fact that resolving many of the issues it raises requires technical knowledge or access to a developer: most Content Management Systems will let you address some of them, but the majority need developer support.

The only other option is to pay for one of the many new plugins appearing for systems like WordPress and Magento; however, using these without some background knowledge, or without checking that the plugin actually does what it claims, could cause problems.

So, first off, log into your website’s Google Search Console account and have a look at the Core Web Vitals report.

Follow the Google recommendations for non-technical and technical users below, but be aware that for non-technical users Google still adds a step about passing on the report to “the development team”, so some level of technical knowledge is still needed.

Steps To Address and Fix Core Web Vitals Issues

Non-technical users’ steps to address the Core Web Vitals Issues Report:

1. Focus on everything labelled “Poor” first, then prioritise the issues that affect the largest number of URLs, or your most important URLs. The pages the report assesses as “Poor” are the ones Google will impact after May.

2. Once sorted by priority, pass the report over to your web development team.

3. Common page fixes:

      a. Reduce page size – best practice is under 500KB for a page and all its resources (images, JavaScript, CSS, etc.). As we know, ‘best practice’ and the real world are two different things, so look to reduce image sizes without impacting visual quality, remove large JS or CSS files that are not used on a page, and order the way the page loads by focusing on ‘above the fold’ elements – the part of the page that shows in a browser without the need to scroll down.

      b. Limit the number of resources a page requests to no more than 50 for best mobile performance.

4. Test your fixes using PageSpeed Insights Testing Tool (or the Chrome Lighthouse tool, if you want to use an in-browser tool).

5. When you consider a page issue fixed, click “Start Tracking” on the issue details page in the Google Search Console Core Web Vitals report.

6. Track your validation process; you will likely have to go through it a few times before getting a passed state.

Website Developers' steps to address Core Web Vitals Issues Report

1. Prioritise issues/pages labelled “Poor” first and focus on mobile, since fixing for mobile will likely resolve most of the issues on desktop, and Google has been mobile-first for years now. If you manage to clear the “Poor” URLs, by all means start working on those labelled “Needs Improvement”, but it is the “Poor” pages that will be impacted most when these metrics go live in May.

2. Improving page load speed will resolve many of the flagged issues, so have a look at the fast load times guidelines on web.dev for the theory and techniques to improve page load speed. (The web.dev site is a go-to resource for performance tips as well as all things web development.)

3. Test your fixes using the PageSpeed Insights testing tool (or the Chrome Lighthouse tool, if you want to use an in-browser tool).

4. When you consider a page issue fixed, click “Start Tracking” on the issue details page in the Google Search Console Core Web Vitals report.

5. Track your validation process; you will likely have to go through it a few times before getting a passed state. The validation states are:

      a. Not Started: There are URLs with an instance of these issues that have never had a validation request.

      b. Started: You have begun a validation attempt, and no remaining instances of the issue have been found as yet.

      c. Looking good: You have started a validation attempt, and all issues checked so far have been fixed.

      d. Passed: All URLs are in a passed state. You must have clicked “Validate Fix” to get to this state; if issues disappear without your having requested validation, the state changes to N/A instead.

      e. N/A: Google found the issue was fixed on all URLs, even though no Validation attempt was started.

      f. Failed: One or more URLs are in a failed state after a validation attempt.

The reality for many eCommerce sites that use hosted platforms such as Shopify is that you are restricted in how many changes you can make, but even here developers can optimise images, add lazy loading and preloading, set width and height attributes for images and containers, check third-party code, and so on.
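To give a flavour of what those quick wins look like in the page source, here is a small illustrative HTML sketch (the file names are made up) that preloads an above-the-fold hero image, and lazy-loads a below-the-fold image with its dimensions reserved so the layout doesn’t shift:

    <!-- Preload the above-the-fold hero image so the largest element arrives early -->
    <link rel="preload" as="image" href="/images/hero.jpg">

    <!-- Below-the-fold image: lazy-loaded, with width/height set to prevent layout shift -->
    <img src="/images/product-grid.jpg" alt="Product grid"
         width="600" height="400" loading="lazy">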

So keep calm, access your Search Console account to see what issues exist, and work through the “Poor” rated pages one step at a time. The more you manage to fix, the smaller the potential impact will be in May. And remember, the more you do and the less your competitors do on this issue, the better placed your website will be.

Later this month we will look at some developer actions your team could try that have already worked for large eCommerce sites that have put in the time and effort to address this issue.

If you are interested in hearing more about SEO, do not hesitate to get in contact.

You can also follow us on Twitter and Facebook for the latest updates.

Google Analytics Goes Down – What Does This Mean For Us?

Many of you may have experienced outages with certain Google tools yesterday morning. Google Analytics, Tag Manager and Optimize are among the platforms that experienced issues, causing disruption for marketers. These outages were the result of a major problem with Google’s Cloud Platform, which was reportedly down for 1 hour and 39 minutes. Does this issue reflect a failure in the Google Analytics infrastructure? And what does this mean for us?

Although it was only down for about an hour and a half, it still had a big impact. Issues like this can cause deadlines to be missed, daily analysis to be interrupted and overall productivity to dip.

What Caused It?

It is still unclear what caused the disruption, as Google have stated that they are still carrying out internal investigations. The Google tools that were down include Analytics, Tag Manager and Optimize, which could point to a shared underlying flaw.

The main disruption appears to have been limited to a few tools being down. Let’s hope that this issue doesn’t cause any data gaps in reports, as we saw with the Search Console bug this April. Time will tell whether the tools being down caused any other disruptions.

What Can We Take From This?

As marketers, you’ll understand how frustrating it is when an outage like this occurs. We need to be able to access these tools in order to carry out daily tasks, and working in a fast-paced industry means we need reliable tools. Overall, the disruption didn’t last too long, so it shouldn’t impact us too much – unless any data has been lost. We’ll keep an eye out for any further updates.

Did you experience any issues with Google Analytics tools yesterday? Tweet us with your thoughts.

Stop Using Robots.txt Noindex By September

Let’s all get prepared ahead of time, ready for the 1st of September, when Google will officially stop supporting noindex in robots.txt files.
For the past 25 years, robots.txt files have been widely used across the web as an unofficial standard for making crawling easier to manage. Despite never being officially introduced as a web standard, Googlebot tends to follow robots.txt rules to decide whether to crawl and index a site’s pages or images, whether to follow links, and whether or not to show cached versions.

It’s important to note that robots.txt files can only be viewed as a guide and don’t completely block spiders from following requests. However, Google has announced that it plans to completely stop supporting the use of noindex in the robots.txt file. So, it’s time to adopt a new way of instructing robots not to index the pages you want kept out of search results.

Why is Google stopping support for noindex in robots.txt?

As previously mentioned, noindex in robots.txt isn’t considered an official directive. Despite being unofficially supported by Google for the past quarter of a century, it is often used incorrectly and has failed to work in 8% of cases. Google’s decision to standardise the protocol is another step towards tidying up how crawling is controlled. Their aim with this standardisation is to prepare for potential open source releases in the future, which won’t support unofficial rules such as noindex in robots.txt. Google has been advising for years that users should avoid relying on noindex in robots.txt, so this change, although a major one, doesn’t come as a big surprise to us.

What Other Ways Can I Control The Crawling Process?

To get prepared for the day Googlebot stops following noindex instructions in the robots.txt file, we must adapt our processes so we can keep controlling crawling as much as we possibly can. Google has provided a few alternative suggestions on their official blog. However, the two we recommend you use for noindexing are:
• Robots meta tags with ‘noindex’
• Disallow in robots.txt

Robots meta tags with ‘noindex’

The first option we’re going to explore is using noindex in robots meta tags. As a brief summary, a robots meta tag is a snippet of code placed in the <head> of a web page. This is the preferred option, as it is at least as effective as robots.txt noindex at stopping URLs from being indexed. Using noindex in a robots meta tag will still allow Googlebot to crawl your site, but it will prevent the URL from being stored in Google’s index.
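For reference, the tag itself is a single line in the page’s <head>; the follow value simply tells crawlers they may still follow the page’s links:

    <meta name="robots" content="noindex, follow">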

Disallow in robots.txt

The other method is to use disallow in robots.txt. This rule tells robots not to visit or crawl the disallowed URLs, which in most cases means they won’t be indexed (although a disallowed page that is linked to from elsewhere can still appear in the index without its content being crawled).

A disallow to exclude all robots from crawling the whole site should look like this:
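    # Applies to all robots and blocks the entire site
    User-agent: *
    Disallow: /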

A disallow to exclude one particular robot from crawling the whole site should look like this:
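    # "Googlebot" is just an example; substitute the user-agent you want to exclude
    User-agent: Googlebot
    Disallow: /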

A disallow for certain pages not to be crawled by any robot should look like this:
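    # The paths below are illustrative examples
    User-agent: *
    Disallow: /checkout.html
    Disallow: /thank-you.html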

To exclude just one folder from being crawled by all robots, a disallow should look like this:
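    # "/private/" is an illustrative folder name
    User-agent: *
    Disallow: /private/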

Important things to bear in mind

There are some important things to keep in mind when using robots.txt to request for pages not to be indexed:
• Robots have the ability to ignore your instructions in robots.txt. Malware robots, spammers and email address harvesters are more likely to ignore robots.txt, so it’s important to think about what you’re requesting to be noindexed and if it’s something which shouldn’t be viewed by all robots.
• Robots.txt files are not private, which means anyone can see which parts of your site you don’t want robots to crawl. So just remember: you should NOT be using disallow in robots.txt as a way to hide sensitive information.

And over to you

We’ve given you an overview of our two recommended alternative noindexing methods. It’s now up to you to implement a new method ahead of the 1st of September so that you’re prepared for Google to stop supporting noindex in robots.txt. If you have any questions, make sure to get in touch with us.

Sign up for our newsletter at the bottom of this page and follow us on Facebook and Twitter for the latest updates.

Google Search Console Data Outage

As digital marketing professionals, we rely heavily on Google Search Console to extract important information and help us better understand how websites are performing in Google. However, the recent data outage in Google Search Console, which resulted in a major loss of data, means we should be more concerned about the reliability of Google’s reporting channels.
It was the 5th of April when Google reported an indexing bug, which hit 4% of Google’s indexed pages. Four weeks on, the bug has finally been resolved. However, despite the issue being fixed, many users have noticed bad or missing data in their Search Console reports. Not only does this major indexing issue negatively impact our analysis, but it also raises an important question about the reliability of Google.

Ineffective April Reports

The data outage means that reports for April cannot be deemed accurate, which is extremely problematic for Google Search Console users. Without correct data for the majority of April, users are unable to fully determine whether any of their website pages were affected by the indexing bug, or whether any other major changes occurred.
Following on from this, users are unable to use their inaccurate April reports from Google Search Console to improve optimisation for their website. A major data loss like this sets marketing professionals back considerably. This data is significant for understanding the performance of websites in Google’s search results and is a major part of planning the optimisation process.

Can we really rely on Google?

An important question we should ask is: was the bug acting randomly or systematically? If the bug was systematically targeting certain sites, this raises the possibility that Google could be testing a new algorithm. The fact that the bug took a long time to resolve also calls into question the reliability of Google’s data channels. How can we fully trust a platform that is unable to resolve a bug more efficiently?
Though many marketing professionals rely on Google’s tools as a main source of data, the recent issues should lead users to question the reliability of Google’s software. The de-indexing bug highlights the importance of using a variety of channels, to ensure not only that you have enough data to work with and optimise, but also that, should Google encounter another bug, you have the traffic to minimise its impact in the future.

Sign up for our newsletter at the bottom of this page and follow us on Facebook and Twitter for the latest updates.

If you use Google Tag Manager, you’ll probably have noticed that there is now the capability to track how far down the page your visitors scroll. It’s been possible for a long time with custom JavaScript, but now it’s available out of the box and I can’t overstate how useful this is.

Tracking how far down a page your visitors scroll is a great metric as it lets you understand how engaged people are with your content. Are they reading all of your articles or are they bailing as soon as they read your intro? Is your landing page turning people off instantly, or is it just not the right time for them to buy? Scroll depth tracking will let you know.

But scroll depth tracking isn’t without its controversy: unless you change one specific setting, it changes the way Google Analytics reports on your bounce rate.

Since events are seen as interactions, a user who visits only one page but triggers the scroll depth event won’t be counted as a bounce. This becomes more of an issue when scroll depth triggers are set at low thresholds like 10% or 25%, since users with decent-sized screens will often trigger those as soon as they view the page, so even if they do bounce, they won’t be reported as such.
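For context, this is exactly the behaviour the “non-interaction hit” setting controls. Outside Tag Manager, the equivalent in a plain Universal Analytics (analytics.js) setup is the nonInteraction field on the event hit; the category and action names below are just illustrative:

    // A scroll depth event that will NOT affect bounce rate,
    // because the hit is flagged as non-interaction
    ga('send', 'event', {
      eventCategory: 'Scroll Depth',
      eventAction: '25%',
      nonInteraction: true
    });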

There are a couple of schools of thought on this.

The Case For Changing The Bounce Rate

Some argue that if a user has scrolled, they have interacted with the page, even if they’ve left afterwards. This is relevant for blogs and other content sites, such as recipe sites, where visitors typically land on an article, read it and leave. It’s not uncommon for these sites to have very high bounce rates because of this, so why wouldn’t you want to count a scroll as a proper visit? And what advertiser wouldn’t want to be on a site with a low bounce rate?

The Case Against Changing Your Bounce Rate

There are two key arguments against changing this. Firstly, you won’t have historic data to compare against, since scroll tracking only works from the moment you install it, so if you make improvements, you won’t be comparing like for like. Secondly, it’s down to accuracy.

As we’ve mentioned, there is the problem of larger screens triggering the scroll event automatically if enough of the page is visible on load. A lot of desktop screens will show a quarter or more of the page straight away, so even if those visitors bounce, they’ll be counted as interacting visits. This is where the case against changing your bounce rate comes in. Aside from the event triggering early, there are also pages whose whole job is to send people on to other pages.

Your homepage, for example, should be used to funnel people through your site. If visitors are scrolling but not doing anything else on a page such as this, they should still be counted as a bounce.

What Should You Do?

Before you change anything, you should consult your measurement plan and decide whether this is the right thing for you and your business’s KPIs. Do you want your bounce rate changed on certain pages, such as your blog? If so, set up different events for those pages and make sure your triggers are configured appropriately. If not, set the “Non-Interaction Hit” option in your tag to True, as shown below.

[Screenshot: Google Tag Manager tag configuration showing the “Non-Interaction Hit” setting]

If you’ve never heard of measurement plans or everything we're talking about today sounds new, swing by our digital analytics page to find out more about what we do in this field and get in touch to see how we can help your business understand its performance.

Sign up for our newsletter at the bottom of this page and follow us on Facebook and Twitter for the latest updates.

One of the most common questions I’m asked about Google Analytics is the difference between a segment and a filter and the main use case of each of them. I’m often asked why you would ever use a filter when a segment does the same job and vice versa.

In today’s post, I’m going to briefly run you through what segments and filters are, how they work and the reasons for using each of them.

What Is A Google Analytics Segment?

A segment in Google Analytics lets you view your metrics based on specific criteria, for example only organic or only paid traffic. Segments allow you to slice your data on the fly and use the whole of the Google Analytics interface focused on just that data and, crucially, they do not change your data the way a filter does.

A segment can be applied retroactively, so you can see how your organic performance was last year and so on, and you can also create your own segments based on certain specific conditions. You can even share those custom segments with other Google Analytics users.

You can apply a segment to your Google Analytics like so:

[Screenshot: the Audience Overview report showing the Add Segment option]

Click the Add Segment button and you’ll see the list of pre-configured ones. As you can see, there’s a lot to play with and with the ability to import new segments from the Google Analytics gallery and create your own, there’s plenty of flexibility there to investigate your data from a variety of perspectives.

Segments are great and an essential part of your Google Analytics arsenal, but they’re not without their weaknesses.

Weaknesses Of Segments

As handy as it is to be able to alter your data on the fly, there is inherently some lost functionality compared to filters. Firstly, there is less flexibility in what you can do with a segment than with a filter – you cannot exclude a specific IP address or series of IP addresses with a segment, for example.

They also have a habit of triggering sampling within Google Analytics, where the data shown in a report is based on a subset of sessions and so is less than 100% accurate. If your dataset is small, you should be OK, but segments do bring sampling on much sooner.

What Is A Google Analytics Filter?

A filter is applied to a Google Analytics view and permanently changes the way that the data is collected for that view, rather than changing the way it’s reported on the fly. Unlike a segment, a filter will not change your data retroactively.

Filters offer a great deal more functionality than segments – as well as replicating the capabilities of segments (which would be prudent if you have a high amount of traffic), you can also make sweeping changes to the way your data is collected, processed and reported. You can use a filter to rewrite the URLs in your page reports, for example, or to double-check the hostname, or simply to exclude a section of traffic which you know is not relevant (your own team, for example, or bots). You can also unleash the power of regular expressions to really take control of your data.

[Screenshot: a Google Analytics filter configuration]

Filters are a far more powerful solution than segments, but they don’t offer the same flexibility. You would use a filter for a specific task within a reporting view (excluding your own office’s traffic, for example), rather than using it to check the performance of a specific metric in most cases.

Weaknesses Of Filters

With the power of filters comes responsibility in their use. They permanently change the data in a view from the moment they’re applied to the moment you remove them. There’s no going back. They also can’t be applied retroactively in the same way a segment can. It’s this permanence, plus the additional Google Analytics knowledge required to set up a filter, that is their key weakness.

In line with best practice, you should always have a completely unfiltered “All Website Data” view, to ensure data continuity and to use for checking that your data is coming through properly. You should then have other filtered views depending on the kind of requirements your site has.

At the very least, we suggest having the All Website Data view and a view which filters out your own IP address and the IP address of any partner agencies/ other offices etc, although we would typically go much deeper than this with a Google Analytics setup.

When To Use Segments & Filters

A segment is the best way to isolate a certain metric, channel or device in your reporting view and apply that to your historic data. If you want to see how many people have come to your site over the last three years from Facebook on their tablets, a segment is the way to go.

If you need to permanently change the way your data is collected, such as excluding your IP address, removing bots, or rewriting your URLs so that they’re easier to read in reports, you’ll be looking for a filter.

The key thing to understand about filters vs segments is that there is really no “vs” at all. They’re different tools for different tasks and a good setup uses them together. For most reports, you’ll be relying on segments to isolate and highlight different metrics, but to ensure that your data is as clean as it can be, you’re going to need filters to be involved.

Unsure of how well your Google Analytics setup stands up to best practice? Get in touch through the contact form and let us see what we can do to help.

Follow us on Facebook and Twitter for the latest updates.

Meta robots tags are something that you’re almost inevitably going to come across if you work in SEO, but what are they, how do they work and how do they differ from the good old robots.txt? Let’s find out.

What Is A Meta Robots Tag?

A meta robots tag is a snippet of code that’s placed in the header of your web page that tells search engines and other crawlers what they should do with the page. Should they crawl it and add it to their index? Should they follow links on the page? Should they display your snippet in search results in a certain way? You can control all of these with meta robots tags and, while there may be a bit more development resource required in certain content management systems, they’re generally more effective than robots.txt in a lot of regards. I’ll talk more about that later.

Typically speaking, a robots tag would look like this in your HTML source.
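    <meta name="robots" content="noindex, follow">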

As you can see, it is made up of two elements: the naming of the meta tag (robots, in this case – meta tags have to declare their identity to work) and the directives invoked (the “content” – in this case, “noindex, follow”).

This is probably the most common meta robots tag that you’ll come across and use: the noindex directive tells search engines that, while they can crawl the page, they should not add it to their index. The other directive in the tag, “follow”, tells search engines that they should follow the links on the page. This is useful to know because even if the page isn’t in the search engine index, it won’t be a black hole in the flow of your site’s authority – any authority the page has to pass to others, either on your site or off it, will still be passed thanks to the “follow” directive.

If you wanted to completely void that page and not have any links on there followed, the tag would look like one of the following:
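    <meta name="robots" content="noindex, nofollow">

    <meta name="robots" content="none">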

By adding the “nofollow” directive, you are telling search engines not to index the page and also not to follow any links on that page, internal or external. The “none” directive is effectively the same as combining noindex and nofollow, but it’s not as commonly used. In general, we recommend “noindex, follow” if you need to noindex a page.

What Other Meta Robots Tags Are There?

Now we’ve covered the anatomy of the most common meta robots tag, let’s take a look at some of the others:

  • noimageindex: Tells the visiting crawler not to index any of the images on the page. Handy if you’ve got some proprietary images that you don’t want people finding on Google. Bear in mind that if someone links to an image, it can still be indexed.
  • noarchive: This tag tells search engines not to show a cached version of the page.
  • nosnippet: I genuinely can’t think of a viable use case for this one, but it stops search engines showing a snippet in the search results and stops them caching the page. If you can think of a reason to use this, ping me on Twitter @ben_johnston80.
  • noodp: This tag was used to stop search engines using your DMOZ directory listing instead of your meta title/description in search results. However, since DMOZ shut down last year, this tag has been deprecated. You might still see it in the wild, and there are some SEO plugins out there that still incorporate it for some reason, but just know that since the death of DMOZ, this tag does nothing.
  • noydir: Another one that isn’t really of any use, but that you’re likely to see in the wild and some SEO plugins still push through – the noydir tag tells search engines not to show the snippet from the Yahoo! Directory. No search engines other than Yahoo! use the Yahoo! Directory, and I’m not sure anyone has actually added their site to it since 2009, so it’s a genuinely useless tag.

When Should You Use Meta Robots Tags?

There are a number of reasons to use meta robots tags over robots.txt, but the main one is the ability to deploy them on a page-by-page basis and have them respected. They are typically more effective than robots.txt, which works best when it’s used on a folder-by-folder basis rather than a URL-by-URL one.

Essentially, if you need to exclude a specific page from the index, but want the links on that page to still be followed, or you have some images that you don’t want indexed but you still want the page’s content indexed, this is when you would use a meta robots tag. It’s an excellent, dynamic way of managing your site’s indexation (and there are loads of other things that you can do with them, but that’s another post).

But here’s the challenge: it’s really easy to add another line to your robots.txt file, but with some content management systems, it’s not that easy to add a meta tag to a specific page. Don’t worry, Google Tag Manager has you covered.

Adding Meta Robots Tags Through Google Tag Manager

If you have Google Tag Manager installed on your site to handle your tracking (and, seriously, why wouldn’t you?), you can use it to inject your meta robots tags on a page by page basis, thus eliminating the development overhead. Here’s how.

Firstly, create a new Custom HTML tag. A minimal script along the lines of the sketch below will inject the meta robots tag into the page’s <head> (YOURDIRECTIVE1 and YOURDIRECTIVE2 are placeholders):

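    <script>
      // Build a meta robots tag and add it to the <head> of the current page
      var meta = document.createElement('meta');
      meta.name = 'robots';
      meta.content = 'YOURDIRECTIVE1, YOURDIRECTIVE2';
      document.getElementsByTagName('head')[0].appendChild(meta);
    </script>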

Replace YOURDIRECTIVE1 and YOURDIRECTIVE2 with what you want the tag to do (noindex, follow, for example); if you only need one of the directives, that’s fine too. The screenshot below shows how this looks.

[Screenshot: the Custom HTML tag containing the meta robots script in Google Tag Manager]

Now create a trigger and set it to only fire on the pages you want the meta robots tag to apply to, as seen below.

[Screenshot: a Google Tag Manager trigger set to fire only on the pages you want to noindex]

And there you go, that’s how you can inject your meta robots tags through Google Tag Manager. Pretty handy, right?

And We’re Done

Hopefully today’s post has given you a better understanding of what meta robots tags are, what you’d use them for and how to use them. Any questions or comments, drop me a Tweet or send us a message through our contact form.

On the 29th of June, we hosted our first in-house event, Google Analytics Best Practices, led by our Head of Data Analytics, Ben Johnston. We welcomed our clients, partners and digital marketing professionals for a morning event with breakfast and networking. During the event, we covered Google Analytics basics, view filtering, goal tracking, event tracking, the difference between goal tracking and event tracking, common issues and measurement planning. If you missed this event, check out the presentation below.

Google Analytics Best Practices from ESV Digital

Don’t miss out on our next events: sign up for our newsletter at the bottom of this page and follow us on Facebook and Twitter for the latest updates.

Measuring and optimising the performance of their investments is the goal of every marketer. To do this, a new concept has emerged: algorithmic attribution.

What is attribution?

Attribution allows you to understand the contribution of each touchpoint in a user’s conversion path (purchase, lead, reservation, etc.), in order to measure its impact and performance.

In practice, attribution builds a model by assigning a weight to each interaction between a prospect or customer and a brand. These interactions can be online (SEO, SEA, display, social, etc.) or offline (television, call centres, physical stores). The assessment therefore goes beyond the reductive (and inaccurate) practice of assigning conversions to the last click of the conversion path. Attribution models are used to measure the profitability of each interaction and maximise the advertiser’s return on investment (ROI). Attribution is a tool appropriate for all advertisers, whether they have small or big budgets and whether they are pure players or traditional actors, and for any type of industry: insurance, travel, e-retail, humanitarian associations and so on.

What is attribution for?

In practice, attribution provides answers to many questions:

  • What is the true performance of my budgets and campaigns?
  • I have an extra budget, where do I invest it to boost my sales?
  • I have to reduce my investments by $10,000, where do I have to disinvest?
  • What is the impact of my online investments on the offline and vice versa?
  • How can I avoid duplicating the same sale to different channels?
  • What is the actual impact of display ads on my performance?
  • How can I visualise and understand the synergies between channels, and how do I exploit them?

How does algorithmic attribution work?

The goal of attribution is to attribute a share of the conversion to the channels involved in the path, according to different rules. To do this, two main categories of models have emerged: "ad hoc" models and algorithmic models.

Ad hoc models are user-defined models, built on assumptions that reflect the designer’s understanding of customer dynamics. For example, an ad hoc model may consider that a click on a product page should weigh more than a click on the home page. These assumptions often reflect the intrinsic biases of the designer’s opinions more than the reality of the customer journey.
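To make the distinction concrete, here is a deliberately simple JavaScript sketch of an ad hoc model of the kind described above: the weights are hand-picked (and so embody the designer’s assumptions) rather than learned from data, which is precisely what an algorithmic model does differently.

    // Hand-picked weights: the designer has decided product page clicks matter most
    var adHocWeights = { product_page: 3, home_page: 1, display_view: 0.5 };

    // Split one conversion's credit across the touchpoints in its path
    function attribute(path) {
      var total = path.reduce(function (sum, t) { return sum + (adHocWeights[t] || 1); }, 0);
      return path.map(function (t) {
        return { touchpoint: t, credit: (adHocWeights[t] || 1) / total };
      });
    }

    // Example path: display ad view, then home page, then product page, then conversion
    console.log(attribute(['display_view', 'home_page', 'product_page']));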

Conversely, algorithmic models abstain from any a priori hypotheses and let the computer use all available data to find the model that best represents the customer path. Although much more complex to implement, algorithmic attribution provides the best assessment of the weight of each step in the customer journey. It is important to pay close attention to how these algorithms work, and to understand the variables on which they are based, to ensure that they evolve with the media mix and over time. Certain metrics and characteristics are essential and must be present as foundations of the algorithm:

  • Consider all channels.
  • Take into account the nature of the point of contact (click, impression or viewable impression).
  • Take into account on-site user engagement.
  • Take into account how soon after the interaction a conversion is gained.
  • Adapt according to the evolution of the media mix.

Integrate as much data as possible with the model: Why? How?

Algorithmic attribution does not only apply to digital channels. If you are able to recover structured and comprehensive offline data, you will be able to measure the performance of your media mix as a whole. The attribution model will then combine the online and offline data and give you a holistic view of your different channels, as well as the synergies between them.

In particular, granular integration of offline media investments will make it possible to measure their return on investment (ROI). Similarly, cross-device reconciliation or the integration of your CRM data will allow for a more precise understanding of the customer paths that lead to a purchase. It is therefore worth maximising the use of connectors in order to automatically integrate all this data within the same platform. Once the data is collected, it is then possible to measure, understand and improve your media mix as a whole. As a result, you will be able to control, via a single tool, all the channels you use.

What are the key success factors?

When starting an attribution project, several key success factors need particular attention. First, the quality and completeness of the data used, so that the model reflects the real impact of each marketing channel as closely as possible: using false or partial data creates an inaccurate model. Second, you should measure your return on investment by analysing the lifetime value of your customers rather than a discrete event, such as a single purchase, which only partially reflects your customers’ interaction with your brand.

Finally, bear in mind that algorithmic attribution is a technically complex subject. Choosing the technology that suits your needs, and being guided, temporarily or over time, by subject experts, is an investment that will pay off over time.

For more information on how you can incorporate multichannel attribution into your marketing strategy, tweet at us @ESV_Digital_UK or follow us on LinkedIn.