Technical SEO is one of the most important things to consider as part of any SEO campaign. Whether you’re a seasoned pro or just starting out in SEO, you need to get the basics right; otherwise you’ll spend a lot of time (and money!) on content and links and you’ll always be hindered by a poor site.
If you’re not sure what things to consider or how to check for certain issues, then look no further – our guide on how to complete a technical SEO audit will answer your questions.
Our technical SEO audit will cover the following core areas:
Within some sections there’ll be sub-sections for you to consider.
So – let’s get on with the audit!
There are some manual checks involved in a technical audit, but to help you along the way and speed things up a bit, we recommend using the following tools – we'll touch on each one in more detail as we come to use it:
If you've got Screaming Frog installed, then simply open it and paste in your domain. If not, download it from here (if your site has more than 500 URLs, you'll need to buy a licence to crawl the full site).
If you’ve not used Screaming Frog before, then check out their full user guide which explains how to do each task.
If you’ve used it before, then bookmark it as it will come in handy in future.
As a side note – Seer Interactive have this epic guide on how to do almost anything in Screaming Frog. Whether you've never used it before or have been using it for years, it's really, really useful.
Paste your URL into the address bar at the top, and click "start". Depending on the size of your site, it will take anything from a few minutes to several hours. Once it's finished – save your crawl! You'll thank us for this one – it saves you having to re-run the crawl and wait for it all over again.
Once the crawl has completed, there are several things to export from SF depending on the size of your site and what you’re hoping to achieve, but at a top level we’d recommend:
Remember to keep your crawl open for the duration of your audit, as you’ll need to export other things along the way.
Ideally, we want to find major issues with our site that may be hindering performance – this may come in several forms, but what we want to know is that the users and search engines can find our content easily on any device.
There are several things that can hinder a site’s performance in the SERPs, so let’s dive in and get the basics right.
So, a site can resolve (load) at the following URLs:
That’s four different versions of the same site. To avoid confusion for users and search engines, we only want one instance of the site indexed, so we need to pick ONE preferred version, and ensure that there are proper redirects in place.
To check if you have this issue, simply try your website domain with and without the different variations. You *should* see the non-www version of the domain redirecting to the www version (or vice versa, depending on how you've set things up). The same goes for http:// to https:// – simply try both versions of your URL and see what happens.
If there’s no redirect in place, then there’s no need to panic – it’s an easy fix!
How to Fix This Issue
So, if you've got duplicate versions of your site – first, check if there are canonical tags in place. A canonical tag tells search engines which is the correct version of your page to crawl/index. To check this, we can use the exports from Screaming Frog (under the 'directives' heading on your 'internal-all' export, or in the 'directives' export itself), or simply check the source of the page (Ctrl + U) and search for 'canonical'. It should look like this:
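For reference, the tag is a single line in the <head> of the page – here's a generic example (the URL is just a placeholder for your preferred version):

<link rel="canonical" href="https://www.yoursite.com/example-page/" />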
If there’s a canonical tag in place, then although it’s not ideal to have several different versions, if all your canonical tags point to the preferred URL then there shouldn’t be an issue.
The best way to solve this permanently is to redirect everything to the preferred version – so, if for example the preferred URL is https://www.yoursite.com, we need a redirect that pushes everything that doesn't contain www. to the www. URL, and any http:// visits to the https:// version of your site.
If your site is in WordPress, it normally does a pretty good job of handling these things. Otherwise, ask your web developer to redirect them for you. If you’d like to handle them yourself, then use the code from these URLs to force the www. URL and the https:// version:
Add these to your htaccess file and this will sort the redirects for you.
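If you'd like a head start, below is a commonly used pattern (assuming an Apache server with mod_rewrite enabled) that forces both the https:// and the www. version in a single 301 redirect – swap in your own domain and test it on a staging copy before going live:

# Force HTTPS and the www. version of the domain
RewriteEngine On
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteCond %{HTTP_HOST} ^(?:www\.)?(.+)$ [NC]
RewriteRule ^ https://www.%1%{REQUEST_URI} [R=301,L]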
A robots.txt file is a guide for search engine crawlers to follow (or avoid should you wish). The main things we want to ensure are:
If you want your site to be seen by search engines, you do not want your robots.txt file to look like this:
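In plain text, that "block everything" file is just two lines:

User-agent: *
Disallow: /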
This directive tells ALL web spiders not to crawl your site. If this is how your robots.txt file looks and you want your site to be indexed, then you need to delete the "/" ASAP.
To test this, go to https://www.yoursite.com/robots.txt and see what it looks like. In most cases, it is usually fine and there’s nothing to worry about, but it’s worth a check.
Also, check that your XML sitemap location is added to your robots.txt file, so it looks like the below:
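Assuming your sitemap sits at the default /sitemap.xml location, a minimal robots.txt with the sitemap reference looks like this:

User-agent: *
Disallow:

Sitemap: https://www.yoursite.com/sitemap.xml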
If you want to block a certain file type or path, simply add them to your robots.txt file as pictured below:
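For example, a couple of hypothetical rules – blocking an /admin/ folder and all PDF files – would look like this (Google supports the * and $ wildcards):

User-agent: *
Disallow: /admin/
Disallow: /*.pdf$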
Google has a full list of robots.txt directives here, and a list of rules you can apply to your robots.txt file.
If you want to test that you have your robots.txt file set up correctly, use Google's robots.txt tester – if there are any errors, they'll be flagged in the testing tool:
An XML sitemap is a list of the pages on your website that helps search engine spiders crawl and index your content.
Whilst it isn’t an absolute necessity to have one – it’s definitely worthwhile. Google’s own description of sitemaps is as follows:
“A sitemap is a file where you provide information about the pages, videos, and other files on your site, and the relationships between them. Search engines like Google read this file to more intelligently crawl your site.”
Using a sitemap doesn't guarantee that all the items in it will be crawled and indexed, as Google's processes rely on complex algorithms to schedule crawling. However, in most cases your site will benefit from having a sitemap, and you'll never be penalised for having one.
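For reference, a sitemap is simply an XML file listing your URLs – here's a minimal example with placeholder pages (the optional lastmod tag tells search engines when a page last changed):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.yoursite.com/</loc>
    <lastmod>2019-06-01</lastmod>
  </url>
  <url>
    <loc>https://www.yoursite.com/services/</loc>
  </url>
</urlset>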
If you’re not sure if you have an XML sitemap – ask your web developer. In most cases, it will reside at /sitemap.xml, however it does depend on the filename of your sitemap.
If you need to create one, you can do this in Screaming Frog by clicking “Sitemaps” and “create XML sitemap”
This will export an XML sitemap for you to upload to the root of your site.
Once you’ve got an XML sitemap, submit it to Google Search Console (click to jump to that section). This will help flag any errors with your sitemaps, which you can then check out in more detail. When you’ve submitted your sitemap, you’ll see a screen similar to the below:
If there’s an error with a particular sitemap, just click on the corresponding sitemap and it will explain what the issue is:
Once you’ve created and submitted your sitemap, just check regularly for any errors in GSC. To find out more about sitemaps, check out Google’s sitemap resource section.
A site: search will give us an indication of how many pages Google has indexed from your site.
Simply type “site:yourdomain.com” (don’t include the https://www), as pictured below:
As you scroll down Google's results, you can see if there are any pages indexed that we don't want or that shouldn't be there.
The number of results should be close to the number of pages on your site (it will never be exact), but if it's out by, say, thousands of URLs, then there's a problem somewhere.
One of the best ways to get your content seen by Google is to include it in your XML sitemap and submit it to Google Search Console. Log into GSC and, under "Index", click 'Sitemaps':
At the top of the page you’ll see a field that looks like the below, where you can add your own sitemap – simply enter the URL of your sitemap and click ‘submit’ – it’s as easy as that!
Once your sitemap has been submitted, if there are any issues, Google will flag them here.
How you pass users through your site can have an impact on your site’s rankings, so it is important that this is done right.
Earlier on, we said export the “response codes” tab from Screaming Frog – you’ll need this now. Here’s a quick overview of some of the most common response codes you’ll see:
You may sometimes see a 403 or 502 code – in these instances the solution is to fix the broken page or redirect to the most relevant page.
Ideally, if a page is no longer valid (such as a discontinued service or product, or if you've renamed a page), rather than the page simply not working, we want to point users to another relevant part of the site.
The way to do this is through a 301 (permanent) redirect – this tells search engines that the address of that page has permanently changed to a new address. These will show in the SF export as “301” in the ‘status codes’ column on your export:
What we're looking for are any pages that redirect from A -> B -> C – this is known as a redirect chain. Ideally, a URL should redirect as follows:
https://www.yoursite.com/old-url/ to https://www.yoursite.com/new-url/. If there’s an instance where it redirects from /old-url/ to /new-url/ to /new-url-2/ this is a redirect chain, and really we want to remove the middle redirect, and instead redirect from the first URL to the last URL.
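As a rough sketch (assuming an Apache server and the example URLs above), fixing the chain is simply a case of pointing every old address straight at the final destination:

# Before – a two-hop chain: /old-url/ -> /new-url/ -> /new-url-2/
Redirect 301 /old-url/ /new-url/
Redirect 301 /new-url/ /new-url-2/

# After – both old addresses go straight to the final URL in one hop
Redirect 301 /old-url/ /new-url-2/
Redirect 301 /new-url/ /new-url-2/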
In instances where there’s a 302 redirect; these will show as ‘302’ in the status codes column:
There may be times when a 302 redirect is useful, such as if you're redirecting users to a seasonal page (a summer or Christmas page, for example). However, if the redirect is to remain in place permanently, it should be updated to a 301 rather than a 302.
Copy all of the URLs that are returning a 302 response code, create a new sheet in your Excel audit, and paste them in. Rename the sheet '302 response codes' – that way, if you're passing the audit to someone else to implement the recommendations, they'll have a list of everything in one handy file.
The majority of searches now take place on mobile devices – a share that is only going to increase – and mobile friendliness is a ranking factor in Google's results. If your content doesn't render properly or takes a long time to load on a mobile device, you're less likely to outrank a competitor that serves mobile content correctly.
Head over to Google's mobile friendly testing tool: https://search.google.com/test/mobile-friendly , and enter your site's URL to run the test and see whether your site is mobile friendly or not:
If your site isn’t mobile friendly, you’ll be met with something similar to the below highlighting the errors:
The most common issues are:
These issues mean that your site either isn't responsive or doesn't serve content correctly, or that users will find it difficult to read text or tap elements on your site. An example of this can be seen below:
To fix the viewport & the content wider than the screen issues, we can kill three birds with one stone, simply by adding the following code to your site’s header:
<meta name="viewport" content="width=device-width, initial-scale=1">
This should also help with the 'text too small to read' issue. It's also advisable to use a standard font, and Google has a few guidelines to ensure your text can be read properly:
Sometimes, some sites serve content to users on different devices at different URLs, such as:
Img source: https://developers.google.com
If this is the case on the site you’re auditing, the main thing we need to look for is annotations that tell search engines that the same site exists at another URL. What this looks like is:
On the desktop page (e.g. http://www.example.com/page-1), a tag pointing to the mobile version:
<link rel="alternate" media="only screen and (max-width: 640px)" href="http://m.example.com/page-1">
And on the corresponding mobile page (http://m.example.com/page-1), a canonical tag pointing back to the desktop version:
<link rel="canonical" href="http://www.example.com/page-1">
These tags need to exist on a page to page basis, so for every page on the site, there needs to be the corresponding tag on the mobile version of the site, rather than just one at a domain level.
This tells search engines that the same content is being served at another URL, but it is part of the same site, rather than two separate sites.
The alternative to this is to ensure that your site is responsive. Google has no preference as to which setup you have, as long as it is accessible to users and all Googlebot user-agents.
Page speed is one of Google’s confirmed ranking factors – particularly on mobile devices.
No one wants to wait forever for a site to load, so…
If your site takes a long time to load, then it’s likely that a user will click the back button and go to another site, so it’s important to get the page speed sorted on your site.
To test your site’s page speed, visit either of these two:
They will both usually give similar recommendations to improve your page speed, but it’s good to test in both as you’ll get confirmation of which areas to focus on.
To test your page speed, simply paste your URL into both tools and press “analyse”
For the purpose of this post, we’ve picked a site that we know needs a bit of improvement page speed-wise:
The suggested improvements are:
With most sites, it's usually images that are slowing things down. The best thing you can do is compress your images so that they load quickly – if you've got Photoshop, then make sure you use "Save for Web and Devices" when saving any images.
If your site already has a lot of images and you're using WordPress, there are plugins such as Smush It that compress images for you, meaning you can serve properly scaled images to your users.
The main considerations for most page speed tools are:
Sometimes, depending on your site’s theme/layout, there will be recommendations to remove JS code from above the fold of your page – more information on how to do this can be found in Google’s developer insight guidelines.
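As a simple illustration (the filename below is just a placeholder), marking a script as deferred stops it blocking the rendering of the content above the fold – just check your scripts don't rely on running immediately before making the change:

<!-- Render-blocking: the browser pauses parsing to download and run this -->
<script src="/js/main.js"></script>

<!-- Deferred: downloaded in parallel and executed once the HTML has been parsed -->
<script src="/js/main.js" defer></script>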
International SEO is essentially ensuring your site is effectively targeting other countries (and languages where appropriate).
Unfortunately, if you get it wrong it can have a significant negative impact on your site… but fear not, as we've got you covered here.
The main thing is to ensure that your site is set up to effectively target other countries – especially if you are an international company. An example could be a retailer that sells clothes to the UK, US, Australia, Spain, France and so on.
With this in mind, there are a few things to consider in how you approach this:
You could also look at redirecting users based on their IP/location; however, Google has advised against this in the past, so it's best to ensure that your site is set up to serve international users correctly.
So, with those above considerations in mind, here’s how you go about it.
Either way, the first thing to do is check that they’re added to Google Search Console, and set to target the correct location – you’ll need to use the old version of GSC to do this:
Check that a country is set (sometimes this is missed during the initial GSC setup), and if there isn’t one defined, simply click the tick box and select your target country.
A ccTLD (country-code top-level domain) is a great signal to search engines that you want to target a specific country – for example, www.yoursite.it makes it obvious that you're targeting users in Italy. However, it also means that no link equity is passed from your main domain, unlike a subfolder such as www.yoursite.com/it/.
Check what solution your site has in place (if any), and ensure that international targeting is set up in GSC.
While having translated content is a great signal to search engines as to where you want to target, having poorly translated content is a big no-no, and can do you more harm than good.
Make sure your content is translated by someone who speaks the target language fluently or natively – machine-based translations or relying solely on Google Translate is a recipe for disaster.
A hreflang tag is a way of telling search engines that another version of your site exists in another language, or in the same language targeting a different country (for example, you may have an English site that targets England, Ireland, USA and Australia).
Without the tags, the pages are effectively duplicate content, which could stop your sites ranking as highly as they could/should. Check out Google's advice on hreflang tags here.
Check the source of the site you’re auditing, and see if there’s code similar to the below:
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/en-gb/" />
<link rel="alternate" hreflang="en-us" href="https://www.example.com/en-us/" />
<link rel="alternate" hreflang="en" href="https://www.example.com/en/" />
<link rel="alternate" hreflang="de" href="https://www.example.com/de/" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/" />
Important: as well as telling search engines that there are several versions of your page, each page also needs a 'self-referencing' hreflang tag that points back to itself.
The Screaming Frog crawl should find all pages that have hreflang tags – when you've crawled the site, click the "hreflang" tab:
Img source: Screaming Frog
This will give you a list of URLs along with any issues with the hreflang tags, such as tags that are set up incorrectly or that don't contain a self-referencing tag.
If there are no tags in place, but you have an international site, you’ll need to create tags for each site.
Aleyda Solis has covered this in her blog post about how to avoid issues with hreflang tags, and as an additional bonus she’s created a fantastic, easy to use hreflang generator that creates the tags for you.
If you want to check the tags on your own site, visit https://hreflangchecker.com/#/
Important: sometimes, hreflang tags can be found in the XML sitemap instead, so if you don't see the tags in the HTML of the page, don't automatically assume that none are in place.
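In a sitemap, the annotations use the xhtml:link element instead – below is an example with placeholder URLs; note that each <url> entry has to repeat the full set of alternates, including itself:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://www.example.com/en-gb/</loc>
    <xhtml:link rel="alternate" hreflang="en-gb" href="https://www.example.com/en-gb/" />
    <xhtml:link rel="alternate" hreflang="en-us" href="https://www.example.com/en-us/" />
  </url>
  <url>
    <loc>https://www.example.com/en-us/</loc>
    <xhtml:link rel="alternate" hreflang="en-gb" href="https://www.example.com/en-gb/" />
    <xhtml:link rel="alternate" hreflang="en-us" href="https://www.example.com/en-us/" />
  </url>
</urlset>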
The content on your site is one of the most important factors in where your site ranks within a search engine's results. Does it match users' queries? Is it factually correct? Is it unique to your site? Is it up to date? All of these things play a part, so it's vital that your content is as optimised as it can be.
Duplicate content is when the text on a page on your site has the exact same content either somewhere else on your own site, or on an external website. Search engines reward great, unique content; so if your content is duplicated across several pages of your own site, or copied from somewhere else, you’re not going to rank well.
There is no actual penalty for having duplicate content, but Google will crawl your site less frequently and devalue it compared with another site that has regular, informative, unique content.
To check if you have duplicate content on your own site, or if someone else is using your content, you can use Siteliner or Copyscape – both offer free and premium versions. However, one of the simplest ways is to take a few lines of text from one of your pages and paste them into a Google search with quotation marks before and after the text – for example, copying everything below including the quotes:
“The use of mobile devices to surf the web is growing at an astronomical pace”
In this example, there are several thousand results all using the same text. If someone has duplicated your text, you can email the site owner to ask them to change it, but chances are you won't get a reply. Instead, it might be worth rewriting your copy slightly so that it's unique to your site.
If you have duplicate content across several pages of your own site, then make a list of the affected pages, and create unique content for each page, removing all instances of duplicate content.
The page title is one of the most important on-page SEO elements, as it tells users and search engines what your page is about, and helps your site stand out in the SERPs.
In Screaming Frog, click the 'page titles' tab and filter by "Missing" – this will show any pages that have no title tag. Export this list and add it to a new tab in your Excel doc, so you have all the issues in one place.
For all pages that should have a tag, create a new page title that meets the following requirements:
A duplicate title tag is confusing to search engines – if you have several pages with the same title tag, how is a search engine spider meant to know which is the correct page to rank/index within its results?
In Screaming Frog, click the ‘page titles’ tab, and filter by “Duplicate” – export these and add to a new tab in your audit file.
Following the same guidelines as above, create a unique title for each page that has a duplicate title tag.
Whilst not a huge issue, page titles can display up to around 65 characters in the SERPs, so if your titles aren't utilising this space, you could be adding extra keywords to help your pages rank higher. In Screaming Frog, click the "Page titles" tab and filter by "Below 30 characters" – this gives you a list of all the page titles that could potentially be expanded on.
An h1 tag is an HTML tag that describes what is on the page, and usually acts as the main heading on the page.
Search engine spiders use the page title, headings and content on the page to understand what the page is about, so it’s important to have an optimised h1, however sometimes depending on your site’s theme, h1s can be missing completely.
Using Screaming Frog, click the h1 tab, and filter by “missing” (see below)
This will give you a list of all pages missing an h1 tag. Export these and add them to your spreadsheet so you have a list of all pages that need new h1s.
Generally, a page should only have one h1 tag, but can have as many h2s, h3s and so on as you'd like. When there's more than one h1 on a page, they can sometimes be ignored by search engines, so you're wasting a great opportunity to send an extra signal – see the example below.
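As a quick illustration (a hypothetical category page), a sensible heading structure has a single h1 with h2s and h3s nested beneath it:

<h1>Men's Running Shoes</h1> <!-- one h1: the main topic of the page -->
<h2>Road Running Shoes</h2> <!-- as many h2s as the content needs -->
<h2>Trail Running Shoes</h2>
<h3>Waterproof Trail Shoes</h3> <!-- h3s sit beneath their parent h2 -->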
Use Screaming Frog again, and instead filter by “Multiple”, this will show you all the pages that have multiple h1 tags. Add these to your missing tab, and you’ll have a list of all pages that need h1s redoing.
The meta description (pictured below) is the messaging you see in the search results below the page title and URL:
While not a ranking factor, the right messaging can help improve click through rate (CTR), and therefore they should be addressed alongside your on page content.
Within the Screaming Frog exports, we're looking for the following:
Using Screaming Frog, click the “meta description” tab, and as before, filter for each of the description types:
Export the missing, duplicate and below-70-character lists and save them to a new tab in your spreadsheet – all of these will need new meta descriptions creating.
Guidelines for Creating Meta Descriptions:
A canonical tag is a line of code that tells search engines that there’s a specific version of a page or URL that we want them to index. As mentioned earlier in the audit, a page can resolve at several URLs:
When this happens, we need to tell search engines which page to index, so we need canonical tags in place. Below is a live example from Thomas Cook’s website:
We're looking for any pages that are either missing a canonical tag or are pointing to a non-indexable page. Export the 'missing' and 'non-indexable canonical' tabs and add them to your spreadsheet. For any page that doesn't have a canonical, ensure one is put in place – if there isn't a separate preferred page to point to, simply add a self-referencing canonical (the page pointing to itself).
Any non-indexable ones (such as where the canonical page is returning a 404 error) should be updated so that they point to live, indexable pages.
Search Console is a free tool from Google that helps track the performance of your website within Google's results. It can be used to highlight technical issues, show which terms people are finding your site with, and flag any other problems Google is having with your website.
Firstly, check if you have access to your site’s search console account – if you’re working for a client, ask for full access so you can do your checks effectively.
Search console has recently been updated, with some features having been removed, and others set to follow in future, so for the purposes of this review, we’re focusing on the latest version of GSC.
When you first login, you will see a page that looks similar to the below:
Select your site from the top left of the screen, and you’ll see a menu that looks like the below:
We’ll run through these in more detail.
The performance section highlights how your site is doing in Google’s results in terms of search queries, clicks, impressions, CTR and average position:
Simply click on each of the different headings highlighted, and you'll see which pages are performing best and which queries are driving the most traffic to your site, along with which countries your users are coming from and which devices they're using.
From this data, you can see which terms people are searching for, and where you potentially need to improve – if there’s a high impression term that has a low CTR, then you can start optimising your site for these terms.
The URL inspection tool gives you up to date information about how Google is seeing and indexing your content within its results. Simply click the “URL inspection” tab on the menu, and paste in your URL – you’ll be greeted with a page similar to the below with a list of any issues:
Within the tool we can see:
You'll get varying results depending on whether the page is indexed or not, along with any issues Google may have faced (such as the page being blocked, being an alternate version of another URL, or returning a 404/500 error). A full list of status issues can be found here.
Check your most important URLs with the inspection tool, and if any issues are flagged, work on these as a priority.
The coverage section shows you how many URLs are in Google's index, how many are excluded, and how many are showing errors (if any). Click "coverage" and you'll see a screen similar to the below:
This is probably one of the most important sections within the new search console, as it will flag pages that are returning crawl errors. If your site is showing any errors, click on the error and you’ll see the offending pages:
If, for example, you have pages that are returning a 500 error, you can export the list and set about fixing the pages or redirecting them. Once you've fixed them, click the "validate fix" button, which tells Google you've fixed these pages so they should stop showing in future reports.
Some of the most common errors are:
If a URL is being flagged here, click on the URL and you can dive into this further:
Click "fetch as Google" to see what Google is seeing for that particular page, and you may begin to understand why the page is broken.
We touched on submitting your sitemap earlier; this section gives an overview of any issues that may have arisen with your XML sitemap, and you can also submit additional sitemaps here.
Ideally, you want to see something similar to the below with no issues:
If there are any errors, they'll be flagged in here – there are quite a lot of potential sitemap errors, outlined in Google's page about them here.
The mobile usability section shows which pages (if any) have problems when viewed on mobiles. The first page (pictured below) gives an overview of some of the issues, then you can drill down into each individual issue:
Click on each issue, and it will show the offending URLs:
To fix these issues, simply head back to the mobile friendly section of the audit and implement those fixes, and click “validate fix” within search console.
A manual action is effectively a penalty from Google, usually handed out for doing something that is against Google’s guidelines.
What you really, really want to see on this tab is the below:
A manual action is quite hard to remove – you’ll have to clean up whatever you’ve done wrong, and request a review from Google.
The most common manual actions are for:
There are other types of manual actions – head over to Google’s help section for more information on the less common issues.
One of the most common scenarios is when someone buys an expired domain, puts new content on it, and then inherits a penalty from the domain's past. If this happens to you, request a review and include a snapshot of what the old site looked like from https://archive.org/web/, highlighting what you've changed, and you should get your manual action removed.
If your site has been hacked, users may be alerted with a warning in Google's Chrome browser. Google will also flag any security issues here, usually showing the URLs that are affected and what the problem is. Typically, the cause is that the site has been hacked through a third-party plugin and your content has been changed without you knowing.
Sometimes, phishing pages can be set up to try and capture users' personal details, so keep an eye on this section to see if there's anything wrong with your site.
The links section shows you all of the links to your site that Google has seen. There may be some that you know are live that aren’t showing in here yet, but don’t worry – over time they’ll start to show.
Within this section, you can see:
With this data, you can identify which areas need more internal and external links to try and help boost your visibility overall.
If you’ve not already got a GSC account, it’s really simple to get started and set one up. Using the menu, click “add property”:
You’ll then be asked which type of property you want to setup – we recommend the domain level one highlighted below:
You’ll then be required to verify the site via DNS record, using the code provided:
If you don't have access to your DNS, set the site up as a URL-prefix property instead and either upload the HTML verification file to your site or – the easiest way – copy and paste the HTML tag into your site's header:
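The verification tag is a single meta element that goes in the <head> of your pages – it looks like this (the content value below is a placeholder for the unique code Google gives you):

<meta name="google-site-verification" content="your-unique-code-from-google" />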
Once you’ve added the tag, click “verify”, and ‘done’ and you’re ready to go!
If you're using WordPress, you can use the Yoast SEO plugin – copy the HTML tag, paste it into the plugin's settings, and that's it.
Once you’re verified, give it a few days and wait for the data to populate in search console, then you can monitor your site’s performance on a regular basis.
So, you now know how to audit a site, and which parts are the most important (indexation, mobile friendliness, how your content is served to both users and search engines). All of the elements outlined in this audit play a vital part in how your site runs technically – so, now you can go and build solid foundations for your own successful SEO campaign.
My name is Aires Loutsaris and I am an eCommerce Search Engine Optimization Consultant working with some of the world's biggest VC-funded startups and eCommerce companies. I have over 14 years of experience in SEO, search marketing and user acquisition. I have an SEO course with over 2.5k paying students on Udemy, I was an SEO lecturer at the University of the Arts London (London College of Fashion), and I have been shortlisted for five awards, winning two. Fun fact: I have performed SEO services for all three of the universities I studied at: The Open University, The University of Hull and King's College London. I have also been Head of SEO at the agencies SEO.co.uk and Net Natives. Contact me for any reason – I'm happy to help and very approachable. Additionally, feel free to connect with me on LinkedIn.