The Definitive Guide to Technical SEO

Technical SEO is a health check of your website for search engine visibility. You can only diagnose your website by doing a full website audit – crawling every URL on your website. As you might imagine, performing a complete SEO site audit requires some knowledge of web development and how search engines work.

Most website owners understand the need for technical SEO checks as search engines become more sophisticated. A technical SEO audit helps ensure your website complies with Google's Webmaster Guidelines.

For instance, submitting your sitemap and robots.txt file helps Google find and understand your webpages. If your website has duplicated content, adding a robots meta noindex tag to those pages helps prevent your site from getting penalised.

Ongoing site hygiene checks for content duplication, keyword stuffing, user-generated spam and backlink growth will reap long-lasting SEO benefits.

Required tools for an SEO audit:

  • Screaming Frog: Free version allows you to crawl up to 500 URLs.
  • Google Search Console: Google's webmaster tool that every website owner should use to monitor their website's search presence. (Free)

Basics of a technical SEO audit:

Technical SEO is about crawlability and indexation of webpages.

The goal is to:

a) Uncover any major issues with the site structure and underlying code that might cause indexation problems
b) Validate your keyword strategy and on-page optimisation
c) Create a list of action steps for improvement, prioritised by importance

The key is NOT to present a technical audit as a data dump. Fifty-plus pages of reported data and analysis will bury you in so much information that none of it is actionable.

From a beginner’s perspective, technical SEO issues can be grouped into 5 key areas:

1. Indexation within search results

The more of your content pages search crawlers are able to index, the higher the chance your website will appear for a relevant search query.

Indexation is about how search engines organise information.

Google's search index is like a library of billions of webpages. After web crawlers discover and access a webpage, the system renders its content and takes note of key elements such as keywords and content freshness.

This is how search engines can serve up the most relevant result pages when users enter a search query.

2. Crawlability

To help Google crawl your website, website owners should submit a sitemap.

Crawlability is about discoverability. It refers to search engines' ability to crawl and access content on a page.

Search engines find new or updated pages by using web crawlers (GoogleBots) to follow links on webpages. They go from link to link and bring information about the content of those webpages back to Google's servers.

2.1 XML Sitemap

A sitemap is a file that lists the web pages of your site. It tells search engines about the structure of your website and the important pages that should be crawled and indexed.

An XML sitemap includes information about each URL, including when it was last updated, how often it changes, and how important it is in relation to other URLs on your site.

Updating your sitemap regularly helps search engines index your new pages more quickly. If you have a large website, I highly recommend reviewing your sitemap in Search Console to make sure all important pages, including new ones, are listed there.
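For reference, here's a minimal sketch of what a sitemap entry looks like (the URL, dates and priority below are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page; lastmod, changefreq and priority are optional hints -->
  <url>
    <loc>https://www.olivermoose.com/seo/keyword-research-update/</loc>
    <lastmod>2018-01-15</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>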

How to take care of your sitemap:

  • Update the XML sitemap every time you add a new page
  • Ensure the sitemap file is UTF-8 encoded
  • Check the sitemap for errors regularly in Search Console:
    • Log in to Search Console and go to 'Crawl' -> 'Sitemaps'
    • At the top right corner, click the 'Add/Test Sitemap' button, enter the URL of your sitemap and click 'Test'.
    • After the test is complete, click 'View test results' to see whether there were any crawl errors in your sitemap.

2.2 Importance of Robots.txt

The robots.txt file is the first file on your website that web crawlers access.

Website owners use the /robots.txt file to give web robots instructions about their site.

Robots.txt implements the Robots Exclusion Protocol.

When GoogleBot wants to visit your site, it first reads the robots.txt file and checks whether there are any pages it should not visit.

Now, what can go wrong?

The worst mistake is to block web crawlers from accessing and indexing pages on your website. Here's the most painful thing you can find:

User-agent: *
Disallow: /

This tells every crawler to stay away from all pages on the website, so none of your content gets crawled or indexed.

So why create a disallow rule in robots.txt at all?

  • Your robots.txt file can keep crawlers away from duplicate or low-value content, such as "printer friendly" pages, log-in pages or other information that has no organic search value. For example:

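Here's a sketch of such a robots.txt; the paths below are placeholders (a WordPress-style log-in URL is assumed), so swap in the sections of your own site that have no search value:

User-agent: *
# Keep crawlers out of printer-friendly duplicates and pages with no organic search value
Disallow: /print/
Disallow: /wp-login.php
Disallow: /cart/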

  • To fix content duplication issues, add rel="next" and rel="prev" for paginated pages, or point out the main version of a page with rel="canonical".

To check if you’re blocking any important pages or resources, simply review your robots.txt file.

Just add '/robots.txt' after your domain (e.g. www.olivermoose.com/robots.txt).

Too much info to digest?

Here's a visual explanation of how search works by former Googler Matt Cutts:

The Fundamentals of Search by Matt Cutts

A website has no crawlability issues if web crawlers can access all the content by easily following links from page to page.

So what affects indexation and crawlability?

The problem arises when there are broken links and internal linking issues, which hurt search engines' ability to access specific content on the site.

3. Internal Linking

Bad internal linking structure results in dead-ends and web crawlers missing some of your content.

An internal link is a hyperlink from one page that points to another page on the same website. Good internal linking structure enables web crawlers to reach pages that are deep in your site.
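In HTML terms, an internal link is simply an anchor tag pointing to another URL on the same domain; the path and anchor text below are placeholders:

<a href="/seo/keyword-research-update/">Read the keyword research guide</a>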

Internal linking is important for:

  • Helping users navigate the website
  • Establishing an information hierarchy
  • Spreading link equity across the website

An SEO-friendly site architecture with internal links enables web crawlers to find pathways to all of your content. Pages with no direct internal links pointing to them from the main navigation may never be accessed by search engines.

For instance, say the homepage has links pointing to pages A and D, but no internal links point to B and C. Even though pages B and C are on the site, there are no crawlable internal links connecting them; as far as Google is concerned, these pages don't exist because web crawlers can't reach them.

So no matter how great your keyword research and content optimisation are, they won't help SEO if Google can't reach those pages in the first place.

3.1 Broken internal links

Broken links are among the easiest issues to identify and fix, and they can have a significant impact on SEO.

A broken internal link hurts user experience and prevents search engines from crawling your website properly.

Changing the URL structure

If you've recently changed or updated the URL structure of a page, the old URL becomes a broken page (404) because it no longer exists. If that old URL had been ranking on Google, it is now a dead link.

URL Typo mistakes

This happens when you make a typo while inserting a URL (in an image, text link, etc.) on a page. Anybody can make a typo, so it's best to run weekly or monthly website crawls to make sure there are no broken links on your site.

Outdated URLs

If you’ve recently undergone a website migration, make sure you’re not linking to old or decommissioned pages from your old site.

To find 404 pages on your website, run a website crawl using ScreamingFrog.

Here’s a really easy to follow step-by-step tutorial from ScreamingFrog on finding broken links and their source (inlinks).

Fix broken links by redirecting them to the new URL or the next most relevant page. My recommendation is to avoid broken links altogether by using canonicalisation or 301 redirects whenever you change or remove URLs.
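As a sketch, on an Apache server a 301 redirect can be added to the .htaccess file (this assumes the mod_alias module is enabled; the paths below are placeholders):

# Permanently redirect a removed page to its replacement
Redirect 301 /old-page/ https://www.olivermoose.com/new-page/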

4. Page Speed Optimisation

Web crawlers have a limited amount of time to crawl and index your site – also known as crawl budget.

You can get a sense of your website's crawl budget in Google Search Console, under 'Crawl' -> 'Crawl Stats'. This shows the average number of pages on your website that Google crawls per day.

The faster your website loads, the more pages GoogleBot can crawl and find new content to index. So the bigger your crawl budget, the faster this will happen. Also, site speed is highly correlated with search engine rankings.

Google announced in 2010 that site speed is a factor in their ranking algorithm.

Most importantly, "Time To First Byte" (TTFB) correlates highly with search rankings. TTFB is the amount of time it takes the web server to process a request for your URL and send the first byte of the response back to your browser.

What you can do now is (i) evaluate your site speed using Google’s PageSpeed Insights Tool and (ii) find out what is causing the longest TTFB on your site using WebPage Test.

How to use Google PageSpeed Insights Tool:

Enter your URL and voila. It evaluates your website’s performance and provides suggestions to improve.

The score indicates how well your site is optimised for speed. Generally, anything above 80 is fantastic.

But don’t be alarmed if your page speed score is “low” or “medium”, because the most important metric to look out for is ‘Server Response Time’. If your server responded in under 2 seconds, that is acceptable.

For a more detailed breakdown, these tools are very helpful for website tests:

  • GTMetrix
    • Performance report on your website's page speed. Look at the waterfall chart to see which requests took the longest time to fulfil.
  • Pingdom
    • Analyses your website's overall load time, number of requests, page size and score on the YSlow performance metric
  • WebPage Test
    • Provides a waterfall chart of your page's load time performance and a speed optimisation checklist.

Note that not every page of your website will have the same page speed. Make sure to test the load time of at least 10 pages of your website, especially key pages.

How to skyrocket your page speed with simple fixes

Increasing page speed is one of the hardest and most complex aspects of technical SEO. But here are some easy fixes that can improve your site's loading time:

  • Reduce Image Sizes
    • Use the WP Smush or Imagify plugin for WordPress to automatically compress any picture you upload. If you want to compress your images beforehand, use online tools like TinyPNG or Optimizilla.
    • Another tip is to use JPEG files as they are smaller, though not as high quality as PNG. For logos and icons, SVG files are the best option as they scale without any loss of quality.
  • Leverage Browser Caching
    • A popular free caching plugin is W3 Total Cache. If you have some budget, investing in a premium caching plugin such as WP Rocket is worth it. For a detailed comparison between WP Rocket and free caching plugins like W3 Total Cache and WP Super Cache, head over to this post. A sketch of the cache-lifetime rules these plugins write for you appears after this list.
  • Minify CSS and JavaScript
    • Minification removes unnecessary white spaces and characters from your CSS and JavaScript files.
    • To fix this, install Gulp.js on your server and refer your developer to this Google guide. If you're using WordPress, Autoptimize is a free plugin that works really well too.
  • Use a Content Delivery Network (CDN)
    • A CDN delivers cached versions of your website's static content from its own servers based on the geographic location of the user. So when somebody visits your web page, the static content is loaded from the server closest to their location.
    • For example, if your website is hosted in the USA and a visitor from Finland opens your site, without a CDN that user has to wait for everything to load all the way from the US server.
    • With a CDN, your website is loaded from the server closest to the user.
    • Engage a developer to move your static files to a CDN and keep only the HTML on your main server. Static files are things like images, CSS and JavaScript.
    • If you're using WordPress, simply use a service like MaxCDN (premium) or set up the free Cloudflare CDN on your WordPress site.
  • Useful info on how to set up Cloudflare on WordPress: How to Setup CloudFlare Free CDN in WordPress – WP Beginner 
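Here is that sketch of typical browser-caching rules, shown as .htaccess directives for an Apache server (this assumes mod_expires is enabled; the file types and lifetimes are placeholders to adjust for your site):

<IfModule mod_expires.c>
  # Tell browsers how long they may reuse static files before re-downloading them
  ExpiresActive On
  ExpiresByType image/jpeg "access plus 1 month"
  ExpiresByType image/png "access plus 1 month"
  ExpiresByType text/css "access plus 1 week"
  ExpiresByType application/javascript "access plus 1 week"
</IfModule>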

4.2 Mobile Speed

A "mobile-friendly" site is one that is well designed for mobile devices. Mobile-friendliness became an official ranking signal in Google's algorithm in 2015, a response to the increasing number of searches happening on mobile. Based on a study by SimilarWeb, over 55% of all website traffic came from mobile, and it's projected to keep outpacing desktop traffic.

Just recently, Google announced its imminent rollout of a mobile-first index.

Simply put, Google will begin to index and rank your website based on the content of the mobile experience, rather than the desktop version, as was the case historically.

Basic checks to ensure your site is mobile-friendly:

  1. Check your website with the Google Developer Tool’s Mobile-Friendly Test
  2. Use the Mobile Usability Report in Google Webmaster Tools, which highlights any major mobile usability issues across your site
  3. [For Tech/Developer] Read this Google guide on mobile templates for websites using third-party software like WordPress, Joomla, Blogger, etc.

How to prepare for Mobile SEO

  1. Responsive Web Design: makes sure your website content looks great on all devices.
    • Serves all devices (desktop, mobile, iPad, etc.) with the same HTML code and adjusts for screen size, using CSS to alter the rendering of the page on each device. Responsive design improves page load time as it avoids redirects to a device-optimised view.
    • Example – adding a meta viewport tag to the head of your HTML document instructs browsers to adjust the dimensions of your page to the width of the device.
      <meta name="viewport" content="width=device-width, initial-scale=1.0">
  2. Verify your Structured Markup across mobile and desktop. Type the URLs of both versions into Structured Data Testing Tool and check that the output is the same.
  3. Make sure the mobile version of your website has the same high-quality content as its desktop version and is just as crawlable, including elements such as alt attributes for images.
  4. Check your Robots.txt file to verify that the mobile version of your site is accessible to GoogleBot. Use the robots.txt testing tool.
  5. Avoid common mistakes that frustrate mobile UX such as featuring unplayable media (Flash video). Check your Robots.txt and ensure you’re not blocking search engines from crawling important aspects of your site.

Is this really happening?!

In December 2017, Google confirmed it was rolling out the mobile-first index for at least "a few sites".

John Mueller from Google confirmed that they have begun testing in live search results, but are implementing it slowly.

Google has set up "classifiers" to select sites that are ready to be switched over to mobile-first. These determine how equivalent the desktop site is to the mobile version in terms of content, schema, URL structure, etc.

It is one of those things that we need to make sure that the changes we make actually work out well. We are creating some classifiers internally to make sure that the mobile pages are actually equivalent to the desktop pages and that sites don’t see any negative effects from that switch. And those are things we need to test with real content, we can’t just make up pages and say this is well kind of like a normal web page. We really have to see what happens when you run it with real content.

Worry not.

Most modern websites use responsive design or dynamic serving, where the content and markup are the same across devices.

Unless your website was built on an old setup such as an m-dot site or /mobi URLs; then it is time to switch.

4.3 Poor redirects

Redirects send users and search engines from an old URL to the page you want them to visit instead. It's important to get visitors to the right page when content has moved to a different URL.

The best practice is to use a 301 redirect, which moves the redirected page permanently to the final destination URL.

Instances when you might use a redirect:

  • The URL is broken or no longer active
  • You have a new, more relevant page that you want users to visit instead of the old one
  • You're fixing a page and want to temporarily redirect users for a seamless experience

Redirects can become an SEO problem when they're not implemented correctly. Common issues:

  • Temporary redirects: Using a 302 (temporary redirect) does not transfer link equity from the redirected URL to the destination URL. Temporary redirects also signal to web crawlers to come back and crawl the page again, spending more crawl budget.
  • Redirect loop: This happens when two pages redirect to each other, trapping web crawlers in a loop.

How to identify and fix redirect issues with free SEO tools

Use a free tool like Screaming Frog to crawl your website (up to 500 URLs for the free version). The 'Response Codes' tab gives you a breakdown of '3xx redirects' and 'meta refresh' issues.

For beginners, I'd recommend the SEO PowerSuite Website Auditor tool – Site Audit. It's free and, under 'Redirects', provides a detailed breakdown of all the redirect issues your website might have, with explanations too.

Use an online tool like SeoSiteCheckup to get a quick analysis.

4.4 Server Errors

Server errors (5xx status codes) can prevent web crawlers from accessing your content.

To solve this, identify the list of pages with server-related errors and send them to your hosting provider.

5. Content Check

5.1 Duplicated Content

Most duplicate content is caused by URL parameters. For example, these URLs point to the same page but carry different parameters:

https://www.olivermoose.com/seo/keyword-research-update/

https://www.olivermoose.com/seo/keyword-research-update?source=organic

https://www.olivermoose.com/seo/keyword-research-update?ref=email

If Google can index all of these pages, they will be treated as duplicate content.

To fix this, add a canonical link pointing to the original URL. This tells Google which version of the page you want indexed and shown to users.

<link rel="canonical" href="https://www.olivermoose.com/seo/keyword-research-update/" />

Check for duplicated content from Google Search Console

Go to “Search Appearance” -> “HTML Improvements” to see if you have any issues.

Use Siteliner tool to check for duplicated content, and even broken pages.

5.2 HTML Elements

html factors technical seo olivermoose

HTML (Hypertext Markup Language) is used by most websites to create web pages.

On-page SEO optimises your page's title, meta description, header tags, alt attributes and so on.

Google search result pages show your title and meta description as a snippet, which is what persuades visitors to click through to your page.

Does CTR (click-through rate) matter?

Yes! Google uses CTR as a determinant for how relevant you are for a specific keyword (search query).

Optimising page-level keyword and content-based metrics influences your organic search ranking.

Title tags

The title tag is an element of on-page optimisation and affects how web crawlers determine the relevance of your page.

Format: 

Topic | Brand

Character limit: 65 characters

Characteristics of a good title tag:

  • Contains your focus keyword, preferably at the beginning
  • Includes call-to-action words/phrases that make users want to click on your page
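As an illustration of the Topic | Brand format, a title tag looks something like this (the topic and brand below are placeholders):

<title>Technical SEO Audit Checklist | Oliver Moose</title>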

Meta descriptions

Your meta description is a summary of your page’s content.

Character limit: 320 characters

Recently, Google made changes to the character length of search results snippets: from 160 characters to 320.

This gives us more space to write actionable descriptions with more relevant keywords to increase click-through rate.

Characteristics of a good meta description:

  • Contains targeted keywords
  • Includes call-to-action words/phrases
  • Written in active voice
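For reference, the meta description lives in your page's head section; here's a sketch (the copy below is a placeholder):

<meta name="description" content="Learn how to run a technical SEO audit: crawl your site, fix broken links and speed up your pages. Start auditing today.">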

H1 header tags

Among all the heading tags (H2, H3, etc.), the H1 tag is the most important.

The h1 tag should contain your targeted keywords, which relates to your page title and content.
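For example, a page targeting "technical SEO audit" might use something like this (placeholder copy):

<h1>How to Run a Technical SEO Audit</h1>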

Open up your webpage and analyse the source code yourself:

On Windows, press CTRL + U

On Mac, press CMD + OPTION + U

5.3 Structured Data

Adding Schema markup to your HTML improves the way your page displays in SERPs. It does so by enhancing the rich snippets below your page title.

Without markup, only the meta description is displayed after the page title.

With review markup, the snippet also contains a star rating and the number of votes.

To feature the review rich snippet, you can use the following code in JSON-LD format:

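The snippet below is a minimal sketch of review markup for a movie page; the movie name, rating value and vote count are placeholders:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Movie",
  "name": "Example Movie",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.5",
    "ratingCount": "1234"
  }
}
</script>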

You can generate your own schema markup code with a tool such as Google's Structured Data Markup Helper.

Need to brush up on technical SEO basics?

Before I could fully grasp the value of technical SEO, a foundational course in web development helped me out a lot. The best way to learn is to combine theory with practice.

  1. Start by learning HTML, CSS and JavaScript, then practise with Codecademy to get familiar with writing code. I'd recommend building your own website with a few basic features.
  2. Read The Art of SEO, SEO for Growth and listen to Neil Patel’s Marketing School podcasts. These will give you a solid foundation in SEO concepts.
  3. Check out the book .htaccess made easy. The .htaccess file comes up constantly in technical SEO, so it's well worth the read.
  4. Read and take notes on Google's Webmaster Guidelines.

Now you have the foundations set to make sense of the data you’ll find in a technical SEO audit.