
Technical SEO: A Beginner's Guide

What is technical SEO?

Why is technical SEO important?

Content SEO vs Technical SEO

Offsite SEO vs Technical SEO

Key Technical SEO Concepts

If you’re new to the world of SEO, the sheer amount of information and jargon can be overwhelming. But don’t worry, we’re here to help. This beginner’s guide to technical SEO will explain all the key concepts you need to know to get started.

Technical SEO is about optimising your website for Google’s search algorithm. By improving your site’s architecture, code, and content, you can make it easier for Google to find and index your pages. This, in turn, can lead to higher rankings and more traffic from organic search.

While there is no one-size-fits-all approach to technical SEO, there are some standard optimisation techniques that you can use on most websites. In this guide, we’ll cover some of the basics so you can get started on improving your site’s technical SEO.

What is technical SEO?

Technical SEO is the practice of optimising a website for organic search to improve your site's visibility and organic traffic. That includes on-page optimisation, link building, site speed optimisation, and other strategies that help search engines like Google index and rank your site.

Technical SEO is a critical part of any SEO strategy. Without it, your site will not be able to rank well in the search results. However, technical SEO is not the only factor that determines your ranking. You also need to have high-quality content and backlinks from authority sites.

The first step in any technical SEO project is to ensure that your website is accessible to Google, Bing, Yahoo, Yandex, and other search engines that your customers use. This means that your website can be crawled and indexed by crawlers like Googlebot (we’ll explain what crawlers and indexing are later on!). Once you’ve verified that your site is accessible, you can begin working on optimising your site for specific keywords.

On-page optimisation is the process of optimising individual web pages to rank higher in search results. This includes optimising title tags, meta tags, header tags, and body content. Link building is another essential component of SEO more broadly: it builds your site's authority and improves its ability to rank in search results.
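To make that concrete, here's a minimal, hedged sketch (the page, wording, and URLs are invented for illustration) of where title tags, meta descriptions, and header tags sit in a page's HTML:

<head>
  <title>Hand-Stitched Leather Wallets | Example Shop</title>
  <meta name="description" content="Browse our range of hand-stitched leather wallets, made to order with free UK delivery.">
</head>
<body>
  <h1>Hand-Stitched Leather Wallets</h1>
  <h2>Our best-selling wallets</h2>
  <!-- body content continues -->
</body>

Search engines treat the <title> and headings as strong hints about what the page covers, and often use the meta description as the snippet shown under your result.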

Why is technical SEO important?

Search engines are constantly evolving and becoming more sophisticated. And as more and more great content appears on the internet, they want to surface the content that answers a user's question most effectively. As such, search engines place significant emphasis on technical factors when deciding where to rank websites as answers to those questions.

Let’s say someone makes a Google search for ‘Where should I visit on holiday in Prague?’ or ‘best human rights charities donation’. Google wants to (1) parse and understand that person’s search intent and (2) match them up with content that answers that search intent.

If your website has a clear list of places to visit in Prague on a webpage in a list format, or an answer box on your website saying, ‘the best human rights charity to donate to is … ‘, then Google is that much more likely to rank your content.

By contrast, if your website is so poorly structured that Google cannot understand your content, or it loads so slowly that Google doubts a user will find an answer to their question quickly, or Google decides it doesn't trust your website because it has no backlinks or other trust signals, then you won't rank on Google.

Content SEO vs Technical SEO

Content SEO and technical SEO are two different but equally important types of search engine optimisation. Content SEO focuses on writing awesome content for your website, while technical SEO focuses on improving the website itself to make it more search engine friendly. Think of them as two halves of the same search engine coin.

Content SEO includes things like keyword research, title tag optimisation, and meta descriptions. Technical SEO includes things like site structure, sitemaps, and redirects. Both types of optimisation are important for getting your website to rank high in search engine results pages (SERPs).

There's no single right answer to whether technical SEO or content is the better focus. Typically, though, a small company should get the technical basics of its website right and then concentrate on content SEO, while a big company can make big gains by getting its technical SEO absolutely perfect (especially if the site uses non-standard elements, like JavaScript-heavy pages, single-page applications (SPAs), or content produced by hand-coded Python, PHP, Liquid, or custom Apache setups).

Offsite SEO vs Technical SEO

Another key distinction is between offsite SEO and technical SEO. While technical search engine optimisation work is done on the website or webpage itself, offsite SEO is essentially link building: finding other people's websites and getting links from their sites pointing to yours (along with all the authority and other goodness that comes with them).

Some SEO consultancies and agencies talk about offsite SEO as a part of technical SEO.

Here at InnerPoint, our team treats offsite SEO as separate to technical SEO, because we have trained experts in both disciplines and want to make sure our experts are spending time working on the things they're amazing at.

Now we’ve got the introductions out of the way – let’s break down the key technical SEO concepts you’ll be coming across as you learn technical SEO!

Key Technical SEO Concepts

What is Crawling and a Crawler Bot?

A crawler (also known as a spider or bot) is a software program that visits websites and reads their content in order to index it for search engines.

The most famous of these is the Googlebot*. Once Googlebot crawls a public site, it stores that content, and (when it’s finished indexing) that page will appear in the relevant search engine results that best match a searcher’s query.

However, many crawlers exist on the web. Beyond search engine crawlers, there are academic research crawlers that trawl the web for research purposes; crawlers like Heritrix and HTTrack designed to store offline or permanent archival copies of websites for future visits; malicious crawlers built for website scraping; and more.

*Technically, Googlebot is a generic name for Google’s search engine crawlers. Googlebot is actually made up of two different types of crawlers: a desktop crawler that simulates a user on a desktop (Googlebot Desktop), and a mobile crawler that simulates a user on a mobile device (Googlebot Smartphone). But in 99% of cases, these crawlers behave the same way, and we treat them very similarly.

What is indexing?

Indexing is what happens after crawling. Once a crawler like Googlebot has read a page, the search engine stores a copy of that content in its index – a giant database of all the pages it knows about. When someone searches, Google doesn't crawl the web on the spot; it looks through its index and returns the pages that best match the searcher's query.

That's why a page needs to be both crawlable and indexable to appear in search results: if Googlebot can't reach the page, or the page is kept out of the index, it won't show up no matter how good the content is.

What is a sitemap?

One of the best ways to help Google index your site is with a sitemap.

A sitemap is a file where you can list the web pages of your website to tell Google and other search engines about the organisation of your site content. Search engine web crawlers like Googlebot read this file to crawl your site more intelligently.

A sitemap tells the crawler which pages are available on your website and how often those pages are updated. This allows Google to crawl your site more efficiently and discover new content faster. For example, if you have a lot of blog posts that are all published on the same day, a sitemap will help Google understand that these are all new pieces of content that should be crawled and indexed.

If you have a large website with tens of thousands of pages, it’s important to break up your sitemap into multiple files so that each file is no larger than 50MB and contains no more than 50,000 URL entries.

An interesting example of a sitemap is Amazon's. With tens of millions of products available, Amazon has to work extra diligently to make sure everything it sells is indexed, so people can find those products when they use Google or another search engine to decide what to buy.

Most sitemaps exist as an XML file, a kind of markup language designed to be simple and easy for bots to parse and crawl, but without the styling capability that HTML offers. After all, robots want clean and quick code, not good-looking, well-rendered pages.
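As a minimal, hedged sketch (the URLs and dates are invented for illustration), an XML sitemap looks something like this:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/products/blue-widget/</loc>
    <lastmod>2022-06-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/blog/choosing-a-widget/</loc>
    <lastmod>2022-06-03</lastmod>
  </url>
</urlset>

Each <url> entry gives a page's address in <loc>, and the optional <lastmod> date tells crawlers when it last changed. You then point search engines at the file by submitting it in Google Search Console (or the equivalent for other search engines) or by referencing it in your robots.txt – which brings us neatly to the next concept.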

What is Robots.txt?

Did you try to look up Amazon’s sitemap based on the information above?

Amazon's sitemap isn't something the average user can simply browse to – and its robots.txt file offers a clue as to why.

A robots.txt file tells search engine crawlers which pages and sections of your site they may request. It is used mainly to stop your site being overloaded with traffic from bots you get no value from.

That's why Amazon's robots.txt disallows crawlers from large parts of its site. This reduces the traffic to pages which exist solely for the benefit of the dozens of crawlers that would otherwise spend massive amounts of bandwidth trying to understand its website architecture and which key pages they should be visiting.
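As a minimal, hedged sketch (the paths are invented and deliberately not Amazon's), a robots.txt file lives at the root of your domain – e.g. example.com/robots.txt – and looks something like this:

User-agent: *
Disallow: /cart/
Disallow: /search

Sitemap: https://www.example.com/sitemap.xml

The User-agent line says which crawlers the rules apply to (* means all of them), each Disallow line lists a path they're asked to stay away from, and the optional Sitemap line points crawlers at your sitemap file.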

Almost every major website has a robots.txt file to help manage the vast amount of crawler traffic it gets, so look up your favourites (at yourdomain.com/robots.txt) and see how they handle crawlers and whether they list a sitemap.

What is a Canonical?

Let’s say you’re a big clothes company, Fancy Pants Inc. On your eCommerce site selling fancy pants, you might have URLs like the following:

http://fancypants.shop/product/fancypants

https://fancypants.shop/product/fancypants?small

https://fancypants.shop/product/fancypants?medium

https://fancypants.shop/product/fancypants?large

https://www.fancypants.shop/product/fancypants

https://www.fancypants.shop/product/fancypants?edit

https://www.fancypants.shop/product/fancypants?utm_param_4=35#!

And so on.

A technical SEO expert will use a canonical tag (rel="canonical") to tell search engines which version of a piece of content is the original and which one you want to rank.

This is important because many times, there are multiple versions of the same content on the internet, and you want to make sure that your website appears in search results as the original source.

The canonical tag is placed in the <head> section of your website's code, and looks like this: <link rel="canonical" href="http://www.example.com/original-article">.

When a canonical tag makes it clear to Google which version of your content should rank, Google is more likely to consolidate all of its ranking signals behind that one piece of content. On large websites, big technical SEO gains can often be made simply by implementing canonical tags correctly.

What is a hreflang attribute?

Suppose you’re a big brand like Amazon. You probably have a US version of your site, a UK version, a French version, a German version, and so on. You might even have a high-contrast version or disability-accessible version of your site, or an old or beta site version that lurks around somewhere.

However, Googlebot doesn’t see your site as having lots of different ‘versions’ – it sees duplicate content. And Google doesn’t want to rank duplicate content. It will pick one of your sites, decide it is the ‘right’ one, and only rank that.

If that’s the US version, it means French speakers in France will only be offered the US version of your site, not the one you purpose-built for your francophone customers. Merde.

A technical SEO expert sorts this out using the hreflang attribute, an HTML attribute that tells search engines which language a page is written in.

This is especially important for websites that are available in multiple languages, as it helps search engines match users with the correct version of the website.

But the hreflang attribute can also be used to indicate the region that a website is targeted at, which can be useful for targeting local audiences.
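As a hedged sketch (example.com and the paths are made up), hreflang annotations usually sit in the <head> of every language version of a page, with each version pointing at all the others – and at itself:

<link rel="alternate" hreflang="en-us" href="https://www.example.com/us/fancy-pants/">
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/uk/fancy-pants/">
<link rel="alternate" hreflang="fr-fr" href="https://www.example.com/fr/fancy-pants/">
<link rel="alternate" hreflang="x-default" href="https://www.example.com/fancy-pants/">

The language-region codes tell search engines which audience each URL is for, and the special x-default value says which version to fall back to when none of the listed languages match the searcher.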

What is Site Speed?

Finally, for this introduction to technical SEO, let’s talk about site speed.

Site speed is the amount of time it takes for a web page to load. It is important for both users and search engines. A fast site provides a good user experience, which can lead to increased traffic and conversions.

For search engines, site speed is a ranking factor, so a faster site can help you improve your visibility in search results. There are several ways to improve your site’s speed, including optimising your images and code, using caching, and reducing redirects.

If you want to improve your site’s speed, start by identifying where your pages are taking the longest to load. Then, work to optimise these pages for performance.

You can use a variety of techniques, including image optimisation, coding tips and tricks, and caching recommendations.
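As one small, hedged example of what those techniques can look like in practice (the file paths are invented), you might serve static assets such as images with a long-lived Cache-Control header so returning visitors don't download them again, and mark images further down the page so browsers only load them when they're needed:

Cache-Control: public, max-age=31536000, immutable

<img src="/images/product-photo.jpg" width="800" height="600" loading="lazy" alt="Product photo">

The header is sent by your web server alongside the image file, while the explicit width, height, and loading="lazy" attributes stop the image from causing layout shifts or slowing down the initial page load.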

Reducing redirects can also help speed up your site. Every redirect forces the browser to make an extra request before it reaches the final page, so each redirect you remove is a round trip saved.

Optimising site speed is one of the hardest things to do, because it requires a solid knowledge of technical SEO, website design, marketing, and overall business objectives to work out what can be cut, changed, or added to improve a website without compromising on quality.

However, it is absolutely vital. It's even why we mention it as one of our top five tips for optimising your eCommerce store – 47% of online shoppers expect a page to load in no more than two seconds, and 40% will leave a website if it takes over three seconds to load.

That’s why we at InnerPoint have specialists dedicated to helping you with your technical SEO and site speed specifically. It is vital to get site speed right across your website to both improve user experience and improve search engine ranking positions – so make sure you spend an appropriate amount of time on this.

What is site architecture?

Site URL architecture is the basic structure of a website's URLs. It dictates how your pages are grouped and linked together on your site, and how search engines crawl and index your content.

A good URL architecture will help you:

– Organize your content by topic or section

– Create natural link paths between different sections of your site

– Keep track of changes to your site’s content so you can update links automatically

A simple site architecture might involve a homepage on your website root (/), a series of collection and standalone pages from that root (e.g. /about/, /products/, and /contact/), and a series of product or template pages based on your collections (e.g. /products/product-a, /products/product-b, and so on). This site architecture keeps your site organised, so crawlers and users know where to expect content to be found. It creates natural link hubs where your collection pages link to the rest of your site. And it makes it easy to manage your site as it grows and expands, as you already know what URL your new pages should have!
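Sketched out (with made-up page names), that structure looks something like this:

/                      – homepage
/about/                – standalone page
/contact/              – standalone page
/products/             – collection page
/products/product-a/   – product page
/products/product-b/   – product page

Each level of the URL tells both users and crawlers where a page sits in the site, and the collection page at /products/ becomes a natural hub that links out to every product beneath it.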

Wrapping up…

If you're looking to improve your website's technical SEO, there are a few things you should keep in mind. Hopefully, this guide has given you a boatload of information you can use as a jumping-off point to improve your search engine rankings and visibility.

With these guidelines in mind, you can start improving your website's technical SEO today!

And if you have any questions, or you'd like support getting started with your technical SEO, then get in touch. We offer a free, no-obligation SEO analysis to tell you what we'd do to help you improve – just contact us for yours today.

Thank you to campaigning.digital for writing this in-depth post about technical SEO.

Need help with keyword analysis?

Click below and book a free keyword analysis + personal review with us.

BOOK A TIME NOW