You have written excellent content. Your copy is well-researched, well-written and precisely targeted to the keywords your customers are using. Yet the page does not rank. Google cannot find it. Or it ranks, but more slowly than your competitors. Or it appears, but with a poor click-through rate because no rich results are triggered.
The reason is almost always the same: the technical foundation is not in order. Technical search engine optimisation is everything that happens under the bonnet — what your visitors never see, but what Google uses to assess whether your page deserves to rank at the top. Content is the furniture. Technical SEO is the foundation and structure of the building. Nobody wants to live in a house with a rotten foundation, no matter how beautiful the furniture is.
According to an analysis by Semrush, 59% of all analysed websites have at least one critical technical SEO issue. For many businesses, technical SEO is the lowest-hanging fruit — an investment that removes barriers actively preventing Google from giving you the visibility you deserve.
In this guide we cover all the most important aspects of technical SEO: what it is, why it matters, and — most crucially — what you can actually do about it. We are an SEO agency based in Aarhus that works with technical SEO every day, so the recommendations below are based on practical experience with the market.
1. What is technical SEO?
Technical SEO is the part of search engine optimisation that covers everything apart from content and links. It encompasses the technical characteristics of your website that determine whether search engines can find, understand, index and rank your pages correctly.
To understand technical SEO it helps to know Google's three-step process: crawling, indexing and ranking. Googlebot crawlers visit your website and follow links from page to page (crawling). The analysed pages are stored in Google's vast database (indexing). When someone searches, the algorithm sorts the indexed pages by relevance and quality (ranking). Technical SEO is primarily about removing barriers in the first two steps — and about optimising the signals that influence the third.
Three domains in SEO: On-page SEO = your content and your keywords. Off-page SEO = backlinks and external authority. Technical SEO = the foundation that makes both of the others possible. You can rank with a weak technical foundation, but you will never reach your full potential.
Technical SEO covers areas including: page speed and Core Web Vitals, crawl management via robots.txt and sitemaps, canonical tags and duplicate content handling, structured data and Schema.org markup, HTTPS and security headers, URL structure and internal link topology, and mobile-first optimisation. We cover all of these areas in detail in the sections below.
A good starting point is on-page SEO — which technical SEO complements — and a broader understanding of search engine optimisation as a whole.
2. Core Web Vitals: Google's page experience signals
Core Web Vitals are Google's official measurements of the user experience on your page. They are not merely advisory — they are direct ranking factors. In March 2024 Google made INP (Interaction to Next Paint) the official replacement for FID (First Input Delay), and these are the three current metrics that count:
| Metric | What it measures | Good | Needs improvement | Poor |
|---|---|---|---|---|
| LCP (Largest Contentful Paint) | Time until the largest visible element loads | Under 2.5s | 2.5s – 4.0s | Over 4.0s |
| INP (Interaction to Next Paint) | Response time on user interaction | Under 200ms | 200ms – 500ms | Over 500ms |
| CLS (Cumulative Layout Shift) | Visual stability (does the content jump around?) | Under 0.1 | 0.1 – 0.25 | Over 0.25 |
How to test your Core Web Vitals
The easiest place to start is Google PageSpeed Insights (pagespeed.web.dev) — free, requires no setup and gives you an overall score plus specific recommendations. Enter your URL and focus particularly on "Field data" (real users' data from Chrome) rather than "Lab data" (simulated tests). Field data is what Google uses for ranking.
In Google Search Console you will find the Core Web Vitals report under "Experience" — it shows you which specific URL groups have issues, and whether they are on mobile or desktop. This is where you get a full picture of site-wide health.
Practical tips for improving your scores
- **LCP:** The largest element is typically a hero image. Use WebP format, correct sizing (no larger than displayed), and add `loading="eager"` and `fetchpriority="high"` attributes specifically to the hero image.
- **Fonts and critical resources:** Use `<link rel="preload">` for critical resources and `font-display: swap` for web fonts. Add `<link rel="preconnect">` to external font servers such as Google Fonts to reduce connection time.
- **CLS:** The most common cause of high CLS is images without explicit `width` and `height` attributes. The browser does not reserve space for the image and shifts content when it loads. Always add both attributes.
- **INP:** Heavy JavaScript that blocks the main thread is the primary cause of poor INP. Use `defer` and `async` attributes on non-critical scripts. Remove unused third-party scripts (chat widgets, tracking pixels that are not being used).
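The tips above can be combined in a minimal HTML sketch. The file paths and font name are placeholders, not recommendations for specific assets:

```html
<head>
  <!-- Reduce connection time to the external font host -->
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
  <!-- Preload the hero image so the browser fetches it immediately (helps LCP) -->
  <link rel="preload" as="image" href="/images/hero.webp" fetchpriority="high">
  <style>
    /* Show fallback text while the web font loads instead of invisible text */
    @font-face {
      font-family: "BodyFont";
      src: url("/fonts/bodyfont.woff2") format("woff2");
      font-display: swap;
    }
  </style>
  <!-- Defer non-critical scripts so they do not block the main thread (helps INP) -->
  <script src="/js/analytics.js" defer></script>
</head>
<body>
  <!-- Explicit width/height reserve space and prevent layout shift (helps CLS) -->
  <img src="/images/hero.webp" width="1200" height="600"
       alt="Hero" loading="eager" fetchpriority="high">
</body>
```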
3. Crawling and indexing
Before Google can rank your page, it needs to find it and store it. That sounds straightforward, but this is where many businesses run into problems — often without realising it.
robots.txt: What it is and the common mistakes
The robots.txt file lives at the root of your domain (e.g. gezar.dk/robots.txt) and tells search engine crawlers which parts of your site they may and may not visit. It is not a security feature — it is an agreement that can be ignored by malicious crawlers — but all reputable search engines respect it.
The classic mistake is blocking too much. We have seen sites that accidentally blocked the entire site with Disallow: / — and a website that cannot be crawled cannot rank. Check your robots.txt file regularly and verify that you are not accidentally blocking important pages or directories. Google Search Console will warn you if your robots.txt blocks indexed pages.
Typical robots.txt for a standard website:
```
User-agent: *
Disallow: /admin/
Disallow: /checkout/

Sitemap: https://yourdomain.com/sitemap.xml
```
Only block pages that should not be indexed — admin panel, checkout, thank-you pages and search result pages with parameters.
XML Sitemap: Your guide to Google
An XML sitemap is a list of all the pages on your site that you want indexed. It tells Google precisely what exists, and provides hints about priority and update frequency. A sitemap does not guarantee indexing — Google can still choose to ignore pages it considers low quality — but it ensures Google at least knows about all your pages.
- Include only canonical URLs (no duplicate URLs with parameters)
- Include all important pages: services, blog articles, landing pages, product pages
- Do not include pages with noindex — this is confusing for crawlers
- Keep the sitemap updated — add new pages continuously
- Submit the sitemap in Google Search Console under "Sitemaps"
- For large sites (1,000+ pages): consider a sitemap index file with multiple sitemaps
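A minimal sitemap following the points above could look like this (the domain, URLs and dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://yourdomain.com/</loc>
    <lastmod>2026-01-15</lastmod>
  </url>
  <url>
    <loc>https://yourdomain.com/blog/technical-seo</loc>
    <lastmod>2026-01-10</lastmod>
  </url>
</urlset>
```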
Canonical tags: Avoid duplicate content issues
Duplicate content arises when the same or very similar content is accessible at multiple URLs. This can happen for many reasons: www vs. non-www, HTTP vs. HTTPS, URL parameters from tracking or filtering, pagination, or deliberately duplicated pages. Canonical tags (`<link rel="canonical" href="...">`) tell Google which version of a page is the "original" and should receive all link juice and ranking credit.
Canonical tags are not only relevant for large sites with many pages. Even a small website should have a canonical tag on every page — it is a straightforward and effective safeguard against accidental duplicate content issues.
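In practice this is a single tag in the page's `<head>`, pointing at the clean URL (placeholder URL below). The page itself and any parameter variants of it should all carry the same canonical:

```html
<link rel="canonical" href="https://yourdomain.com/blog/technical-seo/">
```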
Crawl budget: Important for large sites
For most small and medium-sized websites, crawl budget is not an issue — Google simply crawls all pages. But for large e-commerce stores with thousands of product pages, filter parameters and variants, crawl budget is a real concern. Google only allocates a certain amount of resources to crawling your site. If it wastes them on pages without value — filtered search results, parameter URLs, empty category pages — it spends fewer resources on your important pages.
4. Site structure and URL architecture
Good URL architecture helps both users and search engines understand your site. It comes down to two things: individual URLs that are logical and readable, and an overall hierarchical structure that is consistent and easy to navigate.
URL best practices
- Keep URLs short and descriptive: `/blog/technical-seo` rather than `/blog/article?id=4721&cat=seo`
- Use hyphens (not underscores) to separate words in slugs
- Avoid special characters in URL slugs — use ASCII equivalents
- Avoid deep URL hierarchies: a maximum of 3–4 levels from the root is ideal
- Use lowercase consistently — uppercase letters in URLs can create duplicate content
- Avoid unnecessary parameters and session IDs in URLs for important pages
Flat hierarchy: Three clicks from the homepage
A rule of thumb that holds in practice: all important pages should be accessible in at most three clicks from the homepage. The deeper a page is buried in the site structure, the weaker a signal you send to Google about its importance. This is partly because crawlers use link depth as a proxy for priority, and partly because internal link juice is diluted with each level.
The practical implication is that you should link to important pages from your navigation, your homepage and your most visited pages — not just via a single path through the site structure.
Breadcrumbs with BreadcrumbList schema
Breadcrumbs are navigation links at the top of subpages showing the hierarchical path to the current page — e.g. "Home › Blog › Technical SEO". They benefit usability and give Google context about the site structure. Combined with BreadcrumbList JSON-LD schema they can trigger breadcrumb display directly in search results instead of the normal URL, which increases visibility and CTR.
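A BreadcrumbList for the example path above could be marked up like this (placeholder URLs; per Google's guidelines the final item may omit `item` because it is the current page):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://yourdomain.com/" },
    { "@type": "ListItem", "position": 2, "name": "Blog", "item": "https://yourdomain.com/blog/" },
    { "@type": "ListItem", "position": 3, "name": "Technical SEO" }
  ]
}
</script>
```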
5. Structured data (Schema.org)
Structured data is a standardised way of telling search engines exactly what your content is — not what it looks like, but what it actually is. You implement it as a JSON-LD script in your HTML, and it gives Google the opportunity to show your content as "rich results" in search: star ratings, prices, FAQ accordions, recipe cards and much more.
The most important Schema.org types for businesses
**LocalBusiness:** Essential for local businesses. Specifies name, address, phone, opening hours, priceRange and serviceArea. Use a specific subtype such as MarketingAgency, Restaurant or Plumber rather than the generic LocalBusiness.

**FAQPage:** Implement on pages with question-and-answer content. Can trigger an FAQ accordion directly in search results and significantly increase the space your listing occupies. Particularly effective for service pages and blog articles with FAQ sections.

**BlogPosting:** For all blog articles. Specifies headline, author, datePublished, dateModified and description. Helps Google understand and validate article content, and increases the likelihood of inclusion in AI Overviews and other rich features.

**BreadcrumbList:** Combine with visible breadcrumb links on the page. Shows the breadcrumb path instead of the URL in search results, giving clearer context and often a higher CTR than a long URL string.
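As a sketch, a LocalBusiness block with a specific subtype could look like this. All business details below are placeholders to be replaced with your own:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "MarketingAgency",
  "name": "Example Agency",
  "url": "https://yourdomain.com/",
  "telephone": "+45 00 00 00 00",
  "priceRange": "$$",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "Example Street 1",
    "addressLocality": "Aarhus",
    "postalCode": "8000",
    "addressCountry": "DK"
  },
  "openingHours": "Mo-Fr 09:00-16:00"
}
</script>
```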
How to test structured data
Google offers the free Rich Results Test (search.google.com/test/rich-results) — enter a URL to see whether your structured data is correctly implemented and which rich results the page is eligible for. You can also check implementation in Google Search Console under "Rich results" in the left menu. Remember: correct implementation is necessary but not sufficient — Google decides whether to show rich results based on context and the search query.
JSON-LD over Microdata: Google recommends JSON-LD as the preferred implementation method for structured data. It is easier to maintain because it is separated from the HTML content, and it can be placed in `<head>` or `<body>` without any difference in effect.
AI Overviews and structured data
With Google's AI Overviews — the AI-generated summary at the top of search results — structured data has become even more important. Pages with correctly implemented structured data, particularly FAQPage and BlogPosting, appear to be cited more often. This has not been rigorously documented, but practical experience points clearly in that direction.
6. HTTPS, security and HTTP headers
HTTPS is an absolute minimum in 2026. Google has used HTTPS as a ranking factor since 2014, and since 2018 Chrome has marked all HTTP pages as "Not secure" with a clear icon in the address bar. If your site is still running on HTTP, this is an issue that needs to be resolved immediately — both for SEO and for user trust.
Security headers: Low-hanging fruit
Beyond HTTPS, you can improve your site's security profile with HTTP response headers. Most web servers and hosting platforms make it straightforward to add them — in Caddy it is just a few lines in the configuration file:
- Strict-Transport-Security (HSTS): Forces browsers to always use HTTPS for your domain, even if the user types HTTP
- X-Content-Type-Options: Prevents MIME type sniffing and reduces the risk of certain XSS attacks
- X-Frame-Options: Protects against clickjacking by preventing your page from being embedded in iframes on other sites
- Referrer-Policy: Controls which referrer information is sent during navigation — good practice for user privacy
- Permissions-Policy: Controls which browser features (camera, microphone, geolocation) the page can access
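In Caddy, for example, all five headers can be set with the `header` directive in the Caddyfile. The domain and the exact values below are a sketch to adjust to your own needs:

```
yourdomain.com {
    header {
        # Force HTTPS for one year, including subdomains
        Strict-Transport-Security "max-age=31536000; includeSubDomains"
        # Block MIME type sniffing
        X-Content-Type-Options "nosniff"
        # Only allow framing from the same origin
        X-Frame-Options "SAMEORIGIN"
        # Send full referrer only within the same origin
        Referrer-Policy "strict-origin-when-cross-origin"
        # Deny access to sensitive browser features
        Permissions-Policy "camera=(), microphone=(), geolocation=()"
    }
}
```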
Redirect chains: Minimise them
Redirects are necessary — when you change a URL, move content or consolidate pages. But redirect chains, where one redirect points to another redirect which points to yet another, are problematic. They waste crawl budget, introduce latency and dilute link juice. Keep your redirects direct (301-redirect from old URL to final URL) and clean up existing chains regularly.
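To illustrate the cleanup step, here is a small Python sketch (a hypothetical helper, not a specific tool) that collapses a 301 redirect map so every old URL points straight at its final destination:

```python
def flatten_redirects(redirects):
    """Collapse redirect chains so each old URL points directly at
    its final destination — crawlers and visitors hop at most once.

    redirects: dict mapping source path -> redirect target (a 301 map).
    Cycles are cut by tracking already-visited URLs.
    """
    def final_target(url):
        seen = set()
        while url in redirects and url not in seen:
            seen.add(url)
            url = redirects[url]
        return url

    return {src: final_target(dst) for src, dst in redirects.items()}

# A chain /old -> /newer -> /newest becomes two direct redirects:
chain = {"/old": "/newer", "/newer": "/newest"}
print(flatten_redirects(chain))  # {'/old': '/newest', '/newer': '/newest'}
```

The flattened map can then be exported to your server's redirect configuration so no redirect points at another redirect.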
7. Mobile-first indexing
Since 2019 Google has primarily used the mobile version of your page to index and rank it. This means that if your mobile page has different or incomplete content compared to your desktop page, you rank on the basis of the mobile version — which may be inferior. Mobile-first indexing is not an option that can be switched on or off. It is simply how Google works.
What to check
- Responsive design that works correctly on all screen sizes — test with Lighthouse in Chrome DevTools (Google retired its standalone Mobile-Friendly Test in 2023)
- Same content on mobile and desktop — do not hide critical text behind tabs or accordions that only appear on desktop
- Touch targets minimum 44px × 44px — buttons and links that are too small are both a usability and a ranking issue
- No horizontal overflow requiring horizontal scrolling — use `overflow-x: hidden` on both `html` and `body`
- Text readable without zooming — a minimum 16px base font size is recommended
- No pop-ups or interstitials that cover content on mobile — Google actively penalises intrusive interstitials
Remember Googlebot-Smartphone: Pages that are blocked by robots.txt for Google's mobile crawler but accessible to the desktop crawler are a serious problem. Check specifically in your robots.txt that you are not accidentally blocking mobile crawlers from parts of your site.
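You can verify this locally with Python's built-in robotparser. A quick sketch using the example rules from earlier in this guide:

```python
from urllib import robotparser

# The wildcard rules below mirror the example robots.txt shown earlier;
# they apply to Googlebot-Smartphone as well as the desktop crawler.
robots_txt = """\
User-agent: *
Disallow: /admin/
Disallow: /checkout/
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Check what Google's mobile crawler is allowed to fetch
print(rp.can_fetch("Googlebot-Smartphone", "/blog/technical-seo"))  # True
print(rp.can_fetch("Googlebot-Smartphone", "/admin/settings"))      # False
```

In production, point the parser at your live file with `set_url("https://yourdomain.com/robots.txt")` and `read()` instead of parsing a string.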
Core Web Vitals on mobile are what matter most
The Google Core Web Vitals data used for ranking is primarily based on mobile data from Chrome users. It is not uncommon to see pages scoring 90+ on desktop PageSpeed Insights but only 40–60 on mobile. Always check mobile scores specifically, and prioritise mobile optimisation over desktop optimisation if you need to choose.
8. Technical SEO checklist
Here is a practical checklist for a thorough technical SEO review. Work through the points one at a time — most can be verified with the free tools mentioned in the next section.
**Crawling and indexing**
- robots.txt allows crawling of important pages
- XML sitemap created and submitted to GSC
- No important pages accidentally blocked with noindex
- No broken internal links (404 errors)
- HTTPS on all pages with a valid certificate

**Duplicate content**
- Canonical tags on all pages
- www vs. non-www redirects to one version
- HTTP redirects to HTTPS
- URL parameters handled correctly
- No identical pages at different URLs without a canonical

**Core Web Vitals**
- LCP under 2.5 seconds
- INP under 200ms
- CLS under 0.1
- Images have explicit dimensions
- Hero images are WebP format with preload and fetchpriority
- font-display: swap implemented

**Mobile**
- Responsive design that works on all screen sizes
- Touch targets minimum 44px
- No horizontal overflow
- Same content on mobile and desktop
- Text readable without zooming

**Structured data**
- JSON-LD implemented for relevant schema types
- LocalBusiness on homepage and service pages
- BreadcrumbList on subpages
- FAQPage on pages with Q&A content
- Validated with Rich Results Test

**Security**
- Valid SSL certificate
- HSTS header enabled
- X-Content-Type-Options header
- X-Frame-Options header
- No mixed content (HTTP resources on HTTPS page)
- Direct redirects without chains

**URL structure**
- Short, descriptive URLs
- Hyphens instead of underscores
- No special characters in slugs
- Flat hierarchy (max 3–4 levels)
- Breadcrumbs on all subpages
- Internal links to important pages from homepage and navigation

**International (hreflang)**
- hreflang tags on all pages with language versions
- Bidirectional: the Danish page points to English, the English page points to Danish
- x-default points to primary version
- hreflang in sitemap.xml
- Canonical and hreflang are consistent
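Bidirectional hreflang for a Danish/English page pair looks like this in the `<head>` of both versions (placeholder URLs; the same three tags must appear on both pages):

```html
<link rel="alternate" hreflang="da" href="https://yourdomain.com/da/teknisk-seo/">
<link rel="alternate" hreflang="en" href="https://yourdomain.com/en/technical-seo/">
<link rel="alternate" hreflang="x-default" href="https://yourdomain.com/en/technical-seo/">
```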
9. Tools for technical SEO
You do not need expensive enterprise solutions to run a solid technical SEO audit. Here are the most valuable tools — starting with the free ones:
- Google Search Console (free): The single most important tool. Shows indexing status, Core Web Vitals, crawl errors, manual actions and performance data. Use it actively — not only when something goes wrong.
- Google PageSpeed Insights (free): Run Core Web Vitals tests on individual URLs. Provides specific improvement suggestions with estimated time savings. Remember to test both mobile and desktop.
- Screaming Frog SEO Spider (free up to 500 URLs): Desktop crawler that crawls your site like Googlebot. Finds broken links, duplicate content, missing meta tags, redirect chains and much more. Indispensable for technical audits.
- Rich Results Test (free): Google's own tool for validating structured data implementation. Shows precisely which rich results a page is eligible for.
- Ahrefs Site Audit (paid): Complete technical audit with prioritised issues, trends over time and detailed reporting. Particularly valuable for larger sites with many pages.
- Semrush Site Audit (paid): Alternative to Ahrefs with broad technical audit functionality. Included in Semrush packages that also cover keyword research and competitor analysis.
See our full overview of recommended marketing tools for further recommendations within SEO, analytics and optimisation.
Always start with Google Search Console. It is free, uses real data from Google, and is the direct communication channel between Google and you as a website owner. If you are not set up yet, that is the first thing you should do today — verification takes only five minutes.
Want a technical SEO health check?
We run a thorough technical SEO audit of your website — Core Web Vitals, crawling, structured data, URL architecture and much more. You get a prioritised list of concrete improvements with no obligation.
See our SEO service.