For each query, the search engine selects relevant results – pages that match the topic – ranks them, and displays them in a list. According to research, 99% of users find the information they are looking for on the first page of search results and do not scroll further. And the higher a site’s position in the top 10, the more visitors it attracts.
Before arranging resources in a particular order, search engines evaluate them against a number of parameters. This improves the user experience by surfacing the most useful, convenient, and authoritative options.
What is website optimization?
Website optimization, or SEO (Search Engine Optimization), is a set of actions aimed at improving the quality of a resource and adapting it to the recommendations of search engines.
SEO helps get the site’s content into the index, improve its pages’ positions in the ranking, and increase organic (that is, free) traffic. Technical website optimization is an important stage of SEO focused on the site’s inner workings, which are usually hidden from users but visible to search robots.
Pages are HTML documents, and what appears on the screen is the result of the browser rendering HTML code, which affects not only the appearance of the site but also its performance. Server files and internal resource settings can affect how a search engine crawls and indexes the site.
SEO includes analyzing the technical parameters of the site, identifying problems, and eliminating them. This helps improve ranking positions, outperform competitors, and increase traffic and profits.
How to spot SEO problems on a website?
The optimization process should start with an SEO audit – an analysis of the site against a variety of criteria. There are tools that evaluate individual indicators, for example page status, loading speed, mobile responsiveness, and so on. An alternative is to audit the site on a platform built for SEO specialists.
One example is the SE Ranking service, which combines various analytical tools. The result of the SEO analysis is a comprehensive report. To start analyzing a site online, create a project, specify your resource’s domain in the settings, and go to the “Site Analysis” section. One of its tabs, “Error report”, displays the identified optimization problems.
All site parameters are divided into blocks: “Security”, “Content duplication”, “Loading speed”, and others. Clicking on any of the problems opens its description and recommendations for fixing it. After technical SEO optimization and making adjustments, re-run the site audit. The “Fixed” column shows whether the errors have been resolved.
Technical optimization errors and how to fix them
Code fragments on pages, internal files, and site settings can negatively affect a site’s performance. Let’s break down common SEO problems and find out how to fix them.
Lack of HTTPS protocol
HyperText Transfer Protocol Secure (HTTPS), the prefix at the start of a page’s URL, is a more secure alternative to the HTTP connection protocol. It encrypts and protects user data. Many browsers today flag pages loaded over HTTP and display a warning on the screen.
Search engines take connection security into account when ranking, and if a site still uses the HTTP version, this is a disadvantage not only for visitors but also for its positions in the SERP.
How to fix
To move a resource to HTTPS, you need to obtain an SSL/TLS certificate and then renew it before it expires. You can configure an automatic redirect from the HTTP version in the .htaccess configuration file.
After switching to the secure protocol, it is useful to audit the site to make sure everything is set up correctly and, if necessary, replace any internal links that still point to HTTP URLs (mixed content).
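As a minimal sketch, assuming the site runs on an Apache server with mod_rewrite enabled, a permanent redirect from HTTP to HTTPS in .htaccess could look like this:

```apache
# Redirect all HTTP requests to the HTTPS version (301 = permanent)
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```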
The site does not have a robots.txt file
The robots file is placed in the root folder of the site, and its content is available at website.com/robots.txt. The file tells search engines which content should be crawled and which should not. Robots consult it first and only then start crawling the site.
Limiting the crawling of files and folders is especially important for saving the crawl budget – the total number of URLs a robot can crawl on a given site. If the instructions for crawlers are missing or written incorrectly, this can lead to problems with how pages appear in search results.
How to fix
Create a text file called robots.txt in the root folder of the site and use directives to write crawling recommendations for pages and directories inside it. The file can specify the types of robots (user-agent) the rules apply to, restricting and allowing commands (disallow, allow), as well as a link to the sitemap (sitemap).
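A minimal sketch of such a file, assuming website.com is your domain and /admin/ is a hypothetical directory you want to keep out of the crawl:

```
# Rules apply to all crawlers
User-agent: *
# Keep the admin area out of the crawl
Disallow: /admin/
# Allow the rest of the site
Allow: /
# Point crawlers to the sitemap
Sitemap: https://website.com/sitemap.xml
```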
Problems with the Sitemap.xml file
A sitemap is a file containing a list of the resource’s URLs that a crawler should visit. Having a sitemap.xml is not a prerequisite for getting pages into the index, but in many cases the file helps the search engine find them.
Processing an XML sitemap can be difficult if it is larger than 50 MB or contains more than 50,000 URLs. Another problem is the presence in the map of pages closed from indexing with the noindex meta tag. If the site uses canonical links to distinguish primary pages from similar ones, only the priority URLs for indexing should be listed in the sitemap file.
How to fix
If there are many URLs in the sitemap and its size exceeds the limit, split the file into several smaller ones. XML sitemaps can be created not only for pages but also for images or videos. Include links to all sitemaps in your robots.txt file.
If the SEO audit reveals inconsistencies, remove from the sitemap the pages whose code contains the noindex directive. Also make sure that only canonical URLs are listed in the sitemap.
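When a sitemap is split, the parts are usually tied together with a sitemap index file. A minimal sketch, assuming website.com is your domain and the file names are hypothetical:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- First part of the split sitemap -->
  <sitemap>
    <loc>https://website.com/sitemap-pages.xml</loc>
  </sitemap>
  <!-- Separate sitemap for images -->
  <sitemap>
    <loc>https://website.com/sitemap-images.xml</loc>
  </sitemap>
</sitemapindex>
```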
Duplicate content
One of the important ranking factors is the uniqueness of content. It is unacceptable not only to copy texts from competitors but also to duplicate them within your own site. This problem is especially relevant for large resources, for example online stores, where product descriptions differ only minimally.
Duplicate pages can end up in the index when the site’s “mirror” is missing or set up incorrectly – that is, there is no redirect between the site name with www and without it. In this case, the search engine indexes two identical pages, for example www.website.com and website.com.
Duplication also occurs when content is copied within the site without setting canonical links that tell the search engine which of the similar pages is the priority for indexing.
How to fix
Set up the www redirect and run an SEO audit to check whether any duplicates remain on the site. When creating pages with minimal differences, use canonical links to tell the crawler which of them to index. To avoid confusing search engines, a non-canonical page should contain the rel="canonical" tag for only one URL.
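A minimal sketch of a canonical link, assuming /catalog/red-shoes/ is a hypothetical product page whose variants (for example, sorted or filtered views) should not compete with it in the index:

```html
<!-- Placed in the <head> of every duplicate or variant page -->
<link rel="canonical" href="https://website.com/catalog/red-shoes/">
```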
Pages giving error codes
Before displaying a page on the screen, the browser sends a request to the server. If the URL is available, it returns the successful HTTP status 200 OK. If a problem arises and the server cannot complete the request, the page returns a 4XX or 5XX error code. This leads to negative consequences for the site such as:
- Worse behavioral factors. If, instead of the requested page, the user sees an error message such as “Page Not Found” or “Internal Server Error”, they cannot obtain the required information or complete the target action.
- Exclusion of content from the index. If the robot is unable to crawl a page for a long time, the page can be removed from the search engine’s index.
- Wasted crawl budget. Robots try to crawl a URL regardless of its status. If there are many pages with errors on the site, the crawl limit is spent pointlessly.
How to fix
After analyzing the site, find pages with 4XX and 5XX statuses and determine the cause of each error. If a page has been deleted, the search engine will eventually exclude it from the index; the URL Removal Tool can help speed up this process. To catch problem pages in a timely manner, periodically repeat the search for problems on the site.
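To spot-check the status of a single URL without running a full audit, you can query it from the command line. A minimal sketch using curl; website.com/old-page is a hypothetical address:

```bash
# Print only the HTTP status code of the response (e.g. 200, 404, 500)
curl -s -o /dev/null -w "%{http_code}\n" https://website.com/old-page
```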
Incorrect setting of redirects
A redirect forwards the browser from the requested URL to another one. It is usually set up when a page’s address changes or the page is deleted, so that the user is sent to the current version.
The advantage of redirects is that they happen automatically and quickly. They can also be useful for SEO when you need to pass the authority accumulated by the original page to the new one.
But when setting up redirects, problems often arise, such as:
- too long a chain of redirects – the more URLs it contains, the later the final page is displayed;
- looped (circular) redirects, when a page links to itself or the final URL redirects back to one of the previous links in the chain;
- a broken URL in the redirect chain;
- too many pages with redirects – this eats into the crawl budget.
How to fix
Conduct an SEO audit of the site and find pages with 3XX status. If there are redirect chains of three or more URLs among them, shorten them to two addresses – the original and the final destination. If you find looped redirects, correct their sequence. Pages with a 4XX or 5XX error status must either be made available or removed from the chain.
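To see the full chain a single URL goes through, you can follow its redirects from the command line. A minimal sketch with curl; website.com/old-url is a hypothetical address:

```bash
# -I requests headers only, -L follows redirects;
# the output shows each intermediate status and Location header
curl -sIL https://website.com/old-url | grep -iE "^(HTTP|location)"
```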
Slow page loading speed
Page loading speed is an important usability criterion that search engines take into account when ranking. If content takes too long to load, the user may not wait and will leave the resource.
Google uses the Core Web Vitals metrics when ranking a site; the LCP (Largest Contentful Paint) and FID (First Input Delay) values reflect speed. The recommended loading time for the main content (LCP) is up to 2.5 seconds. The response time to interaction with page elements (FID) should not exceed 0.1 seconds.
Common factors that can negatively affect loading speed include:
- images that are too heavy in file size and too large in dimensions;
- uncompressed text content;
- heavy HTML code and the files linked from it.
How to fix
Aim to keep your HTML pages under 2 MB. Pay particular attention to the site’s images: choose the correct file format, compress them without noticeable quality loss using dedicated tools, and scale down photos that are too large in a graphic editor or via the site’s control panel.
It is also useful to configure text compression. With GZIP compression, the server reduces the size of the transmitted data (signaled by the Content-Encoding header), and the content loads faster in the browser.
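A minimal sketch of enabling GZIP compression for text content, assuming an Apache server with mod_deflate available (added to .htaccess):

```apache
<IfModule mod_deflate.c>
  # Compress HTML, CSS and JavaScript responses before sending them
  AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
```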
Lack of mobile optimization
When a site is only suitable for large screens and is not optimized for smartphones, visitors have trouble using it. This negatively affects behavioral factors and, as a result, ranking positions.
The font may be too small to read. If interface elements are placed too close to each other, it is easy to tap the wrong button or link unless you zoom in on part of the screen. Often a page loaded on a smartphone extends beyond the screen, and you have to scroll horizontally to view the content.
Problems with the mobile version are indicated by the absence of the viewport meta tag, which is responsible for adapting the page to screens of different formats, or by filling it in incorrectly. The instability of page elements during loading is reported by the Core Web Vitals metric CLS (Cumulative Layout Shift); its recommended value is no more than 0.1.
How to fix
As an alternative to a separate mobile version, you can build the website with a responsive design. In this case its appearance, layout, and block sizes will adapt to the screen size of each user.
Make sure the viewport meta tag is present in the HTML of your pages. Its width should be set to device-width rather than a fixed value, so that the page adapts to the screen of a PC, tablet, or smartphone.
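A minimal sketch of such a tag, placed in the <head> of the page:

```html
<!-- Let the page width follow the device width instead of a fixed value -->
<meta name="viewport" content="width=device-width, initial-scale=1">
```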
Lack of alt-text for images
In the HTML code of a page, <img> tags are responsible for visual content. In addition to the link to the file itself, the tag can contain alternative text with a description of the image and keywords.
If the alt attribute is empty, it is harder for the search engine to determine what the photo is about. As a result, the site will not be able to attract additional traffic from the “Pictures” section, where the search engine displays images relevant to the query. The alt text is also shown in place of the photo when the browser cannot load it, and it is read aloud for users of voice assistants and screen readers.
How to fix
Add alternative text to the site’s images. This can be done after installing an SEO plugin for your CMS, which adds special fields to the image settings. We recommend filling in the alt attribute with a few words. Adding key phrases is fine, but don’t overload the description with them.
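A minimal sketch of the result in HTML; the file name and description are hypothetical:

```html
<!-- Short, descriptive alt text that mentions a relevant key phrase naturally -->
<img src="/images/red-running-shoes.jpg" alt="Red women's running shoes, side view">
```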
Technical errors negatively affect both how users perceive the site and the positions of its pages in the ranking. To optimize a resource in line with search engine recommendations, you must first conduct an SEO audit and identify internal problems. Platforms that perform a comprehensive site analysis are well suited to this task.
Common optimization problems include:
- the site address using HTTP instead of the secure HTTPS protocol;
- missing or incorrect content of robots.txt and sitemap.xml files;
- slow loading of pages;
- incorrect display of the site on smartphones;
- heavy HTML, CSS, and JS files;
- duplicate content;
- pages with error codes 4XX, 5XX;
- incorrectly configured redirects;
- images without alt text.
If technical optimization problems are found and fixed in time, this helps promotion: the site’s pages will reach and maintain high positions in the ranking.