x-robots-tag
How to implement the X-Robots-Tag from the server, and how it differs from the robots meta tag

- Author:
- Carlos Sánchez
- Topics:
- Meta tags,
- servers
- Publication Date:
- 2026-01-08
- Last Review:
- 2026-01-16
The X-Robots-Tag is an HTTP header alternative to the HTML meta tag that is applied from the server.
In a "conflict" between a robots meta tag and an X-Robots-Tag, the more restrictive directive wins. For example, if one source says index and the other says noindex, the more restrictive one (noindex) prevails, regardless of whether it appears in the meta tag or in the HTTP header.
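The most-restrictive-wins rule can be sketched in code. The following is my own simplification for illustration (not how any crawler is actually implemented), assuming each source supplies a comma-separated directive string:

```python
# Illustrative sketch: merging robots directives from a meta tag and an
# X-Robots-Tag header, where the restrictive form wins over its
# permissive counterpart (noindex beats index, nofollow beats follow).

RESTRICTIVE = {"index": "noindex", "follow": "nofollow"}

def effective_directives(meta: str, header: str) -> set[str]:
    """Union of directives from both sources, most restrictive wins."""
    directives = set()
    for source in (meta, header):
        directives |= {d.strip().lower() for d in source.split(",") if d.strip()}
    # Drop the permissive form whenever its restrictive counterpart is present.
    for permissive, restrictive in RESTRICTIVE.items():
        if restrictive in directives:
            directives.discard(permissive)
    return directives

print(sorted(effective_directives("index, follow", "noindex")))
# ['follow', 'noindex']
```

Here the meta tag says "index, follow" and the header says "noindex"; the result keeps noindex, exactly as described above.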
Among the advantages of implementing directives via the X-Robots-Tag, we can find:
- Flexibility for global implementations, as Regex can be used.
- The ability to apply robots directives to files that do not have HTML, such as images, videos, or PDFs.
- Quick meta robots configuration when access to the website is limited or poorly structured.
It also shares the remaining advantages and disadvantages of the standard meta robots tag. For example, neither will be seen by crawlers if the URL is blocked via robots.txt or blocked at the server, since the directives are only discovered when the URL is actually fetched.
How to Implement the X-Robots-Tag
First, server access is required. On Apache, it is enough to have write access to the .htaccess file in the root folder of your project. On NGINX, you will need to edit the relevant .conf file, apply the changes, and reload (or restart) the service for them to take effect.
All directives valid in the meta robots tag are equally valid in the X-Robots-Tag.
Apache
In Apache, the mod_headers module must be enabled, and the directive should be placed inside an <IfModule mod_headers.c> block. Strictly speaking the wrapper is not mandatory, but I recommend using it in all cases, since it prevents a server error if the module is ever missing.
For example, this is how you would set an X-Robots-Tag for all pages of a project. You can also target specific spiders such as GoogleBot.
<IfModule mod_headers.c>
Header set X-Robots-Tag "noindex, nofollow"
</IfModule>
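On the point about targeting specific spiders: the X-Robots-Tag value itself accepts an optional user-agent prefix, so a directive can be scoped to a single bot. A sketch:

```apache
<IfModule mod_headers.c>
# Only Googlebot is asked not to index; crawlers that do not match
# the "googlebot:" prefix ignore this token.
Header set X-Robots-Tag "googlebot: noindex"
</IfModule>
```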
If you want to target the content of a specific subdirectory (e.g., the test subdirectory), one option is an <If> expression on the request URI:
<If "%{REQUEST_URI} =~ m#^/test/#">
Header set X-Robots-Tag "noindex"
</If>
With Regex, you can also target only specific file types:
<Files ~ "\.(avif|webp|svg)$">
Header set X-Robots-Tag "unavailable_after: 27 Jun 2045 15:00:00 PST"
</Files>
This way, these files will not be indexed after the specified date.
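The date in unavailable_after must be in a format crawlers can parse (Google documents RFC 822/850-style dates). As a quick sanity check, the date portion of the value from the example above can be parsed offline; here the "PST" suffix is split off first, since strptime's %Z handling is unreliable across platforms:

```python
from datetime import datetime

# Parse the date portion of the unavailable_after value from the
# Apache example above (zone suffix removed before parsing).
value = "unavailable_after: 27 Jun 2045 15:00:00 PST"
date_part = value.split(":", 1)[1].strip().rsplit(" ", 1)[0]
deadline = datetime.strptime(date_part, "%d %b %Y %H:%M:%S")

print(deadline.isoformat())  # 2045-06-27T15:00:00
```

If strptime raises a ValueError, the date string is malformed and crawlers would likely ignore the directive.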
NGINX
Although NGINX is generally more efficient than Apache as a server, it has a small limitation for this kind of SEO configuration: every change requires reloading (or restarting) the service before it takes effect, which is inconvenient if you adjust these settings frequently. It is the price of efficiency, but the changes are straightforward: they go inside the server {} block of the .conf file.
To apply the change server-wide, it would be as simple as adding the header directly inside the server {} block:
add_header X-Robots-Tag "noindex, nofollow";
To target a particular format, or several formats using regex, it would look like this (reusing the image extensions from the Apache example):
location ~* \.(avif|webp|svg)$ {
add_header X-Robots-Tag "noindex";
}
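Since a "location ~*" block is just a case-insensitive regex on the request path, the pattern can be sanity-checked offline before reloading NGINX. A small check in Python, assuming the avif/webp/svg pattern used above:

```python
import re

# Case-insensitive regex equivalent to an NGINX "location ~*" pattern
# matching avif, webp and svg files.
pattern = re.compile(r"\.(avif|webp|svg)$", re.IGNORECASE)

for path in ["/img/logo.WEBP", "/docs/manual.pdf", "/icons/arrow.svg"]:
    print(path, bool(pattern.search(path)))
# /img/logo.WEBP True
# /docs/manual.pdf False
# /icons/arrow.svg True
```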
And for a specific file (here a hypothetical /private/report.pdf):
location = /private/report.pdf {
add_header X-Robots-Tag "noindex, nofollow";
}