Robots Txt Invalid WHY? (My Panic Story P1)



Every blogger, whether new or experienced, will run into problems with their website at some point.

This is one of the problems I encountered as a blogger.

I was working on a report for my website when an SEO audit revealed that my robots.txt file was invalid.

If you have a website, or want to create one, you will find that the robots.txt file is a core part of it.

What does the robots.txt file do?

The robots.txt file gives search engine bots permission and guidance on which parts of your website to crawl and index.
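For context, a typical robots.txt is just a few plain-text directives. A minimal sketch (the paths and sitemap URL here are illustrative, not a recommendation) might look like:

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Sitemap: https://example.com/sitemap.xml
```

`User-agent: *` applies the rules to every bot, while `Disallow` and `Allow` mark which paths may or may not be crawled.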

And, because I didn’t have much experience with it, having such a critical part of my website flagged as invalid was worrying.

It was also confusing, since sometimes my robots.txt file was reported as valid and at other times as invalid.

The first step I took to understand why my robots.txt was invalid


The first thing I did was double-check that all of my directives were correct and that I hadn’t blocked any search engine bots from crawling my website.

Most of the time, when we encounter a technical problem, we are advised to seek assistance from a technical team or an expert.

There was nothing wrong with that advice, but as a newbie I didn’t have a technical team, nor could I afford one.

So I decided I had to be the genius in the room, even though I had no idea how to solve the issue.

The generated report revealed that my website’s PageSpeed Insights performance score was very poor.

It prompted my genius self to wonder, “Does my page speed have anything to do with it?”

As a result, I checked my performance report, which revealed that the features added with the help of Elementor were slowing down my page speed.

This led me to connect whatever dots I could find between page speed and the robots.txt file, to better understand the problem on my website.

I ultimately concluded that my performance report had nothing to do with my robots.txt file being invalid.

Since my first assumption was completely wrong, I opted not to explore further and instead checked the Google Developers documentation for the reason behind it.

Google’s suggestions: robots.txt invalid

The Google Developers site suggested two explanations for why my robots.txt file could be invalid.

  • A disallow rule that prevents search engine bots from crawling and indexing the page.
  • An HTTP 5xx error code.
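The first case is easy to check by eye. A robots.txt that blocks every bot from the entire site looks like this:

```
User-agent: *
Disallow: /
```

A lone `Disallow: /` under `User-agent: *` shuts out all crawlers, so make sure your file does not contain this unless you genuinely intend it.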

After considering the two possibilities, it was clear that the first did not apply in my circumstance.

As a result, I decided to look up the HTTP status code for my robots.txt file.

Here I discovered that the HTTP status code was 304.
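If you want to check this yourself, here is a small Python sketch (the function names are my own, not from any library) that asks a server for the status code of its robots.txt:

```python
import urllib.error
import urllib.request


def robots_txt_url(domain: str) -> str:
    """robots.txt always lives at the root of the host."""
    return f"https://{domain}/robots.txt"


def robots_txt_status(domain: str) -> int:
    """Return the HTTP status code the server sends for robots.txt."""
    request = urllib.request.Request(robots_txt_url(domain), method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            return response.status
    except urllib.error.HTTPError as err:
        # urllib raises on 4xx/5xx responses; the status code is on the exception.
        return err.code
```

Over a live connection, `robots_txt_status("example.com")` would typically return 200, while a cached response can come back as 304.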

So, is HTTP 304 an error? What exactly is it?

Let me illustrate this with an example.

Let’s generate an SEO report for the domain example.com.

To produce a report:

Lighthouse
  • Right-click on the example.com domain’s page and select Inspect.
  • Open the Lighthouse tab and click Generate report.
  • Then switch to the Network tab to examine the status code of each request made by that site.

Here it shows that the HTTP status code is 200.

What does HTTP Status mean?

The status code is an HTTP response code used to communicate between the client and the server.

HTTP predefined status codes tell us if the code was successful, unsuccessful, or somewhere in between.

Developers have grouped status codes into blocks, which specify the sort of response a server sends back to a client request.

Here, the client is typically a web browser, while the server is the machine hosting the website.

What are the different HTTP Status Codes?

  • 100-199 (first block: Information)

It indicates that the server has received the request and is continuing to process it before sending a final response.

  • 200-299 (second block: Success)

It denotes a successful response, i.e. the page does not include any errors.

You can see the example I used also showed that the HTTP status code is 200.

example.com domain shows HTTP 200

It implies that the URL returned as a response is fully operational.

Therefore, this shows that the request sent by the client was successfully received, understood, and accepted.

Hence everything on the example.com web page is functioning properly, and search engine bots can effectively crawl and index the site.

  • 300-399 (third block: Redirection)

This block indicates that the client needs to take further action to complete the request, usually because the requested URL has moved to a different address.

For example, the request from the client may have multiple responses, or the URL may have been changed to a different address, etc.

So, why was my robots.txt being declared invalid?

HTTP 304 represents a “Not Modified” response.

Status code 304 is therefore sent as a response when the requested URL has not been modified.

It indicates that there is no point in requesting an updated version of that page when the cached data can be used instead.

You can see this same 304 response if you try to generate a report on the example.com domain page for the second time.

example.com domain shows HTTP 304
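Behind the scenes, a 304 comes from a conditional request: the browser re-sends the validator (an ETag) it cached earlier, and the server compares it with the current one. Here is a rough sketch of that server-side decision, with a made-up `respond` helper:

```python
from typing import Optional


def respond(current_etag: str, if_none_match: Optional[str]) -> int:
    """Decide between 200 (send the body) and 304 (client cache is fresh)."""
    if if_none_match is not None and if_none_match == current_etag:
        return 304  # Not Modified: the cached copy can be reused
    return 200  # OK: send the full, current resource


# First visit: the client has no cached ETag yet, so the server sends 200.
first_visit = respond('"abc123"', None)
# Second visit: the client presents the saved ETag and gets 304 back.
second_visit = respond('"abc123"', '"abc123"')
```

This is why the second Lighthouse run sees a 304: nothing on the page changed between the two requests, so the server tells the browser to reuse its cache.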
  • 400-499 (fourth block: Client Error)

This status code denotes an error on the client side.

For instance, the requested URL may not exist on the server, i.e. the server is unable to recognise the URL.

This typically happens when the request syntax is wrong or the server cannot make sense of the URL.

  • 500-599 (fifth block: Server error)

This code describes a situation where the server cannot support or process a client-side request.

Therefore, it indicates that the server has encountered a problem.

Here is a browser compatibility chart which displays which status codes the various web browsers do or do not support.

See full image: Mozilla
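The five blocks above can be summed up in a tiny helper; the function name here is mine, not part of any library:

```python
def status_block(code: int) -> str:
    """Map an HTTP status code to the block names described above."""
    if 100 <= code <= 199:
        return "Information"
    if 200 <= code <= 299:
        return "Success"
    if 300 <= code <= 399:
        return "Redirection"
    if 400 <= code <= 499:
        return "Client Error"
    if 500 <= code <= 599:
        return "Server Error"
    return "Unknown"


print(status_block(200))  # Success
print(status_block(304))  # Redirection
```

Note that 304 falls in the Redirection block, not in either error block, which is exactly why it is not an error in itself.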

Conclusion

When you encounter an issue in a Lighthouse-generated report, use this simple status-code approach to understand your error.

It can save you a lot of time and help you seek a solution more efficiently.

I hope I was able to give you some insight into how to quickly identify and resolve an issue, as well as why your robots.txt file is reported as invalid.

Please share your thoughts in the comments section below.

What’s next?

Understand how a sitemap works.
