Google's John Mueller responded to a question about why Google indexes pages that are disallowed from crawling by robots.txt, and why it's safe to ignore the related Search Console reports about those crawls.

Bot Traffic To Query Parameter URLs

The person asking the question documented that bots were creating links to non-existent query parameter URLs (?q=xyz) pointing at pages that carry noindex meta tags and are also blocked in robots.txt. What prompted the question is that Google is crawling the links to those pages, getting blocked by robots.txt (without seeing the noindex robots meta tag), then showing them in Google Search Console as "Indexed, though blocked by robots.txt."

The person asked the following question:

"But here's the big question: why would Google index pages when they can't even see the content? What's the advantage in that?"

Google's John Mueller confirmed that if Google can't crawl the page, it can't see the noindex meta tag. He also makes an interesting mention of the site: search operator, advising to ignore the results because "average" users won't see them.

He wrote:

"Yes, you're correct: if we can't crawl the page, we can't see the noindex. That said, if we can't crawl the pages, then there's not a lot for us to index. So while you might see some of those pages with a targeted site:-query, the average user won't see them, so I wouldn't fuss over it. Noindex is also fine (without robots.txt disallow), it just means the URLs will end up being crawled (and end up in the Search Console report for crawled/not indexed -- neither of these statuses cause issues to the rest of the website).
The important part is that you don't make them crawlable + indexable."

Takeaways:

1. Mueller's answer confirms the limitations of using the site: advanced search operator for diagnostic purposes. One of those limitations is that it's not connected to the regular search index; it's a separate thing altogether.

Google's John Mueller discussed the site: search operator in 2021:

"The short answer is that a site: query is not meant to be complete, nor used for diagnostics purposes.

A site query is a specific kind of search that limits the results to a certain website. It's basically just the word site, a colon, and then the site's domain.

This query limits the results to a specific website. It's not meant to be a comprehensive collection of all the pages from that website."

2. A noindex tag without a robots.txt disallow is fine for these kinds of situations, where a bot is linking to non-existent pages that are getting discovered by Googlebot.

3. URLs with the noindex tag will generate a "crawled/not indexed" entry in Search Console, and those entries won't have a negative effect on the rest of the website.

Read the question and answer on LinkedIn:

Why would Google index pages when they can't even see the content?

Featured Image by Shutterstock/Krakenimages.com
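The mechanism Mueller describes can be sketched with Python's standard-library urllib.robotparser. This is a minimal illustration, not Googlebot's actual logic: the robots.txt rules, the example.com URLs, and the /search path are all hypothetical stand-ins for the query-parameter pages in the question. The point it demonstrates is that a disallow rule blocks the fetch itself, so a noindex meta tag on the page body is never retrieved or evaluated.

```python
from urllib import robotparser

# Hypothetical robots.txt matching the scenario in the question:
# the section of the site that bots link to is disallowed from crawling.
robots_txt = """\
User-agent: *
Disallow: /search
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# A crawler must be allowed to FETCH a page before it can read a
# <meta name="robots" content="noindex"> tag in that page's HTML.
# Because this URL is disallowed, the fetch never happens and the
# noindex directive is never seen -- the URL can still be "indexed,
# though blocked by robots.txt" if external links point at it.
blocked = not rp.can_fetch("Googlebot", "https://example.com/search?q=xyz")

# A page outside the disallowed path can be fetched, so a noindex
# tag on it would be seen and honored.
allowed = rp.can_fetch("Googlebot", "https://example.com/about")

print(blocked, allowed)
```

This is why the takeaways above recommend noindex without a robots.txt disallow for this situation: removing the disallow lets the crawler fetch the page, see the noindex, and drop it into the harmless "crawled/not indexed" report instead.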