
Understanding and resolving ‘Discovered – currently not indexed’


If you see “Discovered – currently not indexed” in Google Search Console, it means Google is aware of the URL, but hasn’t crawled and indexed it yet. 

It doesn’t necessarily mean the page will never be processed. As Google’s documentation says, they may come back to it later without any extra effort on your part. 

But other factors could be preventing Google from crawling and indexing the page, including:

  • Server issues and onsite technical issues restricting or preventing Google’s crawl capability.
  • Issues relating to the page itself, such as quality.

You can also use the Google Search Console URL Inspection API to query URLs for their coverageState status (as well as other useful data points) en masse.
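For example, here is a minimal sketch of that kind of bulk check in Python, calling the URL Inspection API’s index:inspect endpoint with the requests library. It assumes you already hold a valid OAuth 2.0 access token with Search Console scope; the token, property and URLs below are placeholders for your own.

    # A minimal sketch, not production code. The access token, property and
    # URLs are placeholders - swap in your own.
    import requests

    ACCESS_TOKEN = "ya29.your-oauth-access-token"
    SITE_URL = "sc-domain:example.com"   # your verified Search Console property
    ENDPOINT = "https://searchconsole.googleapis.com/v1/urlInspection/index:inspect"

    urls_to_check = [
        "https://example.com/new-page/",
        "https://example.com/another-new-page/",
    ]

    for url in urls_to_check:
        resp = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
            json={"inspectionUrl": url, "siteUrl": SITE_URL},
            timeout=30,
        )
        resp.raise_for_status()
        # coverageState holds statuses such as "Discovered - currently not indexed"
        index_status = resp.json()["inspectionResult"]["indexStatusResult"]
        print(url, "->", index_status.get("coverageState"))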

Request indexing via Google Search Console

This is an obvious resolution and, for the majority of cases, it will resolve the issue.

Sometimes, Google is simply slow to crawl new URLs – it happens. But other times, underlying issues are the culprit. 

When you request indexing, one of two things might happen:

  • The URL becomes “Crawled – currently not indexed”
  • Temporary indexing

Both are symptoms of underlying issues. 

The second happens because requesting indexing sometimes gives your URL a temporary “freshness boost,” which can take the URL above the requisite quality threshold and, in turn, lead to temporary indexing.


Page quality issues

This is where vocabulary can get confusing. I’ve been asked, “How can Google determine the page quality if it hasn’t been crawled yet?”

This is a good question, and the answer is that it can’t.

Google is making an assumption about the page’s quality based on other pages on the domain. Its classifications are likewise based on URL patterns and website architecture.

As a result, moving these pages from “awareness” to the crawl queue can be de-prioritized based on the lack of quality Google has found on similar pages. 

It’s possible that pages with similar URL patterns, or those located in similar areas of the site architecture, have a low-value proposition compared to other pieces of content targeting the same user intents and keywords.

Possible causes include:

  • Main content depth.
  • Presentation. 
  • Level of supporting content.
  • Uniqueness of the content and perspectives offered.
  • Or even more manipulative issues (i.e., the content is low quality and auto-generated, spun, or directly duplicates already established content).

Working on improving the content quality within the site cluster and on the specific pages can have a positive impact on reigniting Google’s interest in crawling your content with greater purpose.

You can also noindex other pages on the website that you recognize aren’t of the highest quality to improve the ratio of good-quality pages to bad-quality pages on the site.

Crawl budget and efficiency

Crawl budget is an often misunderstood mechanism in SEO. 

The majority of websites don’t need to worry about this. In fact, Google’s Gary Illyes has gone on the record claiming that probably 90% of websites don’t need to think about crawl budget. It’s often regarded as a problem for enterprise websites.

Crawl efficiency, on the other hand, can affect websites of all sizes. Overlooked, it can lead to issues with how Google crawls and processes the website.

For example, if your website: 

  • Duplicates URLs with parameters.
  • Resolves with and without trailing slashes.
  • Is available on HTTP and HTTPS.
  • Serves content from multiple subdomains (e.g., https://website.com and https://www.website.com).

…then you might have duplication issues that impact Google’s assumptions on crawl priority based on wider site signals.

You might be zapping Google’s crawl budget with unnecessary URLs and requests. Given that Googlebot crawls websites in portions, this can lead to Google’s resources not stretching far enough to discover all newly published URLs as fast as you would like.

You should crawl your website regularly and make sure that (a quick spot-check of these points is sketched after this list):

  • Pages resolve to a single subdomain (as desired).
  • Pages resolve to a single HTTP protocol.
  • URLs with parameters are canonicalized to the root (as desired).
  • Internal links don’t use redirects unnecessarily.
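If you want a quick way to spot-check this without a full crawl, the sketch below (Python, assuming the requests library) fetches a few hand-picked variants of one URL – protocol, subdomain, trailing slash and parameter versions – and reports where each one ends up. The domain and paths are placeholders for your own site.

    # A minimal spot-check, not a replacement for a full crawl.
    # CANONICAL and the variant URLs are placeholders.
    import requests

    CANONICAL = "https://www.example.com/blue-widgets/"
    variants = [
        "http://www.example.com/blue-widgets/",            # HTTP protocol
        "https://example.com/blue-widgets/",                # non-www host
        "https://www.example.com/blue-widgets",             # missing trailing slash
        "https://www.example.com/blue-widgets/?ref=nav",    # parameter version
    ]

    for url in variants:
        resp = requests.get(url, allow_redirects=True, timeout=10)
        verdict = "OK" if resp.url == CANONICAL else "CHECK"
        print(f"{verdict}: {url} -> {resp.url} ({len(resp.history)} redirect(s))")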

If your website uses parameters, such as ecommerce product filters, you can curb the crawling of these URI paths by disallowing them in the robots.txt file.
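Googlebot supports wildcard patterns in robots.txt, so a handful of Disallow rules can keep faceted filter URLs out of the crawl. The parameter names below (colour, size, sort) are made-up examples; swap in whatever your platform actually uses, and take care not to block parameters you want crawled and indexed.

    # Illustrative robots.txt rules only - replace the parameter names with your own.
    User-agent: *
    Disallow: /*?colour=
    Disallow: /*?size=
    Disallow: /*&sort=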

Your server can also play an important role in how Google allocates the budget to crawl your website.

If your server is overloaded and responding too slowly, crawling issues may arise. In this case, Googlebot won’t be able to access the page, resulting in some of your content not getting crawled. 

As a result, Google will try to come back later to index the website, but it will no doubt cause a delay in the whole process.
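As a rough health check, you can time responses for a sample of URLs and flag anything slow or erroring before it starts to affect crawling. A minimal sketch, assuming the requests library; the URLs are placeholders and the one-second threshold is an arbitrary example, not a Google figure.

    # A rough server health check - sample URLs and threshold are placeholders.
    import requests

    sample_urls = [
        "https://example.com/",
        "https://example.com/category/widgets/",
    ]

    for url in sample_urls:
        try:
            resp = requests.get(url, timeout=10)
            seconds = resp.elapsed.total_seconds()
            flag = "SLOW" if seconds > 1.0 else "OK"
            if resp.status_code >= 500:
                flag = "SERVER ERROR"
            print(f"{flag}: {url} returned {resp.status_code} in {seconds:.2f}s")
        except requests.RequestException as exc:
            print(f"FAILED: {url} ({exc})")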

Internal linking

When you have a website, it’s important to have internal links from one page to another. 

Google usually pays less attention to URLs that don’t have any (or enough) internal links – and may even exclude them from its index.

You can check the number of internal links to pages with crawlers like Screaming Frog and Sitebulb.
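If you just want a rough count for a single page without firing up a crawler, a standard-library sketch like the one below will do; the page URL is a placeholder, and a dedicated crawler remains the right tool for site-wide counts.

    # Counts unique internal <a href> links on one page using only the
    # standard library. PAGE is a placeholder.
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import urlopen

    PAGE = "https://example.com/some-page/"

    class InternalLinkCounter(HTMLParser):
        def __init__(self):
            super().__init__()
            self.internal = set()

        def handle_starttag(self, tag, attrs):
            if tag != "a":
                return
            href = dict(attrs).get("href")
            if not href:
                return
            absolute = urljoin(PAGE, href)
            # Keep only links pointing at the same host
            if urlparse(absolute).netloc == urlparse(PAGE).netloc:
                self.internal.add(absolute)

    html = urlopen(PAGE).read().decode("utf-8", errors="ignore")
    counter = InternalLinkCounter()
    counter.feed(html)
    print(f"{len(counter.internal)} unique internal links found on {PAGE}")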

Having an organized and logical website structure with internal links is the best way to go when it comes to optimizing your website. 

But if you have trouble with this, one way to make sure all of your internal pages are connected is to “hack” into the crawl depth using HTML sitemaps. 

These are designed for users, not machines. Although they may be seen as relics now, they can still be useful.

Additionally, if your website has many URLs, it’s wise to split them up among multiple pages. You don’t want them all linked from a single page.

Internal links also need to use the <a> tag instead of relying on JavaScript functions such as onClick().

If you’re using a Jamstack or JavaScript framework, investigate how it or any related libraries handle internal links. These must be rendered as <a> tags.
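As a rough check, you can flag elements in the served HTML that carry an onclick handler but no href – a common sign of JavaScript-only navigation. This sketch only inspects the raw HTML response; for client-rendered frameworks you would need to run it against the rendered DOM instead. The URL is a placeholder.

    # Flags elements with an onclick attribute and no href in the raw HTML.
    # PAGE is a placeholder.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    PAGE = "https://example.com/"

    class OnClickChecker(HTMLParser):
        def __init__(self):
            super().__init__()
            self.suspects = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if "onclick" in attrs and not attrs.get("href"):
                self.suspects.append((tag, (attrs.get("onclick") or "")[:60]))

    html = urlopen(PAGE).read().decode("utf-8", errors="ignore")
    checker = OnClickChecker()
    checker.feed(html)
    for tag, handler in checker.suspects:
        print(f"<{tag}> has an onclick handler and no crawlable href: {handler}")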

Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.


