Thursday, January 9, 2014

What are Robots, Spiders, and Crawlers?

Know about Robots, Spiders, and Crawlers!

You should already have a general understanding that a robot, spider, or crawler is a piece of software that is programmed to “crawl” from one web page to another based on the links on those pages. As this crawler makes its way around the Internet, it collects content (such as text and links) from web sites and saves it in a database that is indexed and ranked according to the search engine's algorithm.
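
For a concrete picture of what “collecting content” means, here is a minimal sketch in Python of the two things a crawler pulls out of each page: the visible text and the outgoing links. It uses only the standard library, the URL is just a placeholder, and real crawlers are far more elaborate.

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class PageParser(HTMLParser):
    """Naively collects what a crawler saves from one page:
    the outgoing links and the visible text."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":  # anchor tags carry the links to crawl next
            self.links += [v for k, v in attrs if k == "href" and v]

    def handle_data(self, data):
        if data.strip():  # skip whitespace-only runs between tags
            self.text.append(data.strip())

# Placeholder URL; substitute any page you're allowed to fetch.
page = urlopen("http://example.com/").read().decode("utf-8", errors="replace")
parser = PageParser()
parser.feed(page)
print(parser.links)  # what the crawler follows next
print(parser.text)   # what gets stored and indexed
```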

When a crawler is first released on the Web, it's usually seeded with a few web sites, and it begins on one of them. The first thing it does on that first site is take note of the links on the page. Then it “reads” the text and begins to follow the links it just collected. This network of links is called the crawl frontier; it's the territory that the crawler explores in a very systematic way.

The links in a crawl frontier will sometimes take the crawler to other pages on the same web site, and sometimes they will take it away from the site completely. The crawler follows links until it hits a dead end, then backtracks and repeats the process until every link on a page has been followed.
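
A toy version of that traversal might look like the sketch below. The frontier here is just a list used as a stack, so the crawler runs down one chain of links to a dead end and then backtracks; the `crawl` function, the `LinkParser` helper, and the `max_pages` cap are all illustrative simplifications, not how any particular search engine does it.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collects href values from the anchor tags on one page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href" and v]

def crawl(seed_urls, max_pages=50):
    # The frontier is the crawler's to-do list of discovered links.
    # Popping from the end follows one chain of links until it dead-ends,
    # then backtracks; a FIFO queue would crawl breadth-first instead.
    frontier = list(seed_urls)
    visited = set()
    while frontier and len(visited) < max_pages:
        url = frontier.pop()
        if url in visited:
            continue  # already read this page on an earlier pass
        visited.add(url)
        try:
            html = urlopen(url).read().decode("utf-8", errors="replace")
        except OSError:
            continue  # unreachable page: a dead end, so backtrack
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            frontier.append(urljoin(url, link))  # resolve relative links
    return visited
```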

As to what actually happens when a crawler begins reviewing a site, it's a little more complicated than simply saying that it “reads” the site. The crawler sends a request to the web server where the web site resides, asking for pages to be delivered to it in the same manner that your web browser requests the pages you view. The difference between what your browser sees and what the crawler sees is that the crawler views the pages as pure text. No graphics or other media files are rendered; it's all text, encoded in HTML. So to you it might look like gibberish.
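
You can see exactly what the crawler sees by issuing the same kind of request yourself. In this sketch the User-Agent string is made up; real crawlers identify themselves with names like "Googlebot".

```python
from urllib.request import Request, urlopen

# The crawler issues the same kind of HTTP GET your browser does;
# it just never renders the response. "ExampleBot" is illustrative,
# not a real crawler's name.
req = Request("http://example.com/", headers={"User-Agent": "ExampleBot/1.0"})
raw = urlopen(req).read().decode("utf-8", errors="replace")
print(raw[:300])  # raw HTML markup: the "gibberish" the crawler reads
```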

The crawler can request as many or as few pages as it's programmed to request at any given time. This can sometimes cause problems for web sites that aren't prepared to serve up dozens of pages of content at a time. The requests can overload the site and cause it to crash, or they can slow the site's responses so much that the crawler gives up and goes away.
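
This is why well-behaved crawlers throttle themselves. A minimal politeness sketch follows, with an assumed one-second pause between requests; real crawlers tune this per site (some sites suggest a Crawl-delay in their robots.txt file).

```python
import time
from urllib.request import urlopen

CRAWL_DELAY = 1.0  # seconds between requests; an assumed value

def polite_fetch(urls):
    """Fetch pages one at a time, pausing between requests so the
    server is never asked for dozens of pages at once."""
    pages = []
    for url in urls:
        pages.append(urlopen(url).read())
        time.sleep(CRAWL_DELAY)  # wait before the next request
    return pages
```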

If the crawler does go away, it will eventually return to try again, and it might try several times before it gives up entirely. But if the site never begins to cooperate with the crawler, it's penalized for the failures and its search engine ranking will fall.
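
That retry behavior is commonly implemented with a backoff: wait a little longer after each failure before giving up. Here is a sketch of the pattern; the attempt count and delays are illustrative, not what any particular crawler uses.

```python
import time
from urllib.request import urlopen

def fetch_with_retries(url, attempts=3):
    """Retry a failed request a few times before giving up, waiting
    longer after each failure (1s, then 2s, then 4s)."""
    for attempt in range(attempts):
        try:
            return urlopen(url, timeout=10).read()
        except OSError:  # covers connection errors and HTTP failures
            time.sleep(2 ** attempt)
    return None  # give up on this page for now; try again on a later crawl
```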

Source: Search Engine Optimization Bible, Wiley