The Web Robots Pages

Web Robots (also known as Web Wanderers, Crawlers, or Spiders) are programs that traverse the Web automatically. Search engines such as Google use them to index web content, spammers use them to scan for email addresses, and they have many other uses.

On this site you can learn more about web robots:

- About /robots.txt explains what /robots.txt is, and how to use it.
- The FAQ answers many frequently asked questions, such as "How do I stop robots visiting my site?" and "How can I get the best listing in search engines?"
- The Other Sites page links to external resources for robot writers and webmasters.
- The Robots Database has a list of robots.
- The /robots.txt checker can check your site's /robots.txt file and meta tags.
- The IP Lookup can help find out more about which robots are visiting you.

About robotstxt.org

History
The Web Robots Pages is an information resource dedicated to web robots. Initially hosted at WebCrawler in 1995, it moved to this dedicated site hosted by the independent robotstxt.org in 2000. It underwent a modernisation in 2007.

Advertising
At this time we do not offer advertising opportunities to new partners, nor are we interested in selling the domain.

Contact
To contact the administrators of this site regarding technical issues relating to the operation of this site only, please use the contact page.

Tools
- /robots.txt checker
- Robots Database
- IP lookup

Other Sites

Many people end up on this site because they have questions about specific search engine robots and search engines. For such questions, the best place is the relevant site's own help pages:

- Google Web Search Help Center
- Google Webmaster Help Center
- Yahoo!'s Web Crawler Help Pages

Extensions to the Robots Exclusion Protocol

Recently, three major search engines have collaborated to support extensions to the /robots.txt directives and related mechanisms. See the joint announcements on:

- Yahoo! Search Blog
- Google Webmaster Central Blog
- Microsoft Live Search Webmaster Team Blog
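As a concrete illustration of the /robots.txt mechanism described above, here is a small sketch using Python's standard urllib.robotparser module. The robot names ("BadBot", "GoodBot"), the example.com URLs, and the rules themselves are invented for the example; a real /robots.txt lives at the root of the site being crawled.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical /robots.txt: one named robot ("BadBot") is banned
# entirely, and all other robots are kept out of /private/.
robots_txt = """\
User-agent: BadBot
Disallow: /

User-agent: *
Disallow: /private/
"""

# Parse the rules; a crawler would normally fetch them with
# rp.set_url("http://example.com/robots.txt") followed by rp.read().
rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Check whether each robot may fetch a given URL.
print(rp.can_fetch("GoodBot", "http://example.com/index.html"))         # True
print(rp.can_fetch("GoodBot", "http://example.com/private/data.html"))  # False
print(rp.can_fetch("BadBot", "http://example.com/index.html"))          # False
```

Note that /robots.txt is purely advisory: well-behaved robots consult it before fetching pages, but nothing enforces it.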