Question
1.34 A robot (also known as a bot or spider or crawler) is a program that accesses web documents automatically rather than in direct response to a user input. For example, the Google search engine uses a program called googlebot to automatically crawl the World Wide Web and build its searchable index of Web pages. An indexing robot such as googlebot begins by reading some Web document, then reading documents linked to by the initial document, and recursively continuing this process on previously unread documents. Some informal standards have been developed to allow Web site administrators and document authors to request robots not to read certain documents.
(a) Read the first part of Section 4.1 of Appendix B of the HTML 4.01 Recommendation [W3C-HTML-4.01], and explain what you would do in order to request that robots not crawl the documents accessible from your Tomcat web server. (See http://www.robotstxt.org/wc/norobots.html for more information on the Robot Exclusion Standard.)
(b) For one or more Web sites as directed by your instructor, list for each the robots (if any) that are explicitly excluded from crawling one or more of the files at that site.
Explanation / Answer
There are several reasons to control which web robots may access your site. As much as you want Googlebot to visit your site, you do not want spam bots to come and collect private information from it, and every robot that crawls your site consumes the site's bandwidth as well. The answer below explains how you can control robot access through a simple robots.txt file.

What are web robots or web spiders?
Web robots (also known as bots, web spiders, web crawlers, or ants) are programs that traverse the World Wide Web in an automated manner. Search engines (like Google, Yahoo, etc.) use web crawlers to index web pages so that their results stay up to date.

Why use a robots.txt file?
Googlebot may be crawling your site to provide better search results, but at the same time other spam bots may be collecting personal information such as email addresses for spamming purposes. If you want to control which crawlers may access your site, you can do so with a robots.txt file, as described by the Robot Exclusion Standard.

How do I create a robots.txt file?
robots.txt is a plain text file; use any text editor to create it. It must be served from the root of the site, i.e., at the URL path /robots.txt. In a default Tomcat installation the root of the site is the ROOT web application, so for part (a) you would place the file at webapps/ROOT/robots.txt; robots that honor the standard read it before crawling any documents on your server.
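For example, the following minimal robots.txt asks every compliant robot to stay away from the entire site:

    User-agent: *
    Disallow: /

Each record names a robot in its User-agent field (* matches all robots) and lists path prefixes that robot should not fetch in Disallow fields. To exclude only a single robot from a single directory (the robot name BadBot and the path /private/ here are placeholders, not values taken from the question):

    User-agent: BadBot
    Disallow: /private/

HTML 4.01 Appendix B also describes a per-document alternative: an author who cannot edit the site's robots.txt can add <META name="ROBOTS" content="NOINDEX, NOFOLLOW"> to a document's HEAD to ask robots not to index that page or follow its links.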
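For part (b), you can inspect a site's exclusions by hand in a browser (request /robots.txt) or programmatically. The Python sketch below fetches a site's robots.txt using only the standard library and prints its User-agent and Disallow lines; any User-agent other than * that carries at least one Disallow rule is a robot explicitly excluded from part of that site. The URL https://example.com/robots.txt is a placeholder for whichever site your instructor assigns.

    import urllib.request

    # Placeholder target; substitute the site you were assigned.
    url = "https://example.com/robots.txt"
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8", errors="replace")

    # Print every User-agent and Disallow line from the file.
    for line in text.splitlines():
        field = line.split(":", 1)[0].strip().lower()
        if field in ("user-agent", "disallow"):
            print(line.strip())

If you only need to test whether a particular robot may fetch a particular URL, the standard library's urllib.robotparser.RobotFileParser does the parsing for you.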