Controlling where search engines go

I finally had a reason to create a robots.txt file. This file implements the Robots Exclusion Protocol, which tells search engine spiders to skip certain pages on a website. I noticed that Google was turning up a bunch of the meta-pages from the Sillysoft wiki, which was not what I wanted. There are several meta-pages for every real wiki page, so they were drowning out the real content. Telling search engines to ignore the undesirable pages focuses them on the pages that actually carry useful content.
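As a rough sketch, a robots.txt for this kind of setup might look like the one below. The path prefixes are hypothetical, since the actual URL layout of the Sillysoft wiki isn't shown here; the real file would list whatever prefixes the meta-pages share.

    # Illustrative robots.txt -- the Disallow prefixes below are assumptions,
    # not the wiki's real paths. Each Disallow blocks any URL starting with it.
    User-agent: *
    Disallow: /wiki/Special:
    Disallow: /wiki/Talk:

The file just has to sit at the root of the site (e.g. /robots.txt); well-behaved crawlers fetch it before indexing and skip anything matching a Disallow prefix.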
Written by dustin