What search engine/method will yield files in website subdirectories?

All of the search engines crawl subdirectories, provided those subdirectories are linked from somewhere and the pages (or the site) don't contain code telling search engines *not* to crawl them. Search engines are big money these days.

They'll search as exhaustively as their programmers can devise. (Heck, look at the number of PDFs that are searchable... image and audio recognition are going to start playing a big part soon, too. Soon there won't be much you *can't* find.)

ALL of them will, as long as the website doesn't restrict the search engine from looking in subdirectories with a restrictive robots.txt file. For example, I did a search on Google for chicagotribune.com, and the third result was archives.chicagotribune.com/2008/nov/11/health/chi-081111bishops. While it's not technically a subdirectory (most modern websites don't serve flat HTML files stored in a directory structure), it may be the type of result you are looking for.
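If you want to check whether a crawler is even allowed into a given subdirectory, Python's standard library can read a site's robots.txt for you. A minimal sketch; the domain and path below are only placeholders for whatever site you're checking:

```python
from urllib.robotparser import RobotFileParser

# Point the parser at the site's robots.txt (placeholder domain).
rp = RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()

# Ask whether a generic crawler ("*") may fetch a given subdirectory path.
path = "https://www.example.com/archives/2008/nov/11/"
print(rp.can_fetch("*", path))  # True if robots.txt doesn't block it
```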

I think you might have luck if you go to Google.com and type in your search query followed by site:websitetitlehere.com. So if you wanted to find all the pages on Mahalo related to Sarah Palin, you would get these results... google.ca/search?hl=en&q=sarah+palin+sit....
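If you'd rather build that kind of site-restricted query programmatically, it's just an ordinary search URL with the site: operator folded into the query string. A rough sketch using Python's standard library (the query and domain are only examples):

```python
from urllib.parse import urlencode

def site_search_url(query: str, site: str) -> str:
    """Build a Google search URL restricted to one site via the site: operator."""
    params = {"q": f"{query} site:{site}"}
    return "https://www.google.com/search?" + urlencode(params)

# Example: find pages about Sarah Palin on mahalo.com (domain is illustrative).
print(site_search_url("sarah palin", "mahalo.com"))
```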

You could use a sitemap generator like xml-sitemaps.com. Xml-sitemaps.com has options to download the sitemaps as HTML, XML, or plain-text lists of URLs.
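Once you have an XML sitemap (whether from xml-sitemaps.com or the site's own /sitemap.xml), pulling out the URLs is straightforward. A minimal sketch assuming a standard sitemaps.org-style file saved locally as sitemap.xml (the file name is an assumption):

```python
import xml.etree.ElementTree as ET

# Standard namespace used by sitemaps.org-compliant sitemap files.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

tree = ET.parse("sitemap.xml")
for loc in tree.getroot().findall("sm:url/sm:loc", NS):
    print(loc.text)  # one URL per <loc> entry, subdirectory pages included
```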

Or try sitemapdoc.com.

