This is a web crawling script written in Python.
Provide the following inputs to start the crawler:
1- Seed URL (e.g. http://edition.cnn.com/)
2- Maximum depth up to which you want to crawl.
3- Maximum number of pages to crawl.
The crawl stops as soon as it reaches either the maximum depth or the maximum number of pages provided by the user.
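A minimal sketch of how a depth- and page-limited breadth-first crawl can work is shown below. The function name crawl and its parameters are illustrative assumptions, not the script's actual API:

```python
from collections import deque
from urllib.parse import urljoin
from urllib.request import urlopen
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collects href values from anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_url, max_depth, max_pages):
    """Breadth-first crawl that stops at max_depth or max_pages, whichever comes first."""
    visited = set()
    queue = deque([(seed_url, 0)])   # (url, depth) pairs
    pages = {}                        # url -> raw HTML

    while queue and len(pages) < max_pages:
        url, depth = queue.popleft()
        if url in visited or depth > max_depth:
            continue
        visited.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="ignore")
        except Exception:
            continue                  # skip pages that fail to download or decode
        pages[url] = html
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            queue.append((urljoin(url, link), depth + 1))
    return pages
```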
Building the index takes some time at first, depending on the max depth and max pages values you provide.
Once the index is built, search queries are answered almost instantly.
Output: the crawl results list the URLs together with their ranks.
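To illustrate why queries become fast once the index exists, here is a hedged sketch of an inverted index with a simple term-frequency ranking. The names build_index and search, and the frequency-based scoring, are assumptions for illustration; the script's actual indexing and ranking scheme may differ:

```python
import re
from collections import defaultdict


def build_index(pages):
    """Map each word to the URLs containing it, with an occurrence count.

    `pages` is a dict of url -> raw HTML/text, e.g. the result of crawl() above.
    """
    index = defaultdict(lambda: defaultdict(int))   # word -> {url: count}
    for url, text in pages.items():
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word][url] += 1
    return index


def search(index, query):
    """Return URLs ranked by how often the query terms appear in them."""
    scores = defaultdict(int)
    for word in query.lower().split():
        for url, count in index.get(word, {}).items():
            scores[url] += count
    # Highest score first; the score acts as the page's rank for this query.
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)


# Example usage (assuming `pages` came from the crawl step):
# index = build_index(pages)
# for url, rank in search(index, "breaking news"):
#     print(rank, url)
```

Because the index is a precomputed word-to-URL mapping, each query only touches the entries for its own terms, which is why lookups feel instant once indexing has finished.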