WebCrawler

This is a web crawling script written in Python.

You have to provide the following inputs to start the crawler:
1- Seed URL (e.g. http://edition.cnn.com/)
2- Maximum depth up to which you want to crawl.
3- Maximum number of pages to crawl.
The crawling process stops when it reaches either the maximum depth or the maximum number of pages provided by the user.
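The stopping behavior described above can be sketched as a breadth-first crawl that tracks each page's depth and halts at either limit. This is a minimal illustration, not the script's actual code; the function and parameter names (`crawl`, `fetch_links`, `max_depth`, `max_pages`) are hypothetical, and link fetching is passed in as a callable so the logic is shown without network access.

```python
from collections import deque

def crawl(seed_url, max_depth, max_pages, fetch_links):
    """Breadth-first crawl starting from seed_url.

    fetch_links(url) should return the outgoing links on that page
    (hypothetical interface; the real script fetches pages itself).
    Crawling stops when max_pages have been visited or when no page
    within max_depth remains.
    """
    visited = []                      # pages crawled, in order
    queue = deque([(seed_url, 0)])    # (url, depth) frontier
    seen = {seed_url}                 # avoid re-enqueueing URLs
    while queue and len(visited) < max_pages:
        url, depth = queue.popleft()
        visited.append(url)
        if depth < max_depth:         # only expand within the depth limit
            for link in fetch_links(url):
                if link not in seen:
                    seen.add(link)
                    queue.append((link, depth + 1))
    return visited
```

For example, with a tiny fake link graph, `crawl("a", 1, 10, links)` visits the seed and its direct neighbors, while lowering `max_pages` to 2 cuts the crawl short after two pages.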

At first, it takes some time to build the index according to the max depth and max pages values provided by the user.

Once the index is created, it responds to search queries almost instantly.
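Near-instant query response is typically achieved with an inverted index: each term maps to the set of URLs containing it, so a query is answered by set intersection rather than rescanning pages. The sketch below assumes a `url -> text` mapping for the crawled pages; the names `build_index` and `search` are illustrative, not the script's actual API.

```python
from collections import defaultdict

def build_index(pages):
    """Map each term to the set of URLs whose text contains it.

    `pages` maps url -> page text (hypothetical shape; the real
    script's index format may differ).
    """
    index = defaultdict(set)
    for url, text in pages.items():
        for term in text.lower().split():
            index[term].add(url)
    return index

def search(index, query):
    """Return the URLs containing every term in the query."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        result &= index.get(term, set())  # intersect postings
    return result
```

Because each lookup is a dictionary access plus small set intersections, query time is independent of how many pages were crawled to build the index.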

Output: the crawling result lists the URLs along with their ranks.
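One simple way to produce ranked URLs, shown here only as an illustration, is to score each page by the summed frequency of the query terms it contains and sort descending. The `index_tf` shape (`term -> {url: count}`) and the function name are assumptions; the actual script may rank differently (e.g. by a link-based measure such as PageRank).

```python
def rank_results(index_tf, query):
    """Return (url, score) pairs sorted by descending score.

    index_tf maps term -> {url: occurrence count} (hypothetical
    structure; ties are broken alphabetically by URL).
    """
    scores = {}
    for term in query.lower().split():
        for url, count in index_tf.get(term, {}).items():
            scores[url] = scores.get(url, 0) + count
    return sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))
```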
