feat: add respect_robots_txt_file option #1162
Conversation
Pull Request Overview
This PR introduces a new boolean flag, respect_robots_txt_file, to automatically skip crawling disallowed URLs based on a site's robots.txt rules. Key changes include the addition of tests for robots.txt handling across multiple crawler implementations, integration of robots.txt checking in the crawling pipeline, and the implementation of a RobotsTxtFile utility.
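A minimal usage sketch of the new option, assembled from the description above (the specific crawler class, import path, and start URL are illustrative assumptions; any crawler that accepts the flag is expected to behave the same way):

```python
import asyncio

from crawlee.crawlers import BeautifulSoupCrawler, BeautifulSoupCrawlingContext


async def main() -> None:
    # With the flag enabled, URLs disallowed by the target site's robots.txt
    # are skipped instead of being enqueued and crawled.
    crawler = BeautifulSoupCrawler(respect_robots_txt_file=True)

    @crawler.router.default_handler
    async def handler(context: BeautifulSoupCrawlingContext) -> None:
        context.log.info(f'Processing {context.request.url} ...')

    await crawler.run(['https://crawlee.dev/'])


if __name__ == '__main__':
    asyncio.run(main())
```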
Reviewed Changes
Copilot reviewed 12 out of 12 changed files in this pull request and generated 3 comments.
File | Description |
---|---|
tests/unit/server_endpoints.py | Added a static ROBOTS_TXT response to simulate a robots.txt file. |
tests/unit/server.py | Introduced a new endpoint to serve robots.txt and updated routing logic. |
tests/unit/crawlers/_playwright/test_playwright_crawler.py | Added tests verifying that the PlaywrightCrawler correctly respects robots.txt. |
tests/unit/crawlers/_parsel/test_parsel_crawler.py | Introduced tests for the ParselCrawler to validate robots.txt respect. |
tests/unit/crawlers/_beautifulsoup/test_beautifulsoup_crawler.py | Added tests to ensure BeautifulSoupCrawler adheres to robots.txt rules. |
tests/unit/_utils/test_robots.py | New tests for generating, parsing, and validating robots.txt file behavior. |
src/crawlee/crawlers/_playwright/_playwright_crawler.py | Integrated robots.txt enforcement in the link extraction logic. |
src/crawlee/crawlers/_basic/_basic_crawler.py | Updated request adding and session handling to respect robots.txt directives. |
src/crawlee/crawlers/_abstract_http/_abstract_http_crawler.py | Added robots.txt checking in link extraction for HTTP-based crawling. |
src/crawlee/_utils/robots.py | Implemented the RobotsTxtFile class for parsing and handling robots.txt data. |
pyproject.toml | Added dependency for protego to support robots.txt parsing. |
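For the first two table rows, a minimal illustration of the test-fixture idea (not the repository's actual test server, which has its own routing helpers): serve a static robots.txt body so the crawlers under test have something deterministic to fetch.

```python
# Hypothetical static fixture, analogous in spirit to the ROBOTS_TXT response
# added to tests/unit/server_endpoints.py (the real content may differ).
ROBOTS_TXT = b"""\
User-agent: *
Disallow: /private/
Allow: /
"""


async def robots_txt_endpoint(scope, receive, send) -> None:
    """Minimal ASGI handler that returns the static robots.txt body."""
    assert scope['type'] == 'http'
    await send({
        'type': 'http.response.start',
        'status': 200,
        'headers': [(b'content-type', b'text/plain')],
    })
    await send({'type': 'http.response.body', 'body': ROBOTS_TXT})
```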
Co-authored-by: Copilot <[email protected]>
pyproject.toml

```diff
@@ -40,6 +40,7 @@ dependencies = [
     "eval-type-backport>=0.2.0",
     "httpx[brotli,http2,zstd]>=0.27.0",
     "more-itertools>=10.2.0",
+    "protego>=0.4.0",
```
It's fun to see another Scrapy project here, but I guess it guarantees some stability, so... all good.
Yes, I was planning to use RobotFileParser, but it doesn't support Google's specification. 😞
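For illustration, a small standalone example of the difference: Protego implements Google's robots.txt specification, including wildcard and end-of-string anchors in path rules, which urllib's RobotFileParser treats as literal path prefixes.

```python
from protego import Protego

# The * wildcard and $ anchor come from Google's robots.txt spec.
robots = Protego.parse(
    'User-agent: *\n'
    'Disallow: /*.pdf$\n'
    'Allow: /\n'
)

print(robots.can_fetch('https://example.com/report.pdf', 'my-crawler'))   # False
print(robots.can_fetch('https://example.com/report.html', 'my-crawler'))  # True
```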
src/crawlee/_utils/robots.py (outdated)

```python
        self._robots = robots
        self._original_url = URL(url).origin()

    @staticmethod
```
I'd prefer using @classmethod and the Self return type annotation.
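A minimal sketch of the suggested pattern, starting from the snippet above (the constructor name from_content is a hypothetical example, yarl's URL is assumed for the origin() call, and on Pythons older than 3.11 Self comes from typing_extensions):

```python
from __future__ import annotations

from protego import Protego
from typing_extensions import Self
from yarl import URL


class RobotsTxtFile:
    def __init__(self, url: str, robots: Protego) -> None:
        self._robots = robots
        self._original_url = URL(url).origin()

    @classmethod
    def from_content(cls, url: str, content: str) -> Self:  # hypothetical constructor
        # Returning Self instead of a hard-coded class name keeps the annotation
        # correct for subclasses, which a @staticmethod cannot express as cleanly.
        return cls(url, Protego.parse(content))
```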
Co-authored-by: Jan Buchar <[email protected]>
Nice! I have a few small comments... Also, could you please write a new guide/example for this feature?
Pull Request Overview
This pull request adds support for automatically skipping requests disallowed by robots.txt files. Key changes include:
- Introducing a new boolean option (respect_robots_txt_file) across multiple crawler implementations.
- Adding caching and locking in the BasicCrawler to optimize fetching of robots.txt files (see the sketch after this list).
- Adding new tests and examples for verifying correct respect of robots.txt rules.
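A hedged sketch of the caching-and-locking idea from the second bullet above (class and attribute names are illustrative, not the actual BasicCrawler internals): each origin's robots.txt is fetched at most once, and an asyncio lock prevents concurrent requests to the same origin from triggering duplicate downloads.

```python
import asyncio
from collections.abc import Awaitable, Callable


class RobotsTxtCache:
    """Illustrative per-origin cache guarded by an async lock; not the PR's actual code."""

    def __init__(self, fetch: Callable[[str], Awaitable[object]]) -> None:
        self._fetch = fetch  # hypothetical coroutine that downloads and parses robots.txt
        self._cache: dict[str, object] = {}
        self._lock = asyncio.Lock()

    async def get(self, origin: str) -> object:
        # Fast path: already cached, no locking needed.
        if origin in self._cache:
            return self._cache[origin]
        async with self._lock:
            # Re-check inside the lock: another task may have populated the
            # cache while this one was waiting to acquire it.
            if origin not in self._cache:
                self._cache[origin] = await self._fetch(origin)
            return self._cache[origin]
```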
Reviewed Changes
Copilot reviewed 15 out of 16 changed files in this pull request and generated 1 comment.
File | Description |
---|---|
tests/unit/server_endpoints.py | Added a ROBOTS_TXT constant as binary content for testing robots.txt responses |
tests/unit/server.py | Added an endpoint and handler function for serving robots.txt |
tests/unit/crawlers/_playwright/test_playwright_crawler.py | Added test to verify the crawling respects robots.txt rules in the PlaywrightCrawler |
tests/unit/crawlers/_parsel/test_parsel_crawler.py | Added test to verify the crawling respects robots.txt rules in the ParselCrawler |
tests/unit/crawlers/_beautifulsoup/test_beautifulsoup_crawler.py | Added test to verify the crawling respects robots.txt rules in the BeautifulSoupCrawler |
tests/unit/crawlers/_basic/test_basic_crawler.py | Added a test to ensure the robots.txt fetching lock is acquired only once |
tests/unit/_utils/test_robots.py | Introduced tests for the RobotsTxtFile class functionality |
src/crawlee/storage_clients/_memory/_request_queue_client.py | Removed extraneous type ignore comment from sortedcollections import |
src/crawlee/crawlers/_playwright/_playwright_crawler.py | Integrated robots.txt check into the link extraction logic |
src/crawlee/crawlers/_basic/_basic_crawler.py | Extended BasicCrawler with respect_robots_txt_file support and caching/locking mechanisms |
src/crawlee/crawlers/_abstract_http/_abstract_http_crawler.py | Integrated robots.txt check in the abstract HTTP crawler’s link extraction method |
src/crawlee/_utils/robots.py | Added a new RobotsTxtFile class that leverages Protego for parsing and evaluating rules |
pyproject.toml | Updated dependencies to include protego and sortedcollections |
docs/examples/code_examples/respect_robots_txt_file.py | Provided an example demonstrating the usage of respect_robots_txt_file option |
Files not reviewed (1)
- docs/examples/respect_robots_txt_file.mdx: Language not supported
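Several rows above mention integrating the robots.txt check into link extraction. An illustrative sketch of that filtering step (the helper and its parameters are hypothetical; the real crawlers perform the check inside their link-extraction and enqueue logic using the RobotsTxtFile wrapper):

```python
from collections.abc import Iterable

from protego import Protego


def filter_disallowed_links(links: Iterable[str], robots: Protego, user_agent: str = '*') -> list[str]:
    """Keep only the extracted links that the parsed robots.txt allows (hypothetical helper)."""
    return [url for url in links if robots.can_fetch(url, user_agent)]


# Example: drop links under a disallowed path before enqueuing them.
robots = Protego.parse('User-agent: *\nDisallow: /private/\n')
links = ['https://example.com/', 'https://example.com/private/page']
print(filter_disallowed_links(links, robots))  # ['https://example.com/']
```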
Force-pushed from 48a93b1 to 41b803d.
Nice! LGTM
Description
- Add the ability to skip requests disallowed by a site's robots.txt file via the new respect_robots_txt_file option.

Issues
- Implement respectRobotsTxtFile crawler option #1144

Testing
- Added tests verifying that respect_robots_txt_file works in EnqueueLinksFunction for the crawlers.
- Added tests for RobotsTxtFile.