How to economize on RAM when starting a crawl with a large list of URLs? #816
Labels: t-tooling
A very long list of starting URLs consumes a significant amount of RAM throughout the crawler's runtime. I tried converting the `get_urls()` function into a generator, but the `crawler.run()` method did not accept it. What is the recommended approach?
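One pattern that should keep memory roughly flat is to read the URLs lazily and enqueue them in fixed-size batches, letting the request queue (which Crawlee persists to storage rather than holding in RAM) absorb the backlog. Below is a minimal sketch, assuming a Crawlee for Python setup with `BeautifulSoupCrawler` and the `BasicCrawler.add_requests()` coroutine; the file name `urls.txt`, the batch size, and the exact import paths are illustrative and may differ across Crawlee versions:

```python
import asyncio
from itertools import islice
from typing import Iterator

# Import paths vary between Crawlee for Python releases; adjust as needed.
from crawlee.crawlers import BeautifulSoupCrawler, BeautifulSoupCrawlingContext

BATCH_SIZE = 1000  # illustrative; tune to your workload


def iter_urls(path: str) -> Iterator[str]:
    """Yield URLs one per line without loading the whole file into memory."""
    with open(path) as f:
        for line in f:
            url = line.strip()
            if url:
                yield url


async def main() -> None:
    crawler = BeautifulSoupCrawler()

    @crawler.router.default_handler
    async def handler(context: BeautifulSoupCrawlingContext) -> None:
        context.log.info(f'Processing {context.request.url}')

    urls = iter_urls('urls.txt')  # hypothetical input file

    # Enqueue fixed-size slices of the generator so only one batch of URLs
    # is materialized in Python memory at a time; the request queue holds
    # the rest outside the process heap.
    while batch := list(islice(urls, BATCH_SIZE)):
        await crawler.add_requests(batch)

    # Run against whatever is already in the default request queue.
    await crawler.run()


if __name__ == '__main__':
    asyncio.run(main())
```

If your Crawlee version ships a streaming request loader (e.g. a `RequestList` in `crawlee.request_loaders` that accepts an iterable or async iterable), wiring the generator into that instead of pre-batching would avoid even the per-batch buffering; check the request loader docs for your release.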