Does Crawlee support per-domain concurrency?
In this example, the first domain (paklap.pk) can't handle much load, but the second domain (centurycomputerpk.com) can.
Does Crawlee allow setting concurrency limits per domain, or is concurrency managed globally?
In Scrapy, this is possible through the download_slot mechanism. I’m wondering if there’s an equivalent in Crawlee.
from crawlee import ConcurrencySettings, Request
from crawlee.crawlers import ParselCrawler

# This limit is global: it applies to every request, regardless of domain.
crawler = ParselCrawler(
    concurrency_settings=ConcurrencySettings(desired_concurrency=1),
)
await crawler.run([Request.from_url("https://www.paklap.pk/laptops-prices.html", label="paklap_listing")])
await crawler.run([Request.from_url("https://centurycomputerpk.com/product-category/laptops", label="century_listing")])
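For reference, the behavior I'm looking for is what I'd write by hand with one asyncio.Semaphore per domain. This is a minimal plain-asyncio sketch, not Crawlee API; the domain limits and the fetch helper are my own illustration:

```python
import asyncio
from urllib.parse import urlparse

# Hypothetical per-domain limits: paklap.pk can't take much load,
# centurycomputerpk.com can.
DOMAIN_LIMITS = {"www.paklap.pk": 1, "centurycomputerpk.com": 5}

semaphores: dict[str, asyncio.Semaphore] = {}
active: dict[str, int] = {}  # requests currently in flight per domain
peak: dict[str, int] = {}    # highest concurrency observed per domain

async def fetch(url: str) -> None:
    domain = urlparse(url).netloc
    sem = semaphores.setdefault(
        domain, asyncio.Semaphore(DOMAIN_LIMITS.get(domain, 2))
    )
    async with sem:  # at most DOMAIN_LIMITS[domain] holders at once
        active[domain] = active.get(domain, 0) + 1
        peak[domain] = max(peak.get(domain, 0), active[domain])
        await asyncio.sleep(0.01)  # stand-in for the actual HTTP request
        active[domain] -= 1

async def main() -> None:
    urls = (
        ["https://www.paklap.pk/laptops-prices.html"] * 3
        + ["https://centurycomputerpk.com/product-category/laptops"] * 3
    )
    await asyncio.gather(*(fetch(u) for u in urls))

asyncio.run(main())
```

In Scrapy terms, each semaphore plays the role of a download slot: requests to paklap.pk queue up behind a limit of 1 while centurycomputerpk.com requests run in parallel.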