Rewrite the [CrawlSpider](https://www.kancloud.cn/king_om/py_1/2229599) as a distributed spider.

The steps are as follows:

**1. Create a regular CrawlSpider first**

```
# scrapy genspider -t crawl <spider name> <domain>
> scrapy genspider -t crawl ct_liks www.wxapp-union.com
```

**2. Convert the regular CrawlSpider into a distributed one**

```python
"""
ct_liks.py  (the spider file generated in step 1)
"""
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
# 1. Import RedisCrawlSpider
from scrapy_redis.spiders import RedisCrawlSpider


# 2. Inherit from RedisCrawlSpider instead of CrawlSpider
# class CtLiksSpider(CrawlSpider):
class CtLiksSpider(RedisCrawlSpider):
    name = 'ct_liks'
    # 3. Comment out allowed_domains and start_urls
    # allowed_domains = ['www.wxapp-union.com']
    # start_urls = ['http://www.wxapp-union.com/']
    # 4. Add redis_key
    redis_key = "ct_start_url"

    # Note the trailing comma: rules must be an iterable of Rule objects
    rules = (
        Rule(LinkExtractor(allow=r'www\.wxapp-union\.com/article-\d+-1\.html'), callback='parse_item'),
    )

    # 5. Define allowed_domains in __init__
    def __init__(self, *args, **kwargs):
        domain = kwargs.pop('domain', '')
        # Multiple allowed domains are separated by commas
        self.allowed_domains = list(filter(None, domain.split(',')))
        super(CtLiksSpider, self).__init__(*args, **kwargs)

    def parse_item(self, response):
        title = response.xpath("//title").extract_first()
        print(title)
```

**3. Add the distributed settings to `settings.py`**

```python
###### Add the following settings #########
# Use the scrapy-redis duplicate filter
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

# Use the scrapy-redis scheduler, which reads and writes requests in Redis
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Whether to keep the dedup set and request queue in Redis after the crawl ends
# True: keep them
# False: clear them from Redis when the crawl finishes
SCHEDULER_PERSIST = True

#SCHEDULER_QUEUE_CLASS = "scrapy_redis.queue.SpiderPriorityQueue"
#SCHEDULER_QUEUE_CLASS = "scrapy_redis.queue.SpiderQueue"
#SCHEDULER_QUEUE_CLASS = "scrapy_redis.queue.SpiderStack"

ITEM_PIPELINES = {
    # Pipeline from the scrapy-redis example project; replace with your own project's pipeline path
    'example.pipelines.ExamplePipeline': 300,
    # When enabled, this pipeline automatically stores scraped items in Redis
    'scrapy_redis.pipelines.RedisPipeline': 400,
}

# Redis connection
REDIS_URL = "redis://localhost:6379"
# Or configure it like this instead:
# REDIS_HOST = 'localhost'
# REDIS_PORT = 6379

LOG_LEVEL = 'DEBUG'

# Introduce an artificial delay so that multiple crawler processes can share the work
DOWNLOAD_DELAY = 1
```

**4. Start the spider**

```shell
# pass the allowed domains with -a, separated by commas
> scrapy runspider ct_liks.py -a domain='www.baidu.com,taobao.com'
```

**5. Push the `start_urls` into the Redis database**

```shell
# run in redis-cli
> lpush ct_start_url http://www.wxapp-union.com/
```

Once the spider reads a start URL from the list named by `redis_key`, it begins crawling.
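As an alternative to redis-cli, the start URL can also be seeded from Python with the redis-py client. Below is a minimal sketch, assuming the same Redis instance as `REDIS_URL` above (localhost:6379) and the `ct_start_url` key defined in the spider; the script name `seed_start_url.py` is only an illustration:

```python
"""seed_start_url.py - hypothetical helper script, not part of the original project."""
import redis

# Connect to the Redis instance configured via REDIS_URL in settings.py
# (localhost:6379 is an assumption; adjust it to your deployment).
client = redis.Redis(host="localhost", port=6379)

# Push the start URL onto the list that the spider's redis_key points at.
client.lpush("ct_start_url", "http://www.wxapp-union.com/")

print("URLs queued:", client.llen("ct_start_url"))
```

With `RedisPipeline` enabled, scraped items are serialized and pushed to a Redis list as well (by default named `<spider>:items`, i.e. `ct_liks:items` here), so the crawl results can be inspected from redis-cli or redis-py in the same way.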