Saving scraped data to MongoDB with a custom Scrapy pipeline class

This article walks through a custom Scrapy pipeline class that saves scraped items to MongoDB, shared here for reference. The details are as follows:

# Standard Python library imports
# 3rd party modules
import pymongo
from scrapy.exceptions import DropItem

class MongoDBPipeline(object):
  def __init__(self, server, port, db, col):
    self.server = server
    self.port = port
    self.db = db
    self.col = col

  @classmethod
  def from_crawler(cls, crawler):
    # Pull the MongoDB connection settings from the project's settings.py
    settings = crawler.settings
    return cls(settings['MONGODB_SERVER'], settings['MONGODB_PORT'],
               settings['MONGODB_DB'], settings['MONGODB_COLLECTION'])

  def open_spider(self, spider):
    # Open a single connection when the spider starts
    self.client = pymongo.MongoClient(self.server, self.port)
    self.collection = self.client[self.db][self.col]

  def close_spider(self, spider):
    self.client.close()

  def process_item(self, item, spider):
    # Reject the item if any of its fields is empty
    err_msg = ''
    for field, data in item.items():
      if not data:
        err_msg += 'Missing %s of poem from %s\n' % (field, item['url'])
    if err_msg:
      raise DropItem(err_msg)
    # Store the item and log the write at debug level
    self.collection.insert_one(dict(item))
    spider.logger.debug('Item written to MongoDB database %s/%s'
                        % (self.db, self.col))
    return item
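
For the pipeline to actually run, it must also be enabled in the project's settings.py, together with the MongoDB connection values it reads. A minimal sketch, assuming a MongoDB instance on the default local port and a hypothetical project package named myproject (adjust the module path and the database/collection names to your own project):

# settings.py -- hypothetical values for illustration
ITEM_PIPELINES = {
    'myproject.pipelines.MongoDBPipeline': 300,
}
MONGODB_SERVER = 'localhost'  # assumes MongoDB runs on the local machine
MONGODB_PORT = 27017          # MongoDB's default port
MONGODB_DB = 'scrapy_db'      # hypothetical database name
MONGODB_COLLECTION = 'poems'  # hypothetical collection name

The value 300 is the pipeline's order; Scrapy runs all enabled item pipelines in ascending order of this number, so it only matters relative to any other pipelines in the project.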

Hopefully this article is of some help to readers working on Python programming.
