How to crawl resources with Scrapy (you know what kind)

Tags: scrapy, resources | Posted: 2016-09-16 03:50 | Author: current_person
Source: http://blog.csdn.net

Preface: ever tried to watch a video on one of those sites and found ads everywhere and playback painfully slow? Don't worry; once you finish this article you'll be an old hand.

1. Install Scrapy

sudo apt-get install python-scrapy
Note:

The official Scrapy documentation advises against installing this way, but in my experiments none of the methods described on the official site worked for me. The Ubuntu installation notes are here:

http://doc.scrapy.org/en/latest/intro/install.html#ubuntu-9-10-or-above
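
For reference, what those docs recommend instead is installing via pip (ideally inside a virtualenv) after pulling in the build dependencies. On Ubuntu that looks roughly like the following; package names may differ slightly between releases:

sudo apt-get install python-dev python-pip libxml2-dev libxslt1-dev zlib1g-dev libffi-dev libssl-dev
sudo pip install scrapy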


2. Create the project

sudo scrapy startproject Mp4
After this command runs, the directory contains the initial project skeleton (sketched below).
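
Depending on the Scrapy version, the generated layout looks roughly like this; note that middlewares.py used further down is not part of older templates and may need to be created by hand:

Mp4/
    scrapy.cfg          # deploy/config entry point
    Mp4/
        __init__.py
        items.py        # item definitions
        pipelines.py    # item pipelines
        settings.py     # project settings
        spiders/
            __init__.py # spiders live here (Mp4CrawlSpider.py goes in this folder)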

3. Talk is cheap. Show me the code.

items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class Mp4Item(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    mp4name = scrapy.Field()
    mp4url = scrapy.Field()
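
A scrapy.Item behaves like a dict with a fixed set of allowed keys, which is all the spider below relies on. A quick illustrative check (not part of the project code; the values are made up):

from Mp4.items import Mp4Item

item = Mp4Item()
item['mp4name'] = 'some clip'
item['mp4url'] = 'http://example.com/a.mp4'
print dict(item)  # both fields are stored just like in a plain dict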

middlewares.py

# -*- coding: utf-8 -*-

import random
from scrapy.contrib.downloadermiddleware.useragent import UserAgentMiddleware


class RotateUserAgentMiddleware(UserAgentMiddleware):
    """Attach a randomly chosen User-Agent header to every outgoing request."""

    def __init__(self, user_agent=''):
        self.user_agent = user_agent

    def process_request(self, request, spider):
        # Pick one UA string per request so successive requests don't all look identical
        ua = random.choice(self.user_agent_list)
        if ua:
            # print 'Current UserAgent: ' + ua
            request.headers.setdefault('User-Agent', ua)

    # Pool of desktop Chrome User-Agent strings to rotate through
    user_agent_list = [
                "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 "  
        "(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",  
        "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 "  
        "(KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",  
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 "  
        "(KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",  
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 "  
        "(KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",  
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 "  
        "(KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",  
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 "  
        "(KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",  
        "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 "  
        "(KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",  
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "  
        "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",  
        "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 "  
        "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",  
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 "  
        "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",  
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "  
        "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",  
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "  
        "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",  
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "  
        "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",  
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "  
        "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",  
        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 "  
        "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",  
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "  
        "(KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",  
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 "  
        "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",  
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 "  
        "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
    ]   
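
Since the middleware only mutates request headers, it can be sanity-checked without running a crawl. An illustrative snippet (assumes the Mp4 package is importable, e.g. run from the project root); in a real crawl it is wired in through DOWNLOADER_MIDDLEWARES in settings.py below:

from scrapy.http import Request
from Mp4.middlewares import RotateUserAgentMiddleware

mw = RotateUserAgentMiddleware()
req = Request('http://example.com')
mw.process_request(req, spider=None)
print req.headers['User-Agent']  # one of the strings from user_agent_list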
   
pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

import json
import codecs

class Mp4Pipeline(object):
    """Append every scraped item to mp4.json, one JSON object per line."""

    def __init__(self):
        self.file = codecs.open('mp4.json', mode='w', encoding='utf-8')

    def process_item(self, item, spider):
        # ensure_ascii=False keeps non-ASCII text readable instead of \uXXXX escapes
        line = json.dumps(dict(item), ensure_ascii=False) + '\n'
        self.file.write(line)
        return item
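
Not in the original code, but worth adding: Scrapy calls close_spider() on each pipeline when the crawl finishes, which is the natural place to release the file handle. A minimal sketch of the extra method to paste into the class above:

    def close_spider(self, spider):
        # Called once when the spider finishes; flush and close mp4.json
        self.file.close()
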
settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for Mp4 project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'Mp4'

SPIDER_MODULES = ['Mp4.spiders']
NEWSPIDER_MODULE = 'Mp4.spiders'
# Don't send or store cookies; one less way for the site to track the crawler
COOKIES_ENABLED = False

ITEM_PIPELINES = {
    'Mp4.pipelines.Mp4Pipeline': 300
}

DOWNLOADER_MIDDLEWARES = {
    # Disable the built-in UserAgentMiddleware and use the rotating one instead
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None,
    'Mp4.middlewares.RotateUserAgentMiddleware': 400
}

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'Mp4 (+http://www.yourdomain.com)'

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS=32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY=3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN=16
#CONCURRENT_REQUESTS_PER_IP=16

# Disable cookies (enabled by default)
#COOKIES_ENABLED=False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED=False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'Mp4.middlewares.MyCustomSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'Mp4.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'Mp4.pipelines.SomePipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
# NOTE: AutoThrottle will honour the standard settings for concurrency and delay
#AUTOTHROTTLE_ENABLED=True
# The initial download delay
#AUTOTHROTTLE_START_DELAY=5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY=60
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG=False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED=True
#HTTPCACHE_EXPIRATION_SECS=0
#HTTPCACHE_DIR='httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES=[]
#HTTPCACHE_STORAGE='scrapy.extensions.httpcache.FilesystemCacheStorage'

Mp4CrawlSpider.py (placed under Mp4/spiders/)

# -*- coding: utf-8 -*-
import urllib

import scrapy
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.linkextractors.lxmlhtml import LxmlLinkExtractor
from Mp4.items import Mp4Item

class ExampleSpider(CrawlSpider):
    name = "Mp4CrawlSpider"
    download_delay = 2
    allowed_domains = ["huangsewangzhan.com"]
    start_urls = [
        'http://huangsewangzhan.com/vodlist/?5.html',
        'http://huangsewangzhan.com/vodlist/?6.html',
        'http://huangsewangzhan.com/vodlist/?7.html',
        'http://huangsewangzhan.com/vodlist/?8.html',
        'http://huangsewangzhan.com/vodlist/?9.html'
    ]
    def process_value(value):
        # Debug hook: log every candidate link the extractor finds before following it
        print 'value is ' + value
        return value

    # Follow the pagination links inside div.page and hand each listing page to parse_item
    rules = [
        Rule(LxmlLinkExtractor(allow=('/vodlist/',),
                               restrict_xpaths=('//div[@class="page"]',),
                               process_value=process_value),
             callback='parse_item', follow=True)
    ]

    def parse_item(self, response):
        # Build an item for the listing page; it is assembled but not yielded in
        # this version, so only the detail pages below are actually followed
        item = Mp4Item()
        mp4url = response.url
        print 'mp4url is ' + mp4url
        item['mp4name'] = 'mp4name'  # placeholder; the listing page has no usable title
        item['mp4url'] = mp4url
        #yield item

        # Every thumbnail links to a detail page that embeds the actual video
        for href in response.xpath('//div[@class="thumb"]/a/@href'):
            url = response.urljoin(href.extract())
            yield scrapy.Request(url, callback=self.parse_dir_contents)

    def parse_dir_contents(self, response):
        # The player markup is injected by JavaScript, so XPath finds nothing here;
        # instead scan the raw page source for quoted .mp4 URLs
        body = response.body
        for url in body.split("'"):
            if url.startswith("http") and url.endswith(".mp4"):
                print "real url is " + url
                # Save the file under its original name in the working directory
                local = url.split('/')[-1]
                urllib.urlretrieve(url, local)
Honestly, the earlier files do little real work here and the code is not particularly tidy or modular; this spider class carries the main workload.

The part of the HTML that holds the video address is generated by JavaScript, so XPath cannot reach it; instead I parse the raw page source as a string (a regex-based sketch of the same idea follows).
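
For reference, the quote-splitting in parse_dir_contents can also be expressed as a regular expression. This is an illustrative sketch, not the code the crawl above uses; the exact pattern depends on how the page embeds the URL:

import re

# Any http(s) URL ending in .mp4, stopping at quotes or whitespace
MP4_RE = re.compile(r"https?://[^'\"\s]+\.mp4")

def extract_mp4_urls(body):
    # Scan the raw page source and return every embedded .mp4 link
    return MP4_RE.findall(body)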

Hard work pays off: start the crawl and the videos come down one after another.
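
The crawl is started with the spider's name attribute, from the directory that contains scrapy.cfg (the project root created by startproject):

scrapy crawl Mp4CrawlSpider

The downloaded .mp4 files land in the working directory under their original file names; mp4.json would also be written if the commented-out yield item in parse_item were enabled.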

