# 4.2 Using Beautiful Soup
## 1. Overview
Beautiful Soup is an HTML/XML parsing library for Python that makes it easy to extract data from web pages.
## 2. Parsers
Parsers supported by Beautiful Soup, with their advantages and disadvantages:
| Parser | Usage | Advantages | Disadvantages |
| :--- | :--- | :--- | :--- |
| Python standard library | BeautifulSoup(markup, "html.parser") | Built into Python; moderate speed; tolerant of malformed documents | Poor tolerance of malformed documents in Python versions before 2.7.3 / 3.2.2 |
| lxml HTML parser | BeautifulSoup(markup, "lxml") | Very fast; tolerant of malformed documents | Requires the lxml C library |
| lxml XML parser | BeautifulSoup(markup, "xml") | Very fast; the only parser that supports XML | Requires the lxml C library |
| html5lib | BeautifulSoup(markup, "html5lib") | Best fault tolerance; parses documents the way a browser does; produces valid HTML5 | Very slow; external Python dependency |
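To make the table concrete, here is a minimal sketch (the markup string is just an illustration) showing how the second argument to BeautifulSoup selects the parser; different parsers may repair the same broken fragment differently:
```python
from bs4 import BeautifulSoup

markup = "<p>Hello<li>World"  # deliberately malformed fragment

# The second argument names the parser. html.parser needs no extra
# install; "lxml" requires the lxml package to be installed.
print(BeautifulSoup(markup, "html.parser"))
print(BeautifulSoup(markup, "lxml"))
```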
## 3. Basic usage
Example:
```python
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.prettify())
print(soup.title.string)
```
Output (note that the html string above is incomplete, missing the closing body and html tags; the lxml parser automatically completes them while parsing):
```text
<html>
<head>
<title>
The Dormouse's story
</title>
</head>
<body>
<p class="title" name="dromouse">
<b>
The Dormouse's story
</b>
</p>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">
<!-- Elsie -->
</a>
,
<a class="sister" href="http://example.com/lacie" id="link2">
Lacie
</a>
and
<a class="sister" href="http://example.com/tillie" id="link3">
Tillie
</a>
;
and they lived at the bottom of a well.
</p>
<p class="story">
...
</p>
</body>
</html>
The Dormouse's story
```
* prettify(): outputs the parsed document with standard indentation
* soup.title.string: selects the title node, then reads its text via the string attribute
## 4. Node selectors
### Selecting elements
Example:
```python
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.title)
print(type(soup.title))
print(soup.title.string)
print(soup.head)
# Only the first p node can be selected this way
print(soup.p)
```
Output:
```text
<title>The Dormouse's story</title>
<class 'bs4.element.Tag'>
The Dormouse's story
<head><title>The Dormouse's story</title></head>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
```
### Extracting information
#### Getting the name
The name attribute returns the name of a node.
Example: select the title node and get its name via the name attribute:
```python
print(soup.title.name)
```
Output:
```text
title
```
#### Getting attributes
Call attrs to get all of a node's attributes:
```python
# attrs returns a dictionary
print(soup.p.attrs)
print(soup.p.attrs['name'])
```
Output:
```text
{'class': ['title'], 'name': 'dromouse'}
dromouse
```
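A tag also supports dictionary-style access as a shorthand for attrs; the get() method behaves the same but returns None rather than raising an error when the attribute is missing. Continuing with the soup object from above:
```python
print(soup.p['name'])      # shorthand for soup.p.attrs['name']
print(soup.p.get('name'))  # same, but returns None if the attribute is absent
```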
#### Getting the text content
The string attribute returns the text contained in a node:
```python
print(soup.p.string)
```
Output:
```text
The Dormouse's story
```
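One caveat: string only yields a result when a node's content is unambiguous. A minimal sketch, continuing with the same soup (find_all() is introduced in section 5 and is used here only to reach the second p node):
```python
# The first p has a single child (<b>), so .string descends into it.
# The second p mixes text with <a> tags, so .string is ambiguous
# and Beautiful Soup returns None.
print(soup.find_all('p')[1].string)  # None
```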
### Nested selection
Example: get the title node inside the head node:
```python
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.head.title)
print(type(soup.head.title))
print(soup.head.title.string)
```
Output:
```text
<title>The Dormouse's story</title>
<class 'bs4.element.Tag'>
The Dormouse's story
```
### Relational selection
#### Children and descendants
To get the direct children of a node, call the contents attribute.
Example: get the p children under the body node:
```python
html = """
<html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
and they lived at the bottom of a well.
</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.body.contents)
```
Output (the result is returned as a list):
```text
['\n', <p class="story">
Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
and
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
and they lived at the bottom of a well.
</p>, '\n', <p class="story">...</p>, '\n']
```
The children attribute yields the same nodes as an iterator:
```python
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.body.children)
for i, child in enumerate(soup.body.children):
    print(i, child)
```
Output:
```text
<list_iterator object at 0x00000217D33CD048>
0
1 <p class="story">
Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
and
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
and they lived at the bottom of a well.
</p>
2
3 <p class="story">...</p>
4
```
To get all descendant nodes, call the descendants attribute:
```python
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.body.descendants)
for i, child in enumerate(soup.body.descendants):
    print(i, child)
```
Output:
```text
<generator object descendants at 0x0000014D106353B8>
0
1 <p class="story">
Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
and
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
and they lived at the bottom of a well.
</p>
2
Once upon a time there were three little sisters; and their names were
3 <a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
4
5 <span>Elsie</span>
6 Elsie
7
8
9 <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
10 Lacie
11
and
12 <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
13 Tillie
14
and they lived at the bottom of a well.
15
16 <p class="story">...</p>
17 ...
18
```
#### Parent and ancestor nodes
To get the parent of a node, call the parent attribute.
Example: get the parent p node of the a node, with its contents:
```python
html = """
<html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.a.parent)
```
Output:
```text
<p class="story">
Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
</p>
```
To get all ancestor nodes, call the parents attribute:
```python
html = """
<html>
<body>
<p class="story">
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(type(soup.a.parents))
print(list(enumerate(soup.a.parents)))
```
Output:
```text
<class 'generator'>
[(0, <p class="story">
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
</p>), (1, <body>
<p class="story">
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
</p>
</body>), (2, <html>
<body>
<p class="story">
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
</p>
</body></html>), (3, <html>
<body>
<p class="story">
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
</p>
</body></html>)]
```
#### Sibling nodes
* next_sibling: gets the next sibling node
* previous_sibling: gets the previous sibling node
* next_siblings: returns a generator over all following siblings
* previous_siblings: returns a generator over all preceding siblings
Example:
```python
html = """
<html>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
Hello
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
and they lived at the bottom of a well.
</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print('next sibling:',soup.a.next_sibling)
print('previous sibling:',soup.a.previous_sibling)
print("next siblings:",list(soup.a.next_siblings))
print("previouos siblings:",list(soup.a.previous_siblings))
```
Output:
```text
next sibling:
Hello
previous sibling:
Once upon a time there were three little sisters; and their names were
next siblings: ['\n Hello\n ', <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, ' \n and\n ', <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>, '\n and they lived at the bottom of a well.\n ']
previous siblings: ['\n Once upon a time there were three little sisters; and their names were\n ']
```
#### Extracting information
These relational attributes can be combined with the earlier extraction techniques to get text, attributes, and so on:
```python
html = """
<html>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Bob</a><a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print('Next Sibling:')
print(type(soup.a.next_sibling))
print(soup.a.next_sibling)
print(soup.a.next_sibling.string)
print('Parent:')
print(type(soup.a.parents))
print(list(soup.a.parents)[0])
print(list(soup.a.parents)[0].attrs['class'])
```
Output:
```text
Next Sibling:
<class 'bs4.element.Tag'>
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
Lacie
Parent:
<class 'generator'>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">Bob</a><a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
</p>
['story']
```
## 5. Method selectors
The most commonly used query methods are find_all() and find().
### find_all()
Finds all elements matching the given conditions.
Signature:
```text
find_all(name , attrs , recursive , text , **kwargs)
```
#### name
Query elements by node name:
```python
html='''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
</div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.find_all(name='ul'))
print(type(soup.find_all(name='ul')[0]))
```
Output (each element in the returned list is of type bs4.element.Tag):
```text
[<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>, <ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>]
<class 'bs4.element.Tag'>
```
Get the li nodes inside each ul, and the text inside each li:
```python
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
# print(soup.find_all(name='ul'))
# print(type(soup.find_all(name='ul')[0]))
for ul in soup.find_all(name='ul'):
    print(ul.find_all(name='li'))
    for li in ul.find_all(name='li'):
        print(li.string)
```
Output:
```text
[<li class="element">Foo</li>, <li class="element">Bar</li>, <li class="element">Jay</li>]
Foo
Bar
Jay
[<li class="element">Foo</li>, <li class="element">Bar</li>]
Foo
Bar
```
#### attrs
Query elements by attribute:
```python
html='''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
</div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
# Pass the attributes to match as a dictionary
print(soup.find_all(attrs={'id':'list-1'}))
print(soup.find_all(attrs={"name":"elements"}))
```
Output:
```text
[<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>]
[<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>]
```
Common attributes such as id and class can be passed directly as keyword arguments instead of through attrs:
```python
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(id='list-1'))
# class is a Python keyword, so the trailing underscore is required
print(soup.find_all(class_='element'))
```
Output:
```text
[<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>]
[<li class="element">Foo</li>, <li class="element">Bar</li>, <li class="element">Jay</li>, <li class="element">Foo</li>, <li class="element">Bar</li>]
```
#### text
The text argument matches the text of nodes; it accepts either a string or a compiled regular expression. (Newer versions of Beautiful Soup also accept this argument under the name string.)
```python
html='''
<div class="panel">
<div class="panel-body">
<a>Hello, this is a link</a>
<a>Hello, this is a link, too</a>
</div>
</div>
'''
import re
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
# Find all text strings that contain 'link'
print(soup.find_all(text=re.compile('link')))
```
Output:
```text
['Hello, this is a link', 'Hello, this is a link, too']
```
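The signature above also lists recursive, and find_all() additionally accepts limit. A minimal sketch on an assumed markup fragment: recursive=False restricts the search to direct children, and limit caps the number of results:
```python
from bs4 import BeautifulSoup

html = '''
<div class="panel">
  <div class="panel-body">
    <ul class="list" id="list-1">
      <li class="element">Foo</li>
      <li class="element">Bar</li>
      <li class="element">Jay</li>
    </ul>
  </div>
</div>
'''
soup = BeautifulSoup(html, 'lxml')
# recursive=False searches only the direct children of the panel div,
# so the <ul> nested inside panel-body is not found.
print(soup.find('div', attrs={'class': 'panel'}).find_all(name='ul', recursive=False))
# limit stops the search after the first two matches.
print(soup.find_all(name='li', limit=2))
```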
### find()
find() returns a single element, the first match, whereas find_all() returns a list of all matching elements:
```python
html='''
<div class="panel">
<div class="panel-body">
<a class='element'>Hello, this is a link</a>
<a class='element'>Hello, this is a link, too</a>
</div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.find(name='a'))
print(soup.find(attrs={'class':'element'}))
print(soup.find(class_='element'))
print(type(soup.find(name='a')))
```
Output (the return type is <class 'bs4.element.Tag'>):
```text
<a class="element">Hello, this is a link</a>
<a class="element">Hello, this is a link</a>
<a class="element">Hello, this is a link</a>
<class 'bs4.element.Tag'>
```
Other query methods:
### find_parents() and find_parent()
find_parents() returns all ancestor nodes; find_parent() returns the direct parent node.
### find_next_siblings() and find_next_sibling()
find_next_siblings() returns all following siblings; find_next_sibling() returns the first following sibling.
### find_previous_siblings() and find_previous_sibling()
find_previous_siblings() returns all preceding siblings; find_previous_sibling() returns the first preceding sibling.
### find_all_next() and find_next()
find_all_next() returns all matching nodes after the current node; find_next() returns the first matching node after it.
### find_all_previous() and find_previous()
find_all_previous() returns all matching nodes before the current node; find_previous() returns the first matching node before it.
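A brief sketch of a few of these methods, on an assumed markup fragment similar to the earlier sisters example:
```python
from bs4 import BeautifulSoup

html = '''
<p class="story">
  Once upon a time there were three little sisters;
  <a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>
  <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
  <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
</p>
'''
soup = BeautifulSoup(html, 'lxml')
first_a = soup.a
print(first_a.find_parent('p'))        # the enclosing <p> node
print(first_a.find_next_sibling('a'))  # the Lacie link
print(first_a.find_all_next('a'))      # every <a> after the first one
print(soup.find('a', id='link3').find_previous_sibling('a'))  # the Lacie link
```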
## 6. CSS selectors
Reference: [http://www.w3school.com.cn/cssref/css_selectors.asp](http://www.w3school.com.cn/cssref/css_selectors.asp).
To use CSS selectors, just call the select() method and pass in a CSS selector.
Example:
```python
html='''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
</div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
print(soup.select('.panel .panel-heading'))
print(soup.select('ul li'))
print(soup.select('#list-2 .element'))
print(soup.select('ul')[0])
```
Output:
```text
[<div class="panel-heading">
<h4>Hello</h4>
</div>]
[<li class="element">Foo</li>, <li class="element">Bar</li>, <li class="element">Jay</li>, <li class="element">Foo</li>, <li class="element">Bar</li>]
[<li class="element">Foo</li>, <li class="element">Bar</li>]
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
```
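Beautiful Soup also provides select_one(), which returns only the first node matching the selector, analogous to find(). Continuing with the same soup:
```python
# select_one() returns the first match, or None if nothing matches.
print(soup.select_one('ul li'))
```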
### Nested selection
select() also supports nesting: for example, first select all ul nodes, then select the li nodes inside each ul:
```python
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
for ul in soup.select('ul'):
    print(ul.select('li'))
```
Output:
```text
[<li class="element">Foo</li>, <li class="element">Bar</li>, <li class="element">Jay</li>]
[<li class="element">Foo</li>, <li class="element">Bar</li>]
```
### Getting attributes
Attributes can still be retrieved with the methods shown earlier, either by indexing the node or via attrs:
```python
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
for ul in soup.select('ul'):
    print(ul['id'])
    print(ul.attrs['id'])
```
Output:
```text
list-1
list-1
list-2
list-2
```
### Getting text
Text can be retrieved with the string attribute, or with the get_text() method, which returns the same text content:
```python
from bs4 import BeautifulSoup
soup = BeautifulSoup(html,'lxml')
for li in soup.select('li'):
    print('GET TEXT:', li.get_text())
    print('STRING:', li.string)
```
Output:
```text
GET TEXT: Foo
STRING: Foo
GET TEXT: Bar
STRING: Bar
GET TEXT: Jay
STRING: Jay
GET TEXT: Foo
STRING: Foo
GET TEXT: Bar
STRING: Bar
```
## 7. Practical notes
* Prefer the lxml parser; fall back to html.parser when necessary
* Direct node selection offers weak filtering but is fast
* Prefer find() and find_all() when querying for a single result or multiple results
* If you are comfortable with CSS selectors, use select()
If you are not yet fluent with the matching rules and want to grab a selector quickly, you can do the following:
![](https://box.kancloud.cn/2b67ba71951ee51ec68ce53b7197e436_778x823.png)
Right-click the element: ![](https://box.kancloud.cn/e8b554a00c22ca742636d5f9ab590116_842x739.png) That said, type your selectors by hand whenever you can; don't be lazy, or your skills won't improve.