[TOC]

# Python Web Scraping: Handling Client-Side Rendered (CSR) Pages

## Common Techniques Behind Client-Side Rendered Pages

- AJAX
- JavaScript

### How to Analyze an AJAX Request Endpoint

AJAX stands for Asynchronous JavaScript And XML. Despite the name, an AJAX application may transfer data as XML, but plain text and `JSON` are just as common today.

AJAX updates a web page by exchanging data with the web server asynchronously, which means parts of the page can be refreshed without reloading the whole page.

How does AJAX update page data without a refresh? It relies on the `XMLHttpRequest` object: XHR (XMLHttpRequest) exchanges data with the server in the background, and all modern browsers support it.

Open the browser's Developer Tools (press F12), visit `https://www.jianshu.com` (scroll to the bottom of the page and click "阅读更多" / "Read more"), then switch to the Network panel and set the Filter to "XHR".

The entries listed under XHR are the AJAX requests; their headers contain `"x-requested-with": "XMLHttpRequest"` and `"x-pjax": "true"`.

Locate the `trending_notes` request fired when you click "阅读更多", right-click it and choose Copy → Copy as fetch. You should see something like this:

```js
fetch("https://www.jianshu.com/trending_notes", {
  "headers": {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36",
    "accept": "text/html, */*; q=0.01",
    "accept-language": "zh-CN,zh;q=0.9,en-US;q=0.8,en;q=0.7,ja;q=0.6,zh-TW;q=0.5",
    "cache-control": "no-cache",
    "content-type": "application/x-www-form-urlencoded; charset=UTF-8",
    "pragma": "no-cache",
    "sec-fetch-dest": "empty",
    "sec-fetch-mode": "cors",
    "sec-fetch-site": "same-origin",
    "x-csrf-token": "jQ8cjTTjfPWR0dEoYEjtBSzr6v7XxsDg3x21en7Sl2eIuLg2WYmRhl+HR/iKNaeLxkvE7hdZcXILWYXMXyoKZQ==",
    "x-pjax": "true",
    "x-requested-with": "XMLHttpRequest"
  },
  "referrer": "https://www.jianshu.com/",
  "referrerPolicy": "no-referrer-when-downgrade",
  "body":
"page=4&seen_snote_ids%5B%5D=73208581&seen_snote_ids%5B%5D=70592266&seen_snote_ids%5B%5D=56402115&seen_snote_ids%5B%5D=70049570&seen_snote_ids%5B%5D=71683600&seen_snote_ids%5B%5D=54427150&seen_snote_ids%5B%5D=69777587&seen_snote_ids%5B%5D=72391260&seen_snote_ids%5B%5D=73111362&seen_snote_ids%5B%5D=70579958&seen_snote_ids%5B%5D=72330113&seen_snote_ids%5B%5D=72741304&seen_snote_ids%5B%5D=73501668&seen_snote_ids%5B%5D=72890232&seen_snote_ids%5B%5D=71820900&seen_snote_ids%5B%5D=70868542&seen_snote_ids%5B%5D=72294439&seen_snote_ids%5B%5D=72060912&seen_snote_ids%5B%5D=72060165&seen_snote_ids%5B%5D=70942923&seen_snote_ids%5B%5D=71081191", "method": "POST", "mode": "cors", "credentials": "include" }); ``` 下面我们分析一下这条请求: - method方法在`page < 3`时为 `GET`,之后使用的是`POST` , 也就是说两种方法都是可以的。 - 提交参数`body`中有: `page=4`和一些`seen_snote_ids[]=xxxx`的已读文章id列表信息,说明携带的参数就是页数和已经阅读过的文章id列表。 - Headers信息保留即可。 实现一下抓取代码: ```Python import requests as req from lxml import etree import time url_host='https://www.jianshu.com' headers = { "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36", "accept": "text/html, */*; q=0.01", "accept-language": "zh-CN,zh;q=0.9,en-US;q=0.8,en;q=0.7,ja;q=0.6,zh-TW;q=0.5", "cache-control": "no-cache", "content-type": "application/x-www-form-urlencoded; charset=UTF-8", "pragma": "no-cache", "sec-fetch-dest": "empty", "sec-fetch-mode": "cors", "sec-fetch-site": "same-origin", "x-csrf-token": "jQ8cjTTjfPWR0dEoYEjtBSzr6v7XxsDg3x21en7Sl2eIuLg2WYmRhl+HR/iKNaeLxkvE7hdZcXILWYXMXyoKZQ==", "x-pjax": "true", "x-requested-with": "XMLHttpRequest" } def jianshu_trending(page, payload): max_page = 3 # 抓取简书发现的文章列表 if page > max_page: url = url_host + '/trending_notes' else: url = url_host # JSON数据接口 print(payload) seen_list = [] if page > max_page: resp = req.post(url, data = payload, headers = headers) else: resp = req.get(url, params=payload, headers = headers) doc = etree.HTML(resp.text) li_list = doc.xpath('//li') 
    print('*' * 40)
    for item in li_list:
        # Relative XPath: read this <li>'s own id (an absolute '//li/@data-note-id'
        # would return the first <li> in the document every time)
        note_id = item.xpath('@data-note-id')[0]
        seen_list.append(note_id)
        url = url_host + item.xpath('div[@class="content"]/a[@class="title"]/@href')[0]
        title = item.xpath('div[@class="content"]/a[@class="title"]/text()')[0]
        brief = str(item.xpath('div[@class="content"]/p[@class="abstract"]/text()')[0])
        user = item.xpath('div[@class="content"]/div[@class="meta"]/a[@class="nickname"]/text()')[0]
        user_url = url_host + item.xpath('div[@class="content"]/div[@class="meta"]/a[@class="nickname"]/@href')[0]
        span = item.xpath('div[@class="content"]/div[@class="meta"]/span/text()')
        like = span[0]
        if len(span) == 2:
            like = span[1]
        print('Title: ' + title + ' | Author: ' + user + ' Profile: ' + user_url)
        # print('Link: ' + url)
        # print('Likes: ' + like)
        # print('Abstract: ' + brief.strip())
    print('*' * 40)
    return seen_list


if __name__ == '__main__':
    seen_list = []
    for i in range(1, 15):  # pages are numbered from 1
        if len(seen_list) > 0:
            payload = {'page': i, 'seen_snote_ids[]': seen_list}
        else:
            payload = {'page': i}
        seen_list += jianshu_trending(i, payload)
        time.sleep(3)
```
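One fragile spot in the code above is the hardcoded `x-csrf-token` header: a token copied out of DevTools eventually expires. Many sites (typically Rails-based ones) embed the current token in a `<meta name="csrf-token">` tag in the page head, so you can parse a fresh one out of the first GET response and reuse it for subsequent POSTs on the same `requests.Session`. This is an assumption to verify in your own DevTools; a minimal sketch of the extraction step:

```python
from lxml import etree


def extract_csrf_token(html_text):
    """Pull the CSRF token out of the <meta name="csrf-token"> tag.

    Assumption: the site embeds its token this way (a common Rails
    convention); check the page source in DevTools to confirm.
    """
    doc = etree.HTML(html_text)
    token = doc.xpath('//meta[@name="csrf-token"]/@content')
    return token[0] if token else None


# Demo on an inline snippet shaped like a typical page head
# (the token value here is a fake placeholder):
sample = ('<html><head>'
          '<meta name="csrf-token" content="abc123fakeToken==" />'
          '</head><body></body></html>')
print(extract_csrf_token(sample))  # -> abc123fakeToken==
```

In practice you would call `extract_csrf_token(session.get(url_host).text)` once, put the result into the `x-csrf-token` header, and keep all requests on the same `requests.Session` so the cookies the token is tied to stay consistent.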