Scrapy + Playwright: A Practical Guide to Scraping Dynamically Loaded Websites
If you have ever tried to scrape a modern website with a traditional crawler, you have probably hit this wall: content that is plainly visible in the browser comes back empty when fetched with requests. The reason is that the vast majority of modern sites load their content dynamically, and the Scrapy + Playwright combination is purpose-built to solve exactly this pain point.
I experienced the power of this combination firsthand on an e-commerce data-collection project last year. On the target site, product details only loaded after scrolling to the bottom of the page, and price information only appeared on mouse hover; a traditional crawler was completely helpless. After switching to Playwright, our data acquisition rate jumped from 30% to 98%, and we got past the site's anti-bot mechanisms as well.
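To see why a real browser matters here, consider this minimal sketch in plain Playwright (outside Scrapy): it scrolls to the bottom to trigger lazy loading and hovers to reveal a price. The URL and selectors are made up for illustration.

```python
import asyncio
from playwright.async_api import async_playwright

async def main():
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        page = await browser.new_page()
        await page.goto("https://example.com/product/123")  # placeholder URL

        # Scroll to the bottom so lazily loaded details get rendered.
        await page.evaluate("window.scrollTo(0, document.body.scrollHeight)")

        # Hover to make the price element appear, then read its text.
        await page.hover("div.price-trigger")          # hypothetical selector
        price = await page.text_content("span.price")  # hypothetical selector
        print(price)

        await browser.close()

asyncio.run(main())
```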
Where this combination shines:

- `page.mouse.move()` can replay realistic mouse trajectories; in one project this got us past a social platform's bot detection (see the sketch below).
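Here is a minimal sketch of trajectory simulation; the jitter values and step counts are illustrative choices, not values from the project. Playwright's `steps` argument makes `mouse.move()` emit interpolated intermediate events instead of a single jump.

```python
import random

async def human_like_move(page, x: float, y: float) -> None:
    # steps > 1 makes Playwright emit interpolated mousemove events,
    # which resembles a human hand far more than one instant jump.
    await page.mouse.move(
        x + random.uniform(-3, 3),  # small jitter around the target
        y + random.uniform(-3, 3),
        steps=random.randint(15, 30),
    )
```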
Newcomers stumble most often during environment setup. Here are a few lessons learned in practice:

```bash
# Recommended: Python 3.8+
pyenv install 3.8.12
pyenv local 3.8.12        # make the new interpreter active in this directory
python -m venv playwright_env
source playwright_env/bin/activate

# Install the core dependencies (mind the version compatibility)
pip install scrapy==2.11.0 playwright==1.40.0 scrapy-playwright==0.0.30

# Install the browser binaries (consider exporting the variable permanently)
PLAYWRIGHT_BROWSERS_PATH=$HOME/pw_browsers playwright install
```

Common problems and fixes:
- Missing system libraries in Alpine-based Docker images:

```dockerfile
RUN apk add --no-cache python3 py3-pip gcc python3-dev musl-dev libffi-dev openssl-dev
```

- Certificate errors on HTTPS sites — add to `settings.py`:

```python
PLAYWRIGHT_CONTEXT_KWARGS = {"ignore_https_errors": True}
```

- Browser launch timing out on slow machines:

```python
PLAYWRIGHT_LAUNCH_OPTIONS = {
    "timeout": 60000,  # 60-second launch timeout
    "args": ["--disable-gpu"],
}
```

The default configuration only covers basic needs; high-performance crawling calls for deeper middleware customization. Here is the configuration template I have refined over several projects:
```python
# settings.py
DOWNLOAD_HANDLERS = {
    "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
    "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
}
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"

PLAYWRIGHT_BROWSER_TYPE = "chromium"
PLAYWRIGHT_LAUNCH_OPTIONS = {
    "headless": True,
    "channel": "chrome",  # use the stable Chrome channel instead of bundled Chromium
    "args": [
        "--single-process",
        "--no-zygote",
        "--disable-web-security",
    ],
}

# Context management
PLAYWRIGHT_MAX_CONTEXTS = 8
PLAYWRIGHT_DEFAULT_NAVIGATION_TIMEOUT = 30000  # 30 s
```

Performance-tuning parameters:
- `PLAYWRIGHT_MAX_PAGES_PER_CONTEXT`: caps the number of pages per context; 4-6 works well in practice.
- `PLAYWRIGHT_ABORT_REQUEST`: intercepts and drops unnecessary resource requests:

```python
def should_abort_request(request):
    # Skip images and web fonts to cut bandwidth and speed up page loads.
    return (
        request.resource_type == "image"
        or ".woff2" in request.url
    )
```

Let's walk through an e-commerce spider that shows how to handle the usual dynamic scenarios:
```python
import scrapy
from scrapy_playwright.page import PageMethod  # PageCoroutine was renamed to PageMethod


class EcommerceSpider(scrapy.Spider):
    name = "jd_spider"

    def start_requests(self):
        url = "https://item.jd.com/100038325784.html"
        yield scrapy.Request(
            url,
            meta={
                "playwright": True,
                "playwright_page_methods": [
                    PageMethod("wait_for_selector", "div.sku-name"),
                    PageMethod("evaluate", "window.scrollBy(0, 500)"),
                    PageMethod("wait_for_selector", "div.price-box"),
                    PageMethod("click", "li.tab-main:last-child"),
                    PageMethod("wait_for_timeout", 2000),
                ],
                "playwright_context_kwargs": {
                    "viewport": {"width": 1920, "height": 1080},
                    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
                },
            },
            callback=self.parse_detail,
        )

    async def parse_detail(self, response):
        # Extract the product details
        item = {
            "title": (response.css("div.sku-name::text").get() or "").strip(),
            "price": response.css("span.price::text").get(),
            "comments": response.xpath("//div[@id='comment']//text()").get(),
        }
        # Collect the specification table; each <li> holds "key:value" text
        # (site-specific assumption)
        specs = {}
        for row in response.css("ul.parameter2 li"):
            text = (row.xpath("./text()").get() or "").strip()
            if text:
                key, _, value = text.partition(":")
                specs[key] = value
        item["specs"] = specs
        yield item
```

Advanced interaction techniques:
- Intercepting requests:

```python
await page.route("**/*", lambda route: route.continue_())
```

- Handling file downloads:

```python
download = await page.wait_for_event("download")
await download.save_as("/path/to/save")
```

- Working inside iframes:

```python
frame = page.frame_locator("iframe#loginIframe")
await frame.locator("#username").fill("user123")
```

- Listening to WebSocket traffic:

```python
page.on("websocket", lambda ws: print(ws.url))
```

Under high concurrency, the optimizations below made our crawlers dramatically more stable:
Concurrency sizing matrix:
| Hardware | Recommended concurrency | Memory footprint | Success rate |
|---|---|---|---|
| 4 cores / 8 GB | 12-16 | 6 GB | 98.5% |
| 8 cores / 16 GB | 24-32 | 12 GB | 99.2% |
| 16 cores / 32 GB | 48-64 | 24 GB | 99.5% |
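To turn the table into configuration, here is a sketch of the corresponding `settings.py` values for the 4-core / 8 GB row. The numbers are derived from the table above, not scrapy-playwright defaults:

```python
# settings.py -- sized for a 4-core / 8 GB machine (per the table above)
CONCURRENT_REQUESTS = 16               # top of the recommended 12-16 range
CONCURRENT_REQUESTS_PER_DOMAIN = 8     # stay polite toward a single site
PLAYWRIGHT_MAX_CONTEXTS = 4            # roughly one context per CPU core
PLAYWRIGHT_MAX_PAGES_PER_CONTEXT = 4   # 4 contexts x 4 pages = 16 concurrent pages
```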
Exception-handling template:
```python
from playwright.async_api import TimeoutError as PlaywrightTimeoutError

async def parse(self, response):
    page = response.meta.get("playwright_page")
    try:
        ...  # parsing logic goes here
    except PlaywrightTimeoutError:
        self.logger.warning(f"Timeout on {response.url}")
        yield self.retry_request(response.request)
    except Exception as e:
        self.logger.error(f"Error parsing {response.url}: {e}")
    finally:
        # Always release the page, or contexts leak under load.
        if page:
            await page.close()

def retry_request(self, request):
    # replace() returns a copy; dont_filter lets it through the dupefilter again
    return request.replace(dont_filter=True)
```

Memory-leak troubleshooting:
- Catch Playwright-specific failures via `playwright._impl._api_types.Error` instead of a blanket `Exception`.
- Inspect Twisted's pending delayed calls to spot callbacks that never fire:

```python
from twisted.internet import reactor

print(reactor.getDelayedCalls())
```

Here are a few anti-bot cases from recent projects, with the solutions that worked:
Case 1: Fingerprint detection
```python
# Give the context a coherent, realistic fingerprint
context = await browser.new_context(
    user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    locale="zh-CN",
    timezone_id="Asia/Shanghai",
    color_scheme="dark",
)
```

Case 2: Behavioral verification
```python
# Serve a local mock page in place of the Geetest challenge assets
await page.route(
    "**/geetest/**",
    lambda route: route.fulfill(path="./geetest_mock.html"),
)
```

Case 3: Request-rate limiting
```python
import time

import redis

redis_conn = redis.Redis()  # shared counter backing the rate limit


class CustomMiddleware:
    def process_request(self, request, spider):
        # Consume one token; wait while the shared budget is exhausted.
        # An external process is assumed to refill the "req_limit" counter.
        # Caveat: time.sleep() blocks the Twisted reactor, so this simple
        # version only suits low-throughput spiders.
        redis_conn.decr("req_limit")
        while int(redis_conn.get("req_limit") or 0) <= 0:
            time.sleep(0.5)
```
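To take effect, the middleware has to be registered in `settings.py`; a sketch, where the module path and priority are placeholders:

```python
# settings.py
DOWNLOADER_MIDDLEWARES = {
    "myproject.middlewares.CustomMiddleware": 543,  # hypothetical module path
}
```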
For large crawling projects, a layered architecture is recommended:

```
project/
├── spiders/
│   ├── base_spider.py   # shared base spider class
│   ├── product/         # spiders grouped by business domain
│   └── news/
├── middlewares/
│   ├── proxy.py         # proxy middleware
│   └── retry.py         # retry middleware
├── pipelines/
│   ├── mysql.py         # database persistence
│   └── redis.py         # caching
├── utils/
│   ├── context.py       # context management
│   └── captcha.py       # CAPTCHA handling
└── config/
    ├── settings.py      # base configuration
    └── proxies.txt      # proxy list
```

Key design patterns:
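One pattern worth calling out is a template-method style base spider that centralizes the Playwright plumbing so business spiders only declare selectors. A minimal sketch of what `base_spider.py` could contain — class and method names are my assumptions, not the project's actual code:

```python
import scrapy
from scrapy_playwright.page import PageMethod


class BaseSpider(scrapy.Spider):
    """Centralizes Playwright request construction for all business spiders."""

    # Subclasses override this with the selector that signals a rendered page.
    wait_selector = "body"

    def playwright_request(self, url, callback, extra_methods=None):
        # Build the boilerplate meta once, so subclasses never repeat it.
        methods = [PageMethod("wait_for_selector", self.wait_selector)]
        if extra_methods:
            methods.extend(extra_methods)
        return scrapy.Request(
            url,
            meta={"playwright": True, "playwright_page_methods": methods},
            callback=callback,
        )
```

A product spider then just sets `wait_selector` and implements its parser, which is where most of the reuse gain comes from.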
In a recent project that collected tens of millions of records, this architecture pushed our code-reuse rate to 75% and improved development efficiency by roughly 40%.