Web crawler and scraper for Rust
A Web crawler, sometimes called a spider or spiderbot and often shortened to crawler, is an Internet bot that systematically browses the World Wide Web, typically operated by search engines for the purpose of Web indexing (web spidering).
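The systematic browsing described above boils down to a frontier queue plus a visited set. A minimal std-only Rust sketch, with fetching mocked by a hypothetical in-memory link map (a real crawler would fetch pages over HTTP and parse links out of the HTML):

```rust
use std::collections::{HashMap, HashSet, VecDeque};

// Mocked "web": page URL -> outgoing links. A real crawler would
// fetch each URL over HTTP and extract hrefs from the response body.
fn fetch_links(site: &HashMap<&str, Vec<&str>>, url: &str) -> Vec<String> {
    site.get(url)
        .map(|links| links.iter().map(|s| s.to_string()).collect())
        .unwrap_or_default()
}

// Breadth-first crawl: the visited set guarantees each page is
// processed once; the queue makes discovery order breadth-first.
fn crawl(site: &HashMap<&str, Vec<&str>>, start: &str) -> Vec<String> {
    let mut visited = HashSet::new();
    let mut frontier = VecDeque::new();
    let mut order = Vec::new();
    visited.insert(start.to_string());
    frontier.push_back(start.to_string());
    while let Some(url) = frontier.pop_front() {
        order.push(url.clone());
        for link in fetch_links(site, &url) {
            // insert() returns true only the first time a link is seen
            if visited.insert(link.clone()) {
                frontier.push_back(link);
            }
        }
    }
    order
}

fn main() {
    let mut site = HashMap::new();
    site.insert("/", vec!["/a", "/b"]);
    site.insert("/a", vec!["/b", "/c"]);
    site.insert("/b", vec!["/"]);
    site.insert("/c", vec![]);
    println!("{:?}", crawl(&site, "/")); // ["/", "/a", "/b", "/c"]
}
```

The projects below layer concurrency, politeness (robots.txt, rate limits), and storage on top of this same frontier/visited loop.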
An ergonomic Python HTTP Client with TLS fingerprint
An ergonomic Rust HTTP Client with TLS fingerprint
SiteOne Crawler is a cross-platform website crawler and analyzer for SEO, security, accessibility, and performance optimization—ideal for developers, DevOps, QA engineers, and consultants. Supports Windows, macOS, and Linux (x64 and arm64).
A rock-solid cryptocurrency crawler library.
🕷️ The pipeline for the OSCAR corpus
Dyer is designed for reliable, flexible, and fast web crawling, providing high-level, comprehensive features without compromising speed.
✴️ An experimental graph database
Spider ported to Python
🎓 A better BlackBoard for PKUers: a command-line tool for the Peking University course website (🖥️Win/🐧Linux/🍏Mac) that supports viewing and submitting assignments and downloading lecture recordings.
A command line tool based on the crypto-crawler library.
Spider ported to Node.js
CLI to download all images/webms in a 4chan thread
Kabegame — An anime image crawler client with pluggable crawlers (from a GitHub plugin repo), wallpaper rotation by custom rules, and Wallpaper Engine export. Supports Windows 10/11, macOS Big Sur+, and Ubuntu.
⚡ A subdomain enumeration tool leveraging diverse techniques, designed for advanced pentesting operations
Fast, local-first web content extraction for LLMs. Scrape, crawl, extract structured data — all from Rust. CLI, REST API, and MCP server.
Rust Web Crawler saving pages on Redis
Crawling and scraping the Web for fun and profit