scrawler

"scrawler" = "scraper" + "crawler"

scrawler is a Python package for systematic web scraping and crawling. It provides functionality for automatically collecting website data (web scraping) and for following links to map an entire domain (crawling). It can handle these tasks individually, or process several websites/domains in parallel using asyncio and multithreading.
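
As a rough illustration, a single-domain crawl might look like the sketch below. The class and parameter names used here (Crawler, SearchAttributes, ExportAttributes, and the extractors) are assumptions for illustration only; see the Getting Started Guide below for the actual API.

    # Hypothetical sketch -- the names below are assumptions, not the verified scrawler API.
    from scrawler import Crawler
    from scrawler.attributes import SearchAttributes, ExportAttributes
    from scrawler.data_extractors import UrlExtractor, TitleExtractor

    # Crawl one domain, collecting each page's URL and title,
    # and write the results to a file in the working directory.
    crawler = Crawler(
        "https://example.com",
        search_attributes=SearchAttributes(UrlExtractor(), TitleExtractor()),
        export_attributes=ExportAttributes(directory=".", fn="results"),
    )
    crawler.run()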

This project was initially developed while I was working at the Fraunhofer Institute for Systems and Innovation Research. Many thanks for the opportunity and support!

Installation

You can install scrawler from PyPI:

pip install scrawler

Note

Alternatively, the .whl and .tar.gz files are attached to each release on GitHub.
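
A downloaded wheel can then be installed directly with pip (the file name below is a placeholder; use the actual file from the release you downloaded):

    pip install scrawler-<version>-py3-none-any.whl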

Getting Started

Check out the Getting Started Guide.

Documentation

Documentation is available at Read the Docs.