Scrapy is an open-source Python framework that lets you build web crawlers (often called spiders) to extract structured data from websites. It is a powerful tool that can be used for many different purposes, from data mining to monitoring and automated testing.

To use Scrapy, you need Python installed on your computer along with a handful of supporting libraries (such as lxml, parsel and Twisted) that the framework depends on; these are pulled in automatically when you install the package.

Installing Scrapy is easy and can be done on Windows, Linux or macOS. The framework is designed to make web scraping simple and even fun.
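Assuming Python and pip are already available, installation is a single command in a terminal on any of the three platforms:

```shell
# Install Scrapy (ideally inside a virtual environment)
pip install scrapy

# Verify the install by printing the version
scrapy version
```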

The main feature of Scrapy is its ability to send concurrent, asynchronous requests to websites in a fault-tolerant manner. This lets a crawler process large amounts of data in very little time.

You can also configure it to throttle how many requests are sent in a given period, or to limit the number of concurrent requests to a given domain or IP address. The latter option is especially useful if you want to avoid being banned from a website, or simply want to make sure your crawls don’t eat up too much of the site’s resources.
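These limits live in a project's settings.py. The setting names below are Scrapy's own; the values are only illustrative:

```python
# settings.py (fragment) -- illustrative values

# Global cap on concurrent requests across the whole crawl
CONCURRENT_REQUESTS = 16

# Per-domain cap; keeps any single site from being hammered
CONCURRENT_REQUESTS_PER_DOMAIN = 8

# Per-IP cap; when non-zero it overrides the per-domain cap
CONCURRENT_REQUESTS_PER_IP = 0

# Fixed delay (in seconds) between requests to the same site
DOWNLOAD_DELAY = 0.5
```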

Two other important attributes are allowed_domains and start_urls, which keep the spider from wandering onto unwanted sites. allowed_domains specifies the set of domains the spider is permitted to crawl, while start_urls defines the URLs where the crawl begins.

Item Loaders

Item loaders help you populate and format the data you pull from the website you scrape. Their input and output processors can strip whitespace from a field such as ‘description’, join multiple extracted values into a single string, and so on. They are very useful for producing clean items that can then be exported as JSON and stored on a file system or in a database.

XPath queries

XPath queries let you search for information anywhere in a page’s markup. In Scrapy you run them with the xpath() method on a selector, and you can refine the results with the regular-expression re() method.

The re() method differs from xpath() in that it is applied on top of an XPath (or CSS) selection: rather than returning selectors that you extract with get() or getall(), it applies a regular expression to the selected text and returns a list of strings directly, so no extra extraction step is needed.

XPath and CSS queries are not only useful for scraping text; they can also pull the URLs of images and videos. This is very handy for pages with embedded video or image galleries that you may want to scrape.

The interactive shell

One of the most useful features in Scrapy is its interactive shell, which lets you run and test your scraping code without launching a full spider. Start it with the command scrapy shell in your terminal, optionally followed by a URL to fetch.
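A typical session, again using the public practice site quotes.toscrape.com:

```shell
# Fetch a page and drop into the interactive shell
scrapy shell "https://quotes.toscrape.com"

# Inside the shell, experiment with selectors against the response:
#   >>> response.css("title::text").get()
#   >>> response.xpath("//h1//text()").getall()
```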

AutoThrottle

With the AutoThrottle extension enabled, Scrapy dynamically adjusts its crawling speed based on the load of the website it is scraping (and of Scrapy itself). This is a great way to keep your IP address from being blocked, and it makes the website’s servers happier as well.
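AutoThrottle is off by default and is switched on in settings.py. The setting names below are Scrapy's own; the values are its common defaults, shown for illustration:

```python
# settings.py (fragment) -- enabling the AutoThrottle extension

AUTOTHROTTLE_ENABLED = True

# Initial download delay (seconds), before latency measurements kick in
AUTOTHROTTLE_START_DELAY = 5.0

# Upper bound on the delay when the server is responding slowly
AUTOTHROTTLE_MAX_DELAY = 60.0

# Average number of requests to send in parallel to each remote site
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0

# Set to True to log throttling stats for every response received
AUTOTHROTTLE_DEBUG = False
```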