
Apify SDK: The scalable web crawling and scraping library for JavaScript


Apify SDK simplifies the development of web crawlers, scrapers, data extractors and web automation jobs.
It provides tools to manage and automatically scale a pool of headless browsers,
to maintain queues of URLs to crawl, store crawling results to a local filesystem or into the cloud,
rotate proxies and much more.
The SDK is available as the apify NPM package.
It can be used either stand-alone in your own applications
or in actors
running on the Apify Cloud.


View full documentation, guides and examples on the Apify SDK project website


Motivation

Thanks to tools like Playwright, Puppeteer or
Cheerio, it is easy to write Node.js code to extract data from web pages. But
eventually things will get complicated. For example, when you try to:

- Perform a deep crawl of an entire website using a persistent queue of URLs.
- Run your scraping code on a list of 100k URLs in a CSV file, without losing any data when your code crashes.
- Rotate proxies to hide your browser origin and keep user-like sessions.
- Disable browser fingerprinting protections used by websites.

Python has Scrapy for these tasks, but there was no such library for JavaScript, the language of the web.
The use of JavaScript is natural, since the same language is used to write the scripts as well as the data extraction code running in a browser.


The goal of the Apify SDK is to fill this gap and provide a toolbox for generic web scraping, crawling and automation tasks in JavaScript. So don't
reinvent the wheel every time you need data from the web, and focus on writing code specific to the target website, rather than developing
commonalities.


Overview

The Apify SDK is available as the apify NPM package and it provides the following tools:

- BasicCrawler - Provides a simple framework for the parallel crawling of web pages. This class serves as a base for the more specialized crawlers below.
- CheerioCrawler - Enables the parallel crawling of a large number of web pages using the cheerio HTML parser. It is the most efficient crawler, but it does not work on websites that require JavaScript.
- PuppeteerCrawler and PlaywrightCrawler - Enable the parallel crawling of a large number of web pages using headless browsers controlled by Puppeteer or Playwright. The pool of browsers is automatically scaled up and down based on available system resources.
- RequestList - Represents a static list of URLs to crawl.
- RequestQueue - Represents a dynamic queue of URLs to crawl, stored either on the local filesystem or in the Apify Cloud.
- Dataset - Provides storage for structured data, such as the results of a crawl.
- KeyValueStore - A simple key-value storage for arbitrary data records, such as files, screenshots of web pages, or PDFs.
- AutoscaledPool - Runs asynchronous background tasks concurrently, scaling the pool based on available system resources.

Additionally, the package provides various helper functions to simplify running your code on the Apify Cloud and thus
take advantage of its pool of proxies, job scheduler, data storage, etc.
For more information, see the Apify SDK Programmer's Reference.
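For a flavor of these helpers, here is a minimal sketch. Apify.pushData() and Apify.call() are SDK helpers; the actor name and data below are illustrative only:

```javascript
const Apify = require('apify');

Apify.main(async () => {
    // Store a result into the default dataset (a local JSON file,
    // or a cloud dataset when running on the Apify platform).
    await Apify.pushData({ url: 'https://example.com', status: 'done' });

    // Call another actor on the Apify platform and wait for it to finish.
    // This requires the APIFY_TOKEN environment variable, so it is
    // commented out for plain local runs.
    // const run = await Apify.call('apify/hello-world', { message: 'Hi!' });
});
```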


Quick Start

This short tutorial will set you up to start using Apify SDK in a minute or two.
If you want to learn more, proceed to the Getting Started
tutorial that will take you step by step through creating your first scraper.


Local stand-alone usage

Apify SDK requires Node.js 15.10 or later.
Add Apify SDK to any Node.js project by running:


```bash
npm install apify playwright
```



Neither playwright nor puppeteer is bundled with the SDK, to reduce install size and allow greater
flexibility. That's why we install them with NPM. You can choose one, both, or neither.
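If you install neither, you can still crawl over plain HTTP. A minimal sketch using CheerioCrawler (the target URL is only an illustration):

```javascript
const Apify = require('apify');

Apify.main(async () => {
    const requestQueue = await Apify.openRequestQueue();
    await requestQueue.addRequest({ url: 'https://www.iana.org/' });

    // CheerioCrawler downloads pages with plain HTTP requests and parses
    // them with cheerio, so no browser needs to be installed. It is fast,
    // but it cannot execute JavaScript on the page.
    const crawler = new Apify.CheerioCrawler({
        requestQueue,
        handlePageFunction: async ({ request, $ }) => {
            console.log(`Title of ${request.url}: ${$('title').text()}`);
        },
    });

    await crawler.run();
});
```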



Run the following example to perform a recursive crawl of a website using Playwright. For more examples showcasing various features of the Apify SDK,
see the Examples section of the documentation.


```javascript
const Apify = require('apify');

// Apify.main is a helper function, you don't need to use it.
Apify.main(async () => {
    const requestQueue = await Apify.openRequestQueue();
    // Choose the first URL to open.
    await requestQueue.addRequest({ url: 'https://www.iana.org/' });

    const crawler = new Apify.PlaywrightCrawler({
        requestQueue,
        handlePageFunction: async ({ request, page }) => {
            // Extract HTML title of the page.
            const title = await page.title();
            console.log(`Title of ${request.url}: ${title}`);

            // Add URLs that match the provided pattern.
            await Apify.utils.enqueueLinks({
                page,
                requestQueue,
                pseudoUrls: ['https://www.iana.org/[.*]'],
            });
        },
    });

    await crawler.run();
});
```


When you run the example, you should see Apify SDK automating a Chrome browser.



By default, Apify SDK stores data to ./apify_storage in the current working directory. You can override this behavior by setting either the
APIFY_LOCAL_STORAGE_DIR or APIFY_TOKEN environment variable. For details, see Environment variables, Request storage and Result storage.
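As an illustration, here is a sketch of overriding the storage directory from inside a script. It assumes the SDK reads the variable when the first storage is opened, so it must be set early; the path is just an example:

```javascript
// Set before the SDK opens any storage (example path, not a convention).
process.env.APIFY_LOCAL_STORAGE_DIR = './my_storage';

const Apify = require('apify');

Apify.main(async () => {
    // The record is now written to ./my_storage/key_value_stores/default/OUTPUT.json.
    await Apify.setValue('OUTPUT', { hello: 'world' });
});
```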


Local usage with Apify command-line interface (CLI)

To avoid the need to set the environment variables manually, to create a boilerplate of your project, and to enable pushing and running your code on
the Apify platform, you can use the Apify command-line interface (CLI) tool.


Install the CLI by running:


```bash
npm -g install apify-cli
```


Now create a boilerplate of your new web crawling project by running:


```bash
apify create my-hello-world
```


The CLI will prompt you to select a project boilerplate template - just pick "Hello world". The tool will create a directory called my-hello-world
with Node.js project files. You can run the project as follows:


```bash
cd my-hello-world
apify run
```


By default, the crawling data will be stored in a local directory at ./apify_storage. For example, the input JSON file for the actor is expected to
be in the default key-value store in ./apify_storage/key_value_stores/default/INPUT.json.
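For example, a minimal sketch of reading that input from inside an actor (the contents of the input are whatever you put in INPUT.json):

```javascript
const Apify = require('apify');

Apify.main(async () => {
    // Reads INPUT.json from the default key-value store when running locally,
    // or the actor input when running on the Apify platform.
    const input = await Apify.getInput();
    console.log(`Received input: ${JSON.stringify(input)}`);
});
```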


Now you can easily deploy your code to the Apify platform by running:


```bash
apify login
```

```bash
apify push
```


Your script will be uploaded to the Apify platform and built there so that it can be run. For more information, view the
Apify Actor documentation.


Usage on the Apify platform

You can also develop your web scraping project in an online code editor directly on the Apify platform.
You'll need to have an Apify account. Go to the Actors page in the app, click Create new,
then go to the Source tab and start writing your code, or paste one of the examples from the Examples section.


For more information, view the Apify actors quick start guide.


Support

If you find any bug or issue with the Apify SDK, please submit an issue on GitHub.
For questions, you can ask on Stack Overflow or contact [email protected]


Contributing

Your code contributions are welcome and you'll be praised to eternity!
If you have any ideas for improvements, either submit an issue or create a pull request.
For contribution guidelines and the code of conduct,
see CONTRIBUTING.md.


License

This project is licensed under the Apache License 2.0 -
see the LICENSE.md file for details.


Acknowledgments

Many thanks to Chema Balsas for giving up the apify package name
on NPM and renaming his project to jsdocify.