How to scrape with Postman?


Do you find scraping APIs difficult to use as a non-developer? The explanations below will help you use Postman and get your data with ScrapingBot without having to write a single line of code.

What is Postman?

Postman is an application you can download and use for free. You can find it here.

It’s actually really easy to set up. We’ll walk you through the steps to configure Postman for use with ScrapingBot.
It can be used in many ways. In this article, we’ll focus on the core functionality it offers: making API calls. Just follow the instructions below and see how you can do some scraping with Postman.

How can I use Postman with ScrapingBot?

First of all, if you do not already have one, you’ll need to create a free account on ScrapingBot.


Here is what Postman should look like after you’ve installed it.

Now to get started, simply click on “Create a request”.

scraping with postman

Now, using our documentation, we’ll fill in the different fields needed to make API calls to ScrapingBot.

First, we need to configure the credentials: click on the “Authorization” tab and, under “Type”, select “Basic auth”.

postman authorization

In this tab, your “Username” will be your ScrapingBot username, and the “Password” will be your API key (you can find your API key on your dashboard).
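For reference, here is what Postman does behind the scenes with those two fields: Basic auth simply base64-encodes “username:password” into an “Authorization” header. A minimal Python sketch (the username and API key below are placeholders, not real credentials):

```python
import base64

# Placeholder credentials -- replace with your ScrapingBot username and API key
username = "your_scrapingbot_username"
api_key = "your_api_key"

# Basic auth: base64-encode "username:password" and prefix with "Basic "
token = base64.b64encode(f"{username}:{api_key}".encode()).decode()
auth_header = {"Authorization": f"Basic {token}"}
print(auth_header["Authorization"])
```

Postman builds this header for you automatically, so you never have to do this by hand.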


For this article, we’ll first go through a basic usage example: getting the HTML content of a page.

First, you need to set the request type to POST (click where it says “GET” and a dropdown will appear where you can select “POST”). Then add the URL of our “raw-html” endpoint as follows.
postman post with url

Now for the last step before sending the API call: we need to configure the body of our request in order to give the API the information it needs to perform the scraping.

In Postman, click on the “Body” tab, select “raw”, then click on “Text” and select “JSON”. We select JSON because it’s the format of the data we send to the API, and it also helps with readability.

Finally, we add the JSON containing the “url” parameter, which tells the API which page it needs to extract the data from. In our example it’s an Amazon product page, but it can be any publicly available page.
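If you are curious, the request Postman sends can be sketched in a few lines of Python using the standard library. The product URL and the exact endpoint path below are illustrative placeholders; copy the real “raw-html” URL from the documentation. The sketch only builds the request object, it does not send it:

```python
import json
import urllib.request

# The JSON body: "url" tells the API which page to scrape
# (the Amazon URL below is a placeholder, not a real product page)
body = {"url": "https://www.amazon.com/dp/B00EXAMPLE"}

# Endpoint path is illustrative -- use the exact raw-html URL from the docs
req = urllib.request.Request(
    "https://api.scraping-bot.io/scrape/raw-html",
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.method, req.get_full_url())
```

In Postman, the “Body” tab holds the JSON and the method dropdown holds the POST, so all of this is handled through the interface.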

postman body

Now you can click on “Send”, and after a short wait you’ll receive the response from the API below the request part of the screen.

In our example, since we requested the raw HTML, the result is a text response containing all of that page’s HTML content.

postman send

That’s it! This is how easily you can scrape with Postman. You did your first scraping using ScrapingBot without having to type a single line of code.

You can then save the request you just created in Postman in order to use it later. To do this, simply click on the “Save” button. A popup will appear where you can give a name to your request and add it to a collection (a collection is just a group of requests, for example for the same site; here you could create a “ScrapingBot” collection). Then just click “Save” at the bottom.

postman save request

Now you’re ready to add more requests for other websites or to use our other endpoints.

Let’s see a more advanced version of the request using the different options we offer when calling the API. This will allow you to scrape a larger variety of websites with Postman.


Advanced request with options

Adding options to your request does not change much. We only need to modify the JSON body in order to pass those options to the API.

The different options

In the example below, we have added the 3 main options you might need (all of these options are explained in detail on the documentation page):
  • proxyCountry: sets the country from which the request will be made. This can be useful when the website has regional blocking, or to get regional pricing.
  • premiumProxy: tells the API to use premium proxies. These requests are less likely to get blocked by the targeted website, and are often necessary for Amazon. Use only if needed.
  • useChrome: tells the API to load the URL in a real browser. This can be necessary when a website loads its data asynchronously (meaning not all the data is loaded immediately when you arrive on the page, which sometimes happens on retail websites).
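Put together, the advanced body is just the same JSON with these options added. Here is a sketch with placeholder values; the exact nesting under an “options” key is an assumption on our part, so check the documentation page for the structure the API actually expects:

```python
import json

# Same request body as before, now with the three options discussed above.
# The "options" nesting is an assumption for illustration -- verify the
# exact structure on the documentation page before sending.
body = {
    "url": "https://www.example-retailer.com/product/123",  # placeholder URL
    "options": {
        "proxyCountry": "US",  # route the request through a US proxy
        "premiumProxy": True,  # premium proxies, less likely to be blocked
        "useChrome": True,     # render the page in a browser first
    },
}
print(json.dumps(body, indent=2))
```

In Postman, you simply paste this JSON into the “Body” tab, exactly as in the basic example.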


You’ll notice that this time we are calling a different endpoint of our API: “/retail”. This endpoint is very useful for scraping retail product pages: it returns a JSON with all the extracted data, so you don’t have to search through the HTML text to find what you need; we do it for you.

N.B.: if the retail endpoint does not return the data you expect for a particular retail website, please contact us and we’ll look into adding support for that website.

postman advanced options

Finally, let’s see how to use our social media API endpoint. This will help you do scraping with Postman for most social media platforms (LinkedIn, Instagram, Facebook, TikTok).

How to use our social-media endpoints with Postman?

Overall, the concept is the same. The major difference is that for social media, you’ll need 2 different requests to get the result. Let’s get into it.

The first request is similar to the example above: it’s a POST request that sends all the parameters to the social-media API endpoint. Once again, all the options and their possible values for the social-media endpoints are on the documentation page.
postman social media post

You can “Send” this first request. The API will return a response containing a “responseId”. You’ll need it for the second request, which lets us get the final result with the data.

Below is an example of what the second request looks like. This time it’s a “GET” request: we don’t need to add a body, but we do need to add two “Params”.

postman social media get

In the example above, the scraping had already finished when we sent the second request, so we got the data.
It’s possible that the request takes longer to process, in which case you’ll get a different result, like so:

    "status": "pending",
    "message": "Scraping is not finished for this request, try again in a few"


Lastly, if there is an error during the scraping (which can happen for many reasons), the message in the second request’s response will clearly state it. If that happens, you need to restart the scraping process from the first request in order to get a new “responseId”.
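The retry logic described above can be sketched as a small polling loop. Everything below is illustrative: the helper function stands in for the second (GET) request, and the response shape only mirrors the “pending” example shown earlier, not the official API:

```python
import time

# Hypothetical stand-in for the second (GET) request. In reality Postman
# sends it with the two "Params"; here it is stubbed so the sketch runs.
def fetch_result(response_id):
    return {
        "status": "pending",
        "message": "Scraping is not finished for this request, try again in a few",
    }

def poll(response_id, retries=3, delay=0.1):
    """Retry the GET request until scraping is finished or retries run out."""
    for _ in range(retries):
        result = fetch_result(response_id)
        if result.get("status") != "pending":
            return result      # finished (data) or error -- stop polling
        time.sleep(delay)      # wait a bit before asking again
    return result              # still pending after all retries

print(poll("example-response-id")["status"])
```

In Postman you do this by hand: just click “Send” on the second request again until the status is no longer “pending”.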


And this is it. We hope this article helps you use our service without programming skills. You can now do all your scraping with Postman.
If you need any extra help, please feel free to contact us using the contact form.

Happy scraping!

If you don’t already have your ScrapingBot account, it’s time to register for FREE!