
I have the code below that fetches data from an API. It works fine, but it only returns 25 rows. How can I extract all of the data from the API call instead of being limited to 25?

import requests
import pandas as pd
import json


API_KEY = 'YOUR_API_KEY'  # placeholder for your PagerDuty API token



url = 'https://api.pagerduty.com/incidents/'
headers = {
    'Accept': 'application/vnd.pagerduty+json;version=2',
    'Authorization': 'Token token={token}'.format(token=API_KEY)}
r = requests.get(url, headers=headers)
data = r.content
data_dict = json.loads(data)
data_df = pd.DataFrame(data_dict['incidents'])
  • Have you tried using r.json() instead of data = r.content -> json.loads(data)? If the call was successful, requests will always have the full response (unless the API returned partial data). Commented Feb 18, 2020 at 11:37

2 Answers


That is due to pagination in the API response: the call returns 25 incidents by default. If you pass a limit parameter you can get more, but only up to 100 per request. To get all the incidents, loop, incrementing the offset on each call, until more is false.

See the API Reference for the limit parameter.
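A minimal sketch of that loop, assuming the offset/limit/more pagination fields described in the docs (the function names fetch_page and fetch_all_incidents are illustrative, not part of the API):

```python
import requests

API_KEY = 'YOUR_API_KEY'  # placeholder for your PagerDuty token
URL = 'https://api.pagerduty.com/incidents'
HEADERS = {
    'Accept': 'application/vnd.pagerduty+json;version=2',
    'Authorization': 'Token token={token}'.format(token=API_KEY),
}

def fetch_page(offset, limit):
    """Fetch one page of incidents from the API."""
    r = requests.get(URL, headers=HEADERS,
                     params={'offset': offset, 'limit': limit})
    r.raise_for_status()
    return r.json()

def fetch_all_incidents(fetch_page, page_size=100):
    """Collect incidents across pages until the response says there are no more."""
    incidents, offset = [], 0
    while True:
        page = fetch_page(offset, page_size)
        incidents.extend(page['incidents'])
        if not page.get('more'):   # 'more' is false on the last page
            return incidents
        offset += page_size

# incidents = fetch_all_incidents(fetch_page)
# data_df = pd.DataFrame(incidents)
```

Passing fetch_page in as an argument keeps the paging logic separate from the HTTP call, which makes it easy to test without hitting the real API.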


Check the documentation here.

You can pass the limit query parameter, for example:

url = 'https://api.pagerduty.com/incidents?limit=50'

The maximum limit per the docs is 100, so you will have to send multiple GET requests. If there are thousands of records, make sure your Python script sends the GET requests at a specific time interval because of rate limiting; otherwise the API will throw a Too Many Requests error.
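One way to sketch handling that Too Many Requests (HTTP 429) error is an exponential backoff wrapper. The send callable is injected so the retry logic can be tested offline; call_with_backoff and its parameters are illustrative, not part of the PagerDuty API:

```python
import time

def call_with_backoff(send, max_retries=5, base_delay=1.0):
    """Call send() -> (status_code, body); on 429, wait exponentially longer and retry."""
    for attempt in range(max_retries):
        status, body = send()
        if status != 429:          # 429 = Too Many Requests
            return body
        time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
    raise RuntimeError('still rate-limited after %d attempts' % max_retries)

# Hypothetical usage with requests:
# def send():
#     r = requests.get(url, headers=headers)
#     return r.status_code, r.json()
# body = call_with_backoff(send)
```

Combined with the pagination loop, this keeps the script under the rate limit even when fetching thousands of records.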
