I am working on putting my Google BigQuery data into a pandas DataFrame. I can successfully run the code below and print the result set.
from google.cloud import bigquery
from google.oauth2 import service_account
from pandas.io import gbq
credentials = service_account.Credentials.from_service_account_file('My Project Credentials.json')
project_id = 'essential-cairn-253818'
client = bigquery.Client(credentials=credentials, project=project_id)
query = client.query("""
SELECT device.model as model
FROM `my-table-name`
LIMIT 100
""")
results = query.result()
for row in results:
    print("{}".format(row.model))
However, I would like to use the pandas.io.gbq.read_gbq() functionality to put this into a DataFrame. I add the following lines of code and get stuck.
query2 = """
SELECT device.model as model
FROM `my-table-name`
LIMIT 100
"""
results_df = gbq.read_gbq(query2, project_id=project_id, private_key='My Project Credentials.json', dialect='standard')
This produces the error:
TypeError: 'RowIterator' object is not callable
I'm not sure where I'm going wrong. I am following the question seen here: Live data from BigQuery into a Python DataFrame
Can anyone point me in the right direction?
Have a look at the pandas-gbq package: pandas-gbq.readthedocs.io/en/latest. Your other option is to loop through `results` (as you have done), append each row to a list, then convert the list to a pandas DataFrame.
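The loop-and-append fallback from the comment above can be sketched without touching BigQuery at all. This is a minimal, self-contained illustration: the namedtuple and the device names are stand-ins I made up for the Row objects that `query.result()` yields; with real results you would iterate the RowIterator the same way.

```python
from collections import namedtuple

import pandas as pd

# Stand-in for the google.cloud.bigquery Row objects yielded by query.result().
Row = namedtuple("Row", ["model"])

# Fake result set for illustration only; real code would use `results` from
# query.result() instead.
results = [Row("Pixel 3"), Row("iPhone X"), Row("Galaxy S9")]

rows = []
for row in results:
    # Same attribute access as the row.model used in the print loop above.
    rows.append({"model": row.model})

results_df = pd.DataFrame(rows)
print(results_df)
```

With real BigQuery results you can often skip the loop entirely: `query.result()` returns a `RowIterator`, which exposes a `to_dataframe()` method (it requires pandas to be installed alongside `google-cloud-bigquery`).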