I have a very large JSON array of objects that I need to split into smaller arrays and write each one to its own file.
Sample Data
raw = '[{"id":"1","num":"2182","count":-17}{"id":"111","num":"3182","count":-202}{"id":"222","num":"4182","count":12},{"id":"33333","num":"5182","count":12}]'
Desired Output (In this example, split the data in half)
output_file1.json = [{"id":"1","num":"2182","count":-17},{"id":"111","num":"3182","count":-202}]
output_file2.json = [{"id":"222","num":"4182","count":12},{"id":"33333","num":"5182","count":12}]
Current Code
import pandas as pd
import itertools
import json
from itertools import zip_longest

def grouper(iterable, n, fillvalue=None):
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)

raw = '[{"id":"1","num":"2182","count":-17}{"id":"111","num":"3182","count":-202}{"id":"222","num":"4182","count":12},{"id":"33333","num":"5182","count":12}]'

# split the data into manageable chunks + write to files
for i, group in enumerate(grouper(raw, 4)):
    with open('outputbatch_{}.format(i)'.replace("'", "") if False else 'outputbatch_{}.json'.format(i), 'w') as outputfile:
        json.dump(list(group), outputfile)
Current Output of first file "outputbatch_0.json"
["[", "{", "\"", "i"]
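I think what's happening is that grouper is iterating over the raw string character by character rather than over the parsed objects. A quick sanity check (my own test snippet, using the same grouper as above) seems to confirm it:

from itertools import zip_longest

def grouper(iterable, n, fillvalue=None):
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)

# Iterating a str yields single characters, so grouper() batches
# characters, not JSON objects.
print(list(grouper('[{"id"', 4)))
# [('[', '{', '"', 'i'), ('d', '"', None, None)]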
I feel like I'm making this much harder than it needs to be.
The raw string isn't valid JSON (it's missing commas between the objects). Is that the case with your real data, or is it just a typo in the question?
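If the missing commas are just a typo and the real data parses cleanly, here is a minimal sketch of the approach I would try: json.loads the string into a Python list first, then slice the list into fixed-size chunks and dump each chunk to its own file. The chunk size of 2 and the output_file naming are just for this sample:

import json

# Sample data with the commas restored so it parses as a JSON array.
raw = '[{"id":"1","num":"2182","count":-17},{"id":"111","num":"3182","count":-202},{"id":"222","num":"4182","count":12},{"id":"33333","num":"5182","count":12}]'

records = json.loads(raw)  # parse once into a list of dicts
chunk_size = 2             # 2 objects per file for this sample

# Slice the list into chunks and write each chunk as its own JSON array.
for i in range(0, len(records), chunk_size):
    chunk = records[i:i + chunk_size]
    with open('output_file{}.json'.format(i // chunk_size + 1), 'w') as f:
        json.dump(chunk, f)

This sidesteps zip_longest entirely, so there is no None padding to strip from the last file; it just holds however many objects remain.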