
Update: I managed to fix the issue with the help of Jeremy's function, which breaks my data set into chunks of 50. I have posted the final answer below.

I have the following code. The reason I want to break the array into chunks is that the API I am using only allows 50 requests at a time. (I am also a Java developer who is trying to move to Python.) What I want to do is break the array into chunks of 50 and feed them to the API.

I have a text file with a long list of IDs, and based on each ID I read in, I construct the URL.

import simplejson as json
import sys
import urllib
import traceback

# "base" API URL
URL_BASE = 'Some URL'
# set user agent string
urllib.version = "Data Collection Fix it"

page_ids = []

def divide_list(list_, n):
    # yield successive chunks of at most n items from list_
    for i in range(0, len(list_), n):
        yield list_[i:i + n]

def issue_query():
    iFile = open('ReadFromThisFile.txt', "r")
    lines = iFile.readlines()

    # the first whitespace-separated field on each line is a page ID
    for line in lines:
        page_ids.append(line.split()[0])

    File = open("WriteToThisFile.csv", "w")
    for chunk in divide_list(page_ids, 50):
        # build one request URL covering up to 50 IDs
        fiftyIds = []
        url = URL_BASE
        for id in chunk:
            id = str(id).strip()
            url += id + '|'
            fiftyIds.append(id)
        print url
        print len(fiftyIds)

        rv = urllib.urlopen(url)
        j = rv.read().decode("utf-8")
        data = json.loads(j)

        # pull the revision size for each ID out of the batched response
        for id in fiftyIds:
            try:
                s = int(data["query"]["pages"][id]["revisions"][0]["size"])
                sys.stderr.write("%d\t%d\n" % (int(id), s))
                File.write("%d\t%d\n" % (int(id), s))
                # do something interesting with id and s
            except Exception:
                traceback.print_exc()

    File.close()
    iFile.close()

issue_query()

I know many experienced Python developers might downvote me for asking a simple question like this, but I couldn't find any good examples on Google or here. So sorry for any trouble in case I have repeated a question.

Thanks,


4 Answers


Generator version of Jeremy's answer:

def divide_list(list_, n):
    for i in range(0, len(list_), n):
        yield list_[i:i + n]

for chunk in divide_list([1, 2, 3, 4, 5], 2):
    print chunk
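
Note that a generator can only be consumed once; if you need the chunks more than once, or want to index into them, materialize it with list() first. A small sketch:

chunks = list(divide_list([1, 2, 3, 4, 5], 2))
print chunks      # [[1, 2], [3, 4], [5]]
print chunks[0]   # [1, 2]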



There's a recipe in the itertools documentation (which is really worth a read-through, just so you know what is there for when you need it -- and you will need it).

from itertools import izip_longest

def grouper(n, iterable, fillvalue=None):
    "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    return izip_longest(fillvalue=fillvalue, *args)
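
Unlike the divide_list generator above, this recipe pads the final group out to n items with fillvalue, so you may need to strip the padding back out before using it; a small usage sketch:

chunks = list(grouper(2, [1, 2, 3, 4, 5]))
print chunks
# [(1, 2), (3, 4), (5, None)]

# drop the padding from the final group
last = [x for x in chunks[-1] if x is not None]
print last
# [5]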



There's probably a built-in function to do this but I can't think of it.

#!/usr/bin/env python2.7

def divide_list(list_, n):
    """Produces an iterator over subsections of maximum length n of the list."""

    for i in range(0, len(list_), n):
        yield list_[i:i + n]

Example usage:

print(list(divide_list([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], 3)))
# prints: [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11]]

Using it to produce URLs as in your example:

BASE_URL = "http://example.com/blah?ids="
page_ids = range(0, 123)

for indices in divide_list(page_ids, 50):
    url = BASE_URL + "|".join(str(i).strip() for i in indices)
    # then do something with url...
    print(url)

# prints:
# http://example.com/blah?ids=0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23|24|25|26|27|28|29|30|31|32|33|34|35|36|37|38|39|40|41|42|43|44|45|46|47|48|49
# http://example.com/blah?ids=50|51|52|53|54|55|56|57|58|59|60|61|62|63|64|65|66|67|68|69|70|71|72|73|74|75|76|77|78|79|80|81|82|83|84|85|86|87|88|89|90|91|92|93|94|95|96|97|98|99
# http://example.com/blah?ids=100|101|102|103|104|105|106|107|108|109|110|111|112|113|114|115|116|117|118|119|120|121|122

5 Comments

Thanks, so in this case how would I iterate over each element of the results?
I have expanded the example to produce URLs in the format of your example. Is this clear?
Sorry, I should have added my entire code here, but I have now. I am planning to use your example: once I read the IDs from the file into the list, I'll pass that list to the method you explained and iterate over the returned chunks. Thanks.
Hi Jeremy, I have updated the code and used your method, but it looks like I am still doing something wrong. I did the loop a little differently since it's much easier for me to understand. Thanks.
Thanks for all the help, but don't worry, I managed to fix the problem and I have pasted the code which works and spits out exactly what I want in my csv file.

I guess instead of updating my original question I should have posted this as an answer straight away. Hopefully it's not confusing; I have added an update to the question noting that the issue has been solved, and here is how I solved it with the help of Jeremy Banks's function:

import simplejson as json
import sys
import urllib
import traceback

# "base" API URL
URL_BASE = 'Some URL'
# set user agent string
urllib.version = "Data Collection Fix it"

page_ids = []

def divide_list(list_, n):
    # yield successive chunks of at most n items from list_
    for i in range(0, len(list_), n):
        yield list_[i:i + n]

def issue_query():
    iFile = open('ReadFromThisFile.txt', "r")
    lines = iFile.readlines()

    # the first whitespace-separated field on each line is a page ID
    for line in lines:
        page_ids.append(line.split()[0])

    File = open("WriteToThisFile.csv", "w")
    for chunk in divide_list(page_ids, 50):
        # build one request URL covering up to 50 IDs
        fiftyIds = []
        url = URL_BASE
        for id in chunk:
            id = str(id).strip()
            url += id + '|'
            fiftyIds.append(id)
        print url
        print len(fiftyIds)

        rv = urllib.urlopen(url)
        j = rv.read().decode("utf-8")
        data = json.loads(j)

        # pull the revision size for each ID out of the batched response
        for id in fiftyIds:
            try:
                s = int(data["query"]["pages"][id]["revisions"][0]["size"])
                sys.stderr.write("%d\t%d\n" % (int(id), s))
                File.write("%d\t%d\n" % (int(id), s))
                # do something interesting with id and s
            except Exception:
                traceback.print_exc()

    File.close()
    iFile.close()

issue_query()
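
As an aside, the "|".join approach from Jeremy's answer above would also work inside issue_query, and it avoids the trailing '|' that manual string concatenation leaves on the URL. A minimal sketch of the chunk loop rewritten that way, using the same divide_list, URL_BASE, json, and urllib as above:

for chunk in divide_list(page_ids, 50):
    ids = [str(id).strip() for id in chunk]
    url = URL_BASE + "|".join(ids)
    data = json.loads(urllib.urlopen(url).read().decode("utf-8"))
    # ...then process data["query"]["pages"][id] for each id in ids...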

