What is the difference between these two array declarations in Python?
table = [[0]*100]*100
table = numpy.zeros([100,100], int)
They have very little in common. The second is a NumPy 2D array. The first is almost never what you want: it's a list of 100 items, each of which is a reference to the SAME single list of 100 zeros:
table = [[0] * 100] * 100
table[1][0] = 222
print(table[0][0])
This prints 222!
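You can verify the sharing directly. A minimal check (nothing here beyond the `table` built above): every "row" is the very same list object, so there is only one distinct row in memory.

```python
table = [[0] * 100] * 100

# All 100 entries of the outer list point at one inner list.
print(table[0] is table[1])              # True: same object, not a copy
print(len({id(row) for row in table}))   # 1: only one distinct row object
```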
import numpy
table = numpy.zeros([100, 100], int)
table[1][0] = 222
print(table[0][0])
This prints 0!
Well, for one, the first one is dangerously wrong. See this:
In [8]: table = [[0]*2]*10
In [9]: table
Out[9]:
[[0, 0],
[0, 0],
[0, 0],
[0, 0],
[0, 0],
[0, 0],
[0, 0],
[0, 0],
[0, 0],
[0, 0]]
In [10]: table[0][1] = 5
In [11]: table
Out[11]:
[[0, 5],
[0, 5],
[0, 5],
[0, 5],
[0, 5],
[0, 5],
[0, 5],
[0, 5],
[0, 5],
[0, 5]]
It happens because of the way you declared table: the sub-list is not copied ten times, it is shared. The outer list holds ten references to one and the same inner list, so mutating it through any row is visible in every row. See this FAQ for info on doing this correctly.
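For reference, one standard way to do it correctly (this is the list-comprehension form the FAQ describes; the comprehension body runs once per row, so each row is a fresh list):

```python
# Build 100 independent rows of 100 zeros each.
table = [[0] * 100 for _ in range(100)]

table[1][0] = 222
print(table[0][0])  # 0: rows no longer share storage
print(table[1][0])  # 222
```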