From: John H. <jdh...@ac...> - 2005-04-14 21:00:29
>>>>> "Nicholas" == Nicholas Young <su...@su...> writes:
Nicholas> I've attempted to implement this code myself (see
Nicholas> attached patch to src/_image.cpp) but I'm not a regular
Nicholas> c++ or even c programmer so it's fairly likely there
Nicholas> will be memory leaks in the code. For a 1024x2048 array
Nicholas> using the GTKAgg backend and with plenty of memory free
Nicholas> this change results in show() taking <0.7s rather than
Nicholas> >4.6s; if there is a memory shortage and swapping
Nicholas> becomes involved the change is much more noticeable. I
Nicholas> haven't made any decent Python wrapping code yet - but
Nicholas> would be happy to do so if someone familiar with c++
Nicholas> could tidy up my attachment.
Hi Nicholas,
Thanks for the suggestions and patch. I incorporated frombuffer and
have been testing it. I've been testing the performance of frombuffer
vs fromarray, and have seen some 2-3x speedups but nothing like the
numbers you are reporting. [Also, I don't see any detectable memory
leaks so I don't think you have any worries there]
Here is the test script I am using - does this look like a fair test?
You can uncomment report_memory on Unix-like systems to get a memory
report on each pass through the loop, and switch between fromarray and
frombuffer to compare your function with mine.
On a related note, below I'm pasting in a representative section of the
code I am currently using in fromarray for MxNx3 and MxNx4 arrays --
any obvious performance gains to be had here, numerix gurus?
Another suggestion for Nicholas -- perhaps you want to support MxN,
MxNx3, and MxNx4 arrays in your frombuffer function?
And a final question -- how are you getting your function into the
matplotlib image pipeline? Did you alter the image.py
AxesImage.set_data function to test whether A is a buffer object? If
so, you might want to post these changes to the codebase as well.
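[Editor's note: the set_data dispatch John asks about could look roughly like the sketch below. This is purely illustrative -- the function name and return values are hypothetical and do not reflect matplotlib's actual internals.]

```python
# Hypothetical sketch of dispatching on input type, so that a raw RGBA
# byte buffer bypasses the slow float-array conversion path.
# All names here are illustrative, not matplotlib's real API.

def set_data_sketch(A, width=None, height=None):
    """Accept either a float array or a pre-packed RGBA byte buffer."""
    if isinstance(A, bytes):
        # Pre-packed RGBA bytes: buffers carry no shape, so the caller
        # must supply the image dimensions explicitly.
        if width is None or height is None:
            raise ValueError("buffer input requires width and height")
        return ("frombuffer", A, width, height)
    # Anything else: fall back to the existing fromarray-style path.
    return ("fromarray", A)
```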
// some fromarray code
//PyArrayObject *A = (PyArrayObject *) PyArray_ContiguousFromObject(x.ptr(), PyArray_DOUBLE, 2, 3);
PyArrayObject *A = (PyArrayObject *) PyArray_FromObject(x.ptr(), PyArray_DOUBLE, 2, 3);
int rgba = A->dimensions[2] == 4;
double r, g, b, alpha;
int offset = 0;
for (size_t rownum = 0; rownum < imo->rowsIn; rownum++) {
    for (size_t colnum = 0; colnum < imo->colsIn; colnum++) {
        offset = rownum*A->strides[0] + colnum*A->strides[1];
        r = *(double *)(A->data + offset);
        g = *(double *)(A->data + offset + A->strides[2]);
        b = *(double *)(A->data + offset + 2*A->strides[2]);
        if (rgba)
            alpha = *(double *)(A->data + offset + 3*A->strides[2]);
        else
            alpha = 1.0;
        *buffer++ = int(255*r);     // red
        *buffer++ = int(255*g);     // green
        *buffer++ = int(255*b);     // blue
        *buffer++ = int(255*alpha); // alpha
    }
}
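[Editor's note: in answer to the "numerix gurus" question about the per-pixel loop above, the usual answer is to do the scale-and-cast as one vectorized pass. A minimal sketch, written with modern NumPy rather than the Numeric/numerix of the era, so names differ slightly:]

```python
# Vectorized alternative to the per-pixel C++ loop: convert an MxNx3 or
# MxNx4 float array (values in [0, 1]) to packed RGBA bytes in one pass.
import numpy as np

def to_rgba_bytes(A):
    """Return row-major RGBA bytes, one byte per channel."""
    A = np.asarray(A, dtype=np.float64)
    if A.shape[-1] == 3:
        # RGB input: append a fully opaque alpha channel.
        alpha = np.ones(A.shape[:2] + (1,))
        A = np.concatenate([A, alpha], axis=-1)
    # Clip, scale to 0..255, and pack sequentially.
    return (np.clip(A, 0, 1) * 255).astype(np.uint8).tobytes()
```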
## ... and here is the profile script ....
import sys, os, time, gc
from matplotlib._image import fromarray, fromarray2, frombuffer
from matplotlib.numerix.mlab import rand
from matplotlib.numerix import UInt8

def report_memory(i):
    pid = os.getpid()
    a2 = os.popen('ps -p %d -o rss,sz' % pid).readlines()
    print i, ' ', a2[1],
    return int(a2[1].split()[1])

N = 1024
#X2 = rand(N,N)
#X3 = rand(N,N,3)
X4 = rand(N,N,4)

start = time.time()
b4 = (X4*255).astype(UInt8).tostring()
for i in range(50):
    im = fromarray(X4, 0)
    #im = frombuffer(b4, N, N, 0)
    #val = report_memory(i)
end = time.time()
print 'elapsed: %1.3f' % (end-start)
From: Nicholas Y. <su...@su...> - 2005-04-14 15:39:41
Hi,

I'm a fairly heavy user of matplotlib (to plot results from plasma physics simulations), and my use requires the display of fairly large images. Having done some testing, I've discovered (after bypassing anything slow in the Python code) that for large images, where the image size approaches the available memory, the main performance barrier seems to be the conversion of the raw data to the _image.Image class. The way in which the conversion takes place -- with data being taken non-sequentially from many points in a floating point source array and then converted to a 1-byte integer -- is slow, and if swapping becomes involved, even slower.

To overcome this problem I suggest implementing C++ code to allow the creation of the image from a buffer (with each RGBA pixel as 4 bytes) rather than a floating point array. Where image data is being generated elsewhere (in my case in Fortran code), it's trivial to output to a different format, and doing so means that the size of the input data can be significantly smaller and that the data in the source array is accessed sequentially (it's likely that a compiler will also be able to optimise a copy of this data more effectively). The image can then be scaled and overplotted as with any existing image.

I've attempted to implement this code myself (see attached patch to src/_image.cpp), but I'm not a regular C++ or even C programmer, so it's fairly likely there will be memory leaks in the code. For a 1024x2048 array using the GTKAgg backend and with plenty of memory free, this change results in show() taking <0.7s rather than >4.6s; if there is a memory shortage and swapping becomes involved, the change is much more noticeable. I haven't made any decent Python wrapping code yet, but would be happy to do so if someone familiar with C++ could tidy up my attachment.

Hope this is useful to others,
Nicholas Young
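[Editor's note: the buffer layout Nicholas describes -- 4 bytes per pixel (R, G, B, A), written sequentially row by row -- can be sketched in plain Python as below. The function name is illustrative; a Fortran producer would simply emit the same byte layout directly.]

```python
# Sketch of the packed-buffer format: each pixel contributes exactly
# four unsigned bytes (R, G, B, A), concatenated in row-major order.
import struct

def pack_rgba(pixels):
    """pixels: iterable of (r, g, b, a) ints in 0..255 -> bytes."""
    out = bytearray()
    for r, g, b, a in pixels:
        out += struct.pack("4B", r, g, b, a)
    return bytes(out)
```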