From: John H. <jdh...@ac...> - 2006-06-23 13:29:18

>>>>> "Edin" == Edin Salković <edi...@gm...> writes:

Edin> The reason why I used pickle - from the Python docs: =====

I have had bad experiences in the past with pickle files created with
one version that don't load with another. I don't know whether that is
a common problem or whether others have experienced it, but it has made
me wary of them for mpl, where we work across platforms and Python
versions. Maybe this concern is unfounded. I still do not understand
what the downside is of simply creating a dictionary in a Python
module, as we do with latex_to_bakoma.

JDH
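JDH's alternative above — writing the mapping out as importable Python source, the way _mathtext_data was generated — can be sketched as follows. This is a Python 3 sketch; the file name `unitex_data.py` and the tiny mapping are made up for illustration, not mpl's actual ones:

```python
import os
import pprint
import tempfile

def write_dict_module(path, name, mapping):
    """Dump a dict as importable Python source, latex_to_bakoma-style."""
    with open(path, "w") as f:
        f.write("# Automatically generated file; do not edit by hand.\n")
        f.write(f"{name} = \\\n{pprint.pformat(mapping)}\n")

# A tiny hypothetical unicode -> TeX mapping
uni2tex = {0x00D7: r"\times", 0x00F7: r"\div"}

path = os.path.join(tempfile.mkdtemp(), "unitex_data.py")
write_dict_module(path, "uni2tex", uni2tex)

# The generated source round-trips with no pickle-version worries:
namespace = {}
with open(path) as f:
    exec(f.read(), namespace)
assert namespace["uni2tex"] == uni2tex
```

Because the output is ordinary source code, it is diffable, human-auditable, and immune to the cross-version pickle breakage described above.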
From: John H. <jdh...@ac...> - 2006-06-23 13:24:17
>>>>> "Martin" == Martin Spacek <sc...@ms...> writes:
Martin> I suppose I'm a bit jaded towards edges because I tend to
Martin> make histograms and not bar graphs, but we can have it
Martin> both ways.
I can live with that -- did you test your work with the table_demo?
Martin> Sure. Here they are against their latest rev (2508 for
Martin> both). Never done logs before, hope they're alright. What
Martin> do you mean by "the rest"?
"the rest", meaning the work you had already done.
I'm having trouble applying your patch because of the way the file
names are coded. If somebody knows the magic patch command to make it
go through, please commit it. Otherwise, Martin, can you make a patch
with svn diff from the mpl root dir (the one that setup.py lives in)?
Thanks,
JDH
From: <edi...@gm...> - 2006-06-23 09:50:30
On 6/22/06, John Hunter <jdh...@ac...> wrote:
> Since you asked :-)
>
> I may not have mentioned this but the style conventions for mpl code
> are
>
> functions : lower or lower_score_separated
> variables and attributes : lower or lowerUpper
> classes : Upper or MixedUpper
OK
> Also, I am not too fond of the dict of dicts -- why not use variable
> names?
I used a dict of dicts because this allowed me to generate separate
pickle files (for each one of the dicts in the top-level dict) and
anything else (see the final script) by their corresponding top-level
dict name. I thought it was better, for practical/speed reasons, to
have a separate pickle file for every dict.
> for line in file(fname):
>     if line[:2]!=' 0': continue # using continue avoids unnecessary indent
Thanks for the tip!
>     uninum = line[2:6].strip().lower()
>     type1name = line[12:37].strip()
>     texname = line[83:110].strip()
>
>     uninum = int(uninum, 16)
I thought that the idea was to allow users to write unicode strings
directly in TeX (OK, this isn't much of an excuse :). That's why I
used the eval approach, to get the dict keys (or values) to be unicode
strings. I'm also aware that indexing by ints is faster, and that the
underlying FT2 functions work with ints... OK, I'm now convinced that
your approach is better :)
> pickle.dump((uni2type1, type12uni, uni2tex, tex2uni), file('unitex.pcl','w'))
>
> # An example
> unichar = int('00d7', 16)
> print uni2tex.get(unichar)
> print uni2type1.get(unichar)
>
> Also, I am a little hesitant to use pickle files for the final
> mapping. I suggest you write a script that generates the python code
> that contains the dictionaries you need (that is how much of
> _mathtext_data was generated).
The reason why I used pickle - from the Python docs:
=====
Strings can easily be written to and read from a file. Numbers take a
bit more effort, since the read() method only returns strings, which
will have to be passed to a function like int(), which takes a string
like '123' and returns its numeric value 123. However, when you want
to save more complex data types like lists, dictionaries, or class
instances, things get a lot more complicated.
Rather than have users be constantly writing and debugging code to
save complicated data types, Python provides a standard module called
pickle. This is an amazing module that can take almost any Python
object (even some forms of Python code!), and convert it to a string
representation; this process is called pickling. Reconstructing the
object from the string representation is called unpickling. Between
pickling and unpickling, the string representing the object may have
been stored in a file or data, or sent over a network connection to
some distant machine.
=====
So I thought that pickling was the obvious way to go. And, of course,
unpickling with cPickle is very fast. I also think that no human being
should change the automatically generated dicts. Rather, we should put
a separate python file (i.e. _mathtext_manual_data.py) where anybody
who wants to manually override the automatically generated values, or
add new (key, value) pairs, can do so.
The idea:

_mathtext_manual_data.py:
=======
uni2tex = {key1:value1, key2:value2}
tex2uni = {}
uni2type1 = {}
type12uni = {}

uni2tex.py:
=======
from cPickle import load

uni2tex = load(open('uni2tex.pcl'))
try:
    import _mathtext_manual_data
    uni2tex.update(_mathtext_manual_data.uni2tex)
except (TypeError, SyntaxError): # Just these exceptions should be raised
    raise
except: # All other exceptions should be silent
    pass
=====
Finally, I added lines for automatically generating pretty much
everything that can be automatically generated.

stix-tbl2py.py
=======
'''A script for seamlessly copying the data from the stix-tbl.ascii*
file to a set of python dicts. Dicts are then pickled to corresponding
files, for later retrieval.

Currently used table file:
http://www.ams.org/STIX/bnb/stix-tbl.ascii-2005-09-24
'''
import pickle

tablefilename = 'stix-tbl.ascii-2005-09-24'
dictnames = ['uni2type1', 'type12uni', 'uni2tex', 'tex2uni']

dicts = {}
# initialize the dicts
for name in dictnames:
    dicts[name] = {}

for line in file(tablefilename):
    if line[:2] != ' 0': continue
    uninum = int(line[2:6].strip().lower(), 16)
    type1name = line[12:37].strip()
    texname = line[83:110].strip()
    if type1name:
        dicts['uni2type1'][uninum] = type1name
        dicts['type12uni'][type1name] = uninum
    if texname:
        dicts['uni2tex'][uninum] = texname
        dicts['tex2uni'][texname] = uninum

template = '''# Automatically generated file.
from cPickle import load

%(name)s = load(open('%(name)s.pcl'))
try:
    import _mathtext_manual_data
    %(name)s.update(_mathtext_manual_data.%(name)s)
except (TypeError, SyntaxError): # Just these exceptions should be raised
    raise
except: # All other exceptions should be silent
    pass
'''

# pickle the dicts to corresponding .pcl files, and automatically
# generate the .py module files used by importers
for name in dictnames:
    pickle.dump(dicts[name], open(name + '.pcl','w'))
    file(name + '.py','w').write(template % {'name': name})
# An example
from uni2tex import uni2tex
from uni2type1 import uni2type1
unichar = u'\u00d7'
uninum = ord(unichar)
print uni2tex[uninum]
print uni2type1[uninum]
Cheers,
Edin
From: Martin S. <sc...@ms...> - 2006-06-23 06:17:58

Hi John,

John Hunter wrote:
> Most people prefer the center aligning behavior, at least those who
> complained on the list about bar, so when I wrote barh I adopted
> this. I tried to fix bar in the process, but ended up running into
> some bugs when I tested John Gill's table demo, and so left it as edge
> aligned and haven't revisited it since. So my weak preference would
> be to have the two functions consistent and center aligned, but he who
> does the work usually gets the biggest vote. Maybe others can chime
> in.

I suppose I'm a bit jaded towards edges because I tend to make
histograms and not bar graphs, but we can have it both ways. Here's
patch5 (now against the latest axes.py rev 2508). It has everything in
patch4, plus a keyword arg 'align' that lets you choose between
aligning the bars according to their edges (left for vertical bars,
bottom for horizontal bars) or their centers. The default is
align='edge' for both bar() and barh(). Perhaps that should be changed
to 'center' if most people prefer it that way.

Also, the 'horizontal' boolean arg in patch4 has been renamed to
'orientation' and is now a string: either 'vertical' or 'horizontal',
consistent with hist(). I also added the align arg to hist(), which
just passes it on to bar() or barh().

> I was following the convention that the x arg goes first and the y
> second, but I'm not wed to this.

In barh(), Matlab does indeed order the args x, y, but interprets them
as y, x (i.e., position, value), which actually makes sense to me.

> Perhaps you could patch the CHANGELOG and
> API_CHANGES file along with the rest which explains the changes.

Sure. Here they are against their latest rev (2508 for both). Never
done logs before, hope they're alright. What do you mean by "the
rest"?

Cheers,
Martin
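The edge-vs-center choice that Martin's 'align' keyword exposes is just a half-width shift of the positions; a toy sketch of the conversion (a hypothetical helper for illustration, not the actual patch):

```python
def align_edges(positions, widths, align="edge"):
    """Return left (or bottom) edges for bars given their positions.

    align='edge'   : positions already are the left/bottom edges.
    align='center' : positions are bar centers; shift back by half a width.
    """
    if align == "edge":
        return list(positions)
    elif align == "center":
        return [p - w / 2 for p, w in zip(positions, widths)]
    raise ValueError(f"align must be 'edge' or 'center', got {align!r}")

# Centers 1, 2, 3 with width 0.8 become left edges 0.6, 1.6, 2.6
print(align_edges([1, 2, 3], [0.8, 0.8, 0.8], align="center"))
```

Converting in this one place keeps the rest of the drawing code working purely in edge coordinates, whichever alignment the caller asked for.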
From: Eric F. <ef...@ha...> - 2006-06-22 22:44:16

John Hunter wrote:
>>>>>> "Eric" == Eric Firing <ef...@ha...> writes:
>
> Eric> Where breakage will occur is any place in user code that
> Eric> expects the collection segments or vertices to be lists of
> Eric> tuples and tries to append to the list, for example. I
> Eric> don't know of any way to make the move towards use of arrays
> Eric> without this problem cropping up; I hope it is considered
> Eric> tolerable.
>
> If I'm understanding you correctly: Users who create the collection
> themselves with the list of xy tuples approach can still modify their
> lists, eg with append, and not have breakage (I actually use this
> feature). But users who are getting collections back from code like
> contour will get the non-modifiable array version.

John,

Yes, that is the way it is supposed to work. The collection stores and
uses whichever form it is given. This can work because the XY array is
very similar to the [(x,y), (x,y)...] form; both are single objects,
and they behave the same when one says, "for xy in XY:", or if one
calls the array constructor with either as an argument.

Eric
From: John H. <jdh...@ac...> - 2006-06-22 21:08:39
>>>>> "Eric" == Eric Firing <ef...@ha...> writes:
Eric> Where breakage will occur is any place in user code that
Eric> expects the collection segments or vertices to be lists of
Eric> tuples and tries to append to the list, for example. I
Eric> don't know of any way to make the move towards use of arrays
Eric> without this problem cropping up; I hope it is considered
Eric> tolerable.
If I'm understanding you correctly: Users who create the collection
themselves with the list of xy tuples approach can still modify their
lists, eg with append, and not have breakage (I actually use this
feature). But users who are getting collections back from code like
contour will get the non-modifiable array version.
From: Eric F. <ef...@ha...> - 2006-06-22 20:05:19

I have committed a set of changes to _transforms, collections, quiver,
contour, and numerix as part of a move toward taking advantage of the
efficiency of numerix arrays in place of sequences of tuples. The
changes are outlined very briefly in CHANGELOG and API_CHANGES.

Changes in clabel are hacks, and may make it less efficient rather than
more, but I expect this to be temporary; I needed to simply make it
work with the other changes.

Where breakage will occur is any place in user code that expects the
collection segments or vertices to be lists of tuples and tries to
append to the list, for example. I don't know of any way to make the
move towards use of arrays without this problem cropping up; I hope it
is considered tolerable.

Eric
From: John H. <jdh...@ac...> - 2006-06-22 14:43:32
>>>>> "Edin" == Edin Salković <edi...@gm...> writes:

Edin> I finally solved the problem of automatically generating the
Edin> dicts for unicode <-> TeX conversion. This is the first step
Edin> in enabling unicode support in mathtext.

Excellent.

Edin> The STIX project is useful after all ;) They keep a nice
Edin> table of Unicode symbols at:
Edin> http://www.ams.org/STIX/bnb/stix-tbl.ascii-2005-09-24

Edin> Any comments about the script are appreciated :). Now I'll

Since you asked :-)

I may not have mentioned this, but the style conventions for mpl code
are

    functions : lower or lower_score_separated
    variables and attributes : lower or lowerUpper
    classes : Upper or MixedUpper

Also, I am not too fond of the dict of dicts -- why not use variable
names? Here is my version:

import pickle

fname = 'stix-tbl.ascii-2005-09-24'

uni2type1 = dict()
type12uni = dict()
uni2tex = dict()
tex2uni = dict()

for line in file(fname):
    if line[:2]!=' 0': continue # using continue avoids unnecessary indent

    uninum = line[2:6].strip().lower()
    type1name = line[12:37].strip()
    texname = line[83:110].strip()

    uninum = int(uninum, 16)
    if type1name:
        uni2type1[uninum] = type1name
        type12uni[type1name] = uninum
    if texname:
        uni2tex[uninum] = texname
        tex2uni[texname] = uninum

pickle.dump((uni2type1, type12uni, uni2tex, tex2uni), file('unitex.pcl','w'))

# An example
unichar = int('00d7', 16)
print uni2tex.get(unichar)
print uni2type1.get(unichar)

Also, I am a little hesitant to use pickle files for the final
mapping. I suggest you write a script that generates the python code
that contains the dictionaries you need (that is how much of
_mathtext_data was generated).

Thanks,
JDH
From: <edi...@gm...> - 2006-06-22 13:51:43

I finally solved the problem of automatically generating the dicts for
unicode <-> TeX conversion. This is the first step in enabling unicode
support in mathtext.

The STIX project is useful after all ;) They keep a nice table of
Unicode symbols at:
http://www.ams.org/STIX/bnb/stix-tbl.ascii-2005-09-24

Any comments about the script are appreciated :). Now I'll dig a bit
deeper into the font classes to fix them to support unicode.

'''A script for seamlessly copying the data from the stix-tbl.ascii*
file to a set of python dicts. Dicts are then pickled to corresponding
files, for later retrieval.

Currently used table file:
http://www.ams.org/STIX/bnb/stix-tbl.ascii-2005-09-24
'''
import pickle

table_filename = 'stix-tbl.ascii-2005-09-24'
dict_names = ['uni2type1', 'type12uni', 'uni2tex', 'tex2uni']

dicts = {}
# initialize the dicts
for name in dict_names:
    dicts[name] = {}

for line in file(table_filename):
    if line[:2] == ' 0':
        uni_num = eval("u'\\u" + line[2:6].strip().lower() + "'")
        type1_name = line[12:37].strip()
        tex_name = line[83:110].strip()
        if type1_name:
            dicts['uni2type1'][uni_num] = type1_name
            dicts['type12uni'][type1_name] = uni_num
        if tex_name:
            dicts['uni2tex'][uni_num] = tex_name
            dicts['tex2uni'][tex_name] = uni_num

for name in dict_names:
    pickle.dump(dicts[name], open(name + '.pcl','w'))

# An example
uni_char = u'\u00d7'
print dicts['uni2tex'][uni_char]
print dicts['uni2type1'][uni_char]

# Testing of results; feel free to unquote
# _mathtext_data.py can be found in the matplotlib dir
#~ from _mathtext_data import latex_to_bakoma
#~ supported = 0
#~ unsupported = 0
#~ for tex_symbol in latex_to_bakoma:
#~     try:
#~         print tex_symbol, dicts['tex2uni'][tex_symbol]
#~         supported += 1
#~     except KeyError:
#~         unsupported += 1
#~ print supported, unsupported
From: John H. <jdh...@ac...> - 2006-06-22 11:13:25
>>>>> "Martin" == Martin Spacek <sc...@ms...> writes:
Hey martin, thanks for all these changes.
Martin> to inconsistent behaviour: barh() draws bars vertically
Martin> centered on the y values (ala matlab 6.0), while bar()
Martin> draws bars aligned according to their left edge (not ala
Martin> matlab). I prefer the edge aligning behaviour. It's easy
Martin> to convert from one behaviour to the other, but I had to
Martin> duplicate all the error checking code before conversion,
Martin> which bloated it back up.
Most people prefer the center aligning behavior, at least those who
complained on the list about bar, so when I wrote barh I adopted
this. I tried to fix bar in the process, but ended up running into
some bugs when I tested John Gill's table demo, and so left it as edge
aligned and haven't revisited it since. So my weak preference would
be to have the two functions consistent and center aligned, but he who
does the work usually gets the biggest vote. Maybe others can chime
in.
Martin> And lastly... I find it odd that barh() has the width and
Martin> bottom args (formerly x and y) in that order: barh(width,
Martin> bottom). The general matlab convention is that the first
Martin> argument is the positions, and the second arg is the
Martin> values. So it would make more sense to me to have
Martin> barh(bottom, width). That way, you could switch back and
Martin> forth between bar() and barh() and get the expected
Martin> behaviour without having to switch around the
Martin> arguments. In fact, that's exactly how barh() in matlab 6
Martin> interprets the first two arguments: arg1 is the vertical
Martin> positions, and arg2 is the lengths of the bars at those
Martin> positions. Same goes for matlab's bar() function. As it is
Martin> now in matplotlib, the first and second arguments are
Martin> interpreted differently for bar() and barh()
I was following the convention that the x arg goes first and the y
second, but I'm not wed to this. I don't mind breaking existing code
if this order seems more natural, and since we are mostly emulating
the matlab conventions in bar and barh, it makes some sense to strive
for consistency. Perhaps you could patch the CHANGELOG and
API_CHANGES file along with the rest which explains the changes.
JDH
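The consistency argument above — barh(bottom, width) mirroring bar(left, height) so the two are interchangeable — can be illustrated with a toy delegation, where barh() reuses bar()'s rectangle logic and transposes the result. This is a sketch of the idea only, not mpl's actual code:

```python
def bar(left, height, width=0.8, bottom=0):
    """Toy stand-in: return the (x, y, w, h) rectangles vertical bars span."""
    return [(l, bottom, width, h) for l, h in zip(left, height)]

def barh(bottom, width, height=0.8, left=0):
    """Horizontal bars, positions first and values second like bar().

    Delegate to bar() with the roles swapped, then transpose each
    (x, y, w, h) rectangle into (y, x, h, w) to flip the orientation.
    """
    return [(y, x, h, w) for (x, y, w, h) in bar(bottom, width, height, left)]

# Same call shape either way: positions [0, 1], values [5, 3]
print(bar([0, 1], [5, 3]))
print(barh([0, 1], [5, 3]))
```

With this layout a caller can swap bar() for barh() without reordering arguments, which is exactly the property being argued for in the thread.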
From: Martin S. <sc...@ms...> - 2006-06-22 11:02:43

Well, I seem to have really dived into this. Here are 4 different
patches against the latest svn of axes.py (rev 2495). Note that the
rest of my install is the 0.87.3 release (I had to copy over quiver.py
to get the latest axes.py to work).

patch1 has the following changes to bar() and barh():

- fixed ignoring the rcParams['patch.facecolor'] for bar color: the
  default value for the color arg is now None, and the Patch class is
  left to handle fetching rcParams['patch.facecolor']
- set the default error bar color to None, so that errorbar() can
  handle fetching rcParams['lines.color']
- added an edgecolor keyword arg
- left and height can now both be scalars in bar(); same goes for x and
  y in barh(). Previously, this raised a TypeError upon testing their
  lengths. Code that preventively checked for this in barh() (but not
  in bar()) has been removed.
- fixed a bug where patches would be cleared when error bars were
  plotted if rcParams['axes.hold'] was False
- it looks like the code for barh() was copied from bar(), with some of
  the args renamed. There was an error in the color checking code in
  barh() where len(left) from bar() hadn't been properly renamed to
  len(x)
- found one or two changes that had been made to bar() that hadn't been
  propagated to barh(), or vice versa
- rearranged the order of some code segments so that they follow the
  order of the arguments
- updated the docstrings

Hopefully I haven't introduced any new bugs.

patch2 has everything in patch1, except it removes some code
duplication by calling bar() from within barh(). I thought this would
be a good idea, since it's easy to make a change in bar() and forget to
do the same in barh(). It turns out that this takes up almost as many
lines of code as having two independent functions, but this is only due
to inconsistent behaviour: barh() draws bars vertically centered on the
y values (ala matlab 6.0), while bar() draws bars aligned according to
their left edge (not ala matlab). I prefer the edge aligning behaviour.
It's easy to convert from one behaviour to the other, but I had to
duplicate all the error checking code before conversion, which bloated
it back up. So...

patch3 has everything in patch2, but renames the x and y args in barh()
to width and bottom respectively. This makes barh() draw bars
vertically aligned to their bottom edge, consistent with bar()'s
behaviour. Also, this makes hist(orientation='horizontal') do the same,
which makes it consistent with hist(orientation='vertical'). Finally,
it removes the code bloat mentioned above. However, it'll break any
existing code that relies on x or y as named args in barh(), or code
that expects barh() bars to be vertically centered on their y values.

And lastly... I find it odd that barh() has the width and bottom args
(formerly x and y) in that order: barh(width, bottom). The general
matlab convention is that the first argument is the positions, and the
second arg is the values. So it would make more sense to me to have
barh(bottom, width). That way, you could switch back and forth between
bar() and barh() and get the expected behaviour without having to
switch around the arguments. In fact, that's exactly how barh() in
matlab 6 interprets the first two arguments: arg1 is the vertical
positions, and arg2 is the lengths of the bars at those positions. Same
goes for matlab's bar() function. As it is now in matplotlib, the first
and second arguments are interpreted differently for bar() and barh().

I don't know if anyone agrees with this change, but patch4 has all of
the changes in patch3, plus the order of the width and bottom args are
switched in barh(). This of course will break existing code that
depends on this order. I had to modify the barh() call in
hist(orientation='horizontal') to reflect this. I couldn't find any
other barh() call in matplotlib. For consistency, I also switched the
order of the yerr and xerr args, but these have default values and are
usually passed as keyword args, so this shouldn't break (much) code.

The patches are numbered in increasing order of preference. They look
rather big (and I'm not sure if my file compare util is bug-free). If
there seem to be problems with them, I can provide the full axes.py
file that corresponds to each patch.

Cheers,
Martin
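The scalar-args change in patch1 (left and height may now be scalars) amounts to promoting scalars against sequences to a common length before pairing positions with values. A toy pure-Python sketch of that promotion — illustrative only, not the actual patch:

```python
def broadcast_scalars(*args):
    """Promote scalar args to lists of one common length.

    Sequences set the length; scalars are repeated to match, which is
    what bar()/barh() need before zipping positions with values.
    """
    seqs = [a for a in args if isinstance(a, (list, tuple))]
    n = len(seqs[0]) if seqs else 1
    if any(len(s) != n for s in seqs):
        raise ValueError("sequence arguments must share one length")
    return [list(a) if isinstance(a, (list, tuple)) else [a] * n
            for a in args]

# A scalar bottom paired with a sequence of heights:
left, height = broadcast_scalars(0.0, [3, 1, 4])
print(left)    # repeated to match the sequence length
print(height)
```

Doing the promotion once, up front, removes the need for the scattered length checks the original bar()/barh() code carried.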
From: Eric F. <ef...@ha...> - 2006-06-19 18:40:38
John Hunter wrote:
>>>>>>"Eric" == Eric Firing <ef...@ha...> writes:
>
>
> Eric> Based on a quick look, I think it would be easy to make
> Eric> LineCollection and PolyCollection accept a numerix array in
> Eric> place of [(x,y), (x,y), ...] for each line segment or
> Eric> polygon; specifically, this could replaced by an N x 2
> Eric> array, where the first column would be x and the second
> Eric> would be y. Backwards compatibility could be maintained
> Eric> easily. This would eliminate quite a bit of useless
> Eric> conversion back and forth among lists, tuples, and arrays.
> Eric> As it is, each sequence of sequences is converted to a pair
> Eric> of arrays in backend_bases, and typically it started out as
> Eric> either a 2-D numerix array or a pair of 1-D arrays in the
> Eric> code that is calling the collection constructor.
>
> I think this is a useful enhancement. I would think that representing
> each segment as (x,y) where x and y are 1D arrays, might be slightly
> more natural than using an Nx2 but others may disagree.
John,
I have been working on this and I can probably commit something in the
next few days. I have been pursuing the Nx2 representation for the
following reasons:
1) It is highly compatible with the present sequence of tuples, so that
the two representations can coexist peacefully:
a = [(1,2), (3,4), (5,6)] # present style
aa = numerix.array(a) # new style
In most places, a and aa work the same with no change to the code. The
exception is where code does something like "a.append(b)". This occurs
in the contour labelling code. I haven't fixed it yet, but I don't see
any fundamental problem in doing so.
2) The Nx2 representation streamlines code because it involves one 2-D
object, "XY", in place of two 1-D objects, X and Y. This also
eliminates the need to check that the lengths of X and Y match.
Logically, X and Y must go together, so why not keep them glued together
in a single array?
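Eric's compatibility claim is easy to check with modern numpy standing in for numerix (a sketch; the sample coordinates are illustrative):

```python
import numpy as np

# Present style: a segment as a list of (x, y) tuples
a = [(1, 2), (3, 4), (5, 6)]
# New style: the same segment as an N x 2 array
aa = np.array(a)

# Both behave identically under the idioms collections rely on:
for xy, XY in zip(a, aa):
    x1, y1 = xy            # tuple unpacking
    x2, y2 = XY            # row unpacking works the same way
    assert (x1, y1) == (x2, y2)

# ...and either form is a valid argument to the array constructor:
assert np.array_equal(np.array(a), np.array(aa))

# The one place they differ is mutation -- lists append, arrays don't:
a.append((7, 8))           # fine on the list form
# aa.append(...)           # would raise AttributeError on the ndarray
```

This is exactly the breakage boundary described: iteration and construction carry over unchanged, while code that appends to segments (as in the contour labelling code) needs rework.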
Because of the compatibility, there is very little code that actually
has to be changed to support the numerix array. There is a potential
for breakage of user code, however. This is a concern. I don't know of
any way of eliminating it entirely while retaining the efficiency
benefits of using numerix arrays when possible. One thing that might
help is to have the transform seq_xy_tups method handle both input
forms, and return the form corresponding to the input. I can do this; I
now have a transform method that handles both "a" and "aa", but
presently it returns a numerix array in either case.
The optimization you describe below sounds good, but I want to finish
stage 1, above, first.
Eric
>
> How often does it come up that we want a homogeneous line collection,
> ie a bunch of lines segments with the same properties (color,
> linewidth...)? The most expensive part of the agg line collection
> renderer is probably the multiple calls to render_scanlines, which is
> necessary every time we change the linewidth or color.
>
> If all of the lines in a collection shared the same properties, we
> could draw the entire path with a combination of lineto/moveto, and
> just stroke and render it once (agg has an upper limit on path length
> though, since at some point I added the following to draw_lines
>
> if ((i%10000)==0) {
> //draw the path in chunks
> _render_lines_path(path, gc);
> path.remove_all();
> path.move_to(thisx, thisy);
> }
>
> Ie I render it every 10000 points.
>
> Actually, as I type this I realize the case of homogeneous lines (and
> polys) can be handled by the backend method "draw_path". One
> possibility is for the LineCollection to detect the homogeneous case
> len(linewidths)==1 and len(colors)==1 and call out to draw_path
> instead of draw_line_collection (the same could be done for a regular
> poly collection). Some extra extension code would probably be
> necessary to build the path efficiently from numerix arrays, and to
> handle the "chunking" problem to avoid extra long paths, but for
> certain special cases (scatters and quiver w/o color mapping) it would
> probably be a big win. The downside is that not all backend implement
> draw_paths, but the Collection front-end could detect this and fall
> back on the old approach if draw_paths is not implemented.
>
> JDH
From: John H. <jdh...@ac...> - 2006-06-19 13:13:55
>>>>> "Martin" == Martin Spacek <sc...@ms...> writes:
Martin> Don't know if this is the best way, but here's a solution:
Martin> def bar(self, left, height, width=0.8, bottom=0,
Martin> color=matplotlib.rcParams['patch.facecolor'], yerr=None,
Martin> xerr=None, ecolor=matplotlib.rcParams['patch.edgecolor'],
Martin> capsize=3 ):
Hey Martin,
We don't put the rc defaults in the function declaration because these
are evaluated only once, at module load time, which prevents users
from being able to change the defaults after the module is loaded. So
we use this idiom
def somefunc(edgecolor=None):
    if edgecolor is None: edgecolor = rcParams['patch.edgecolor']
If you'd like to submit a patch for bar and barh, that'd be great.
Thanks,
JDH
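The evaluated-once behavior JDH describes can be demonstrated in a few lines (a Python 3 sketch with a stand-in rcParams dict, not matplotlib's):

```python
rcParams = {"patch.edgecolor": "k"}

# Default evaluated once, at def time: later rcParams edits are invisible.
def bad(edgecolor=rcParams["patch.edgecolor"]):
    return edgecolor

# The mpl idiom: resolve the default at call time instead.
def good(edgecolor=None):
    if edgecolor is None:
        edgecolor = rcParams["patch.edgecolor"]
    return edgecolor

rcParams["patch.edgecolor"] = "r"   # user changes the default later
print(bad())    # still 'k' -- the stale def-time value
print(good())   # 'r'  -- picks up the change
```

This is the same reason mutable default arguments are avoided in Python generally: the default expression runs exactly once, when the module is imported.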
From: Martin S. <sc...@ms...> - 2006-06-19 10:30:29
I've noticed that the rcParams settings for patch.facecolor and
patch.edgecolor are ignored by bar() and barh() (and therefore hist()),
always displaying as blue and black, respectively. Is this intentional?
I'm running matplotlib 0.87.3.

The culprit:

def bar(self, left, height, width=0.8, bottom=0,
        color='b', yerr=None, xerr=None, ecolor='k', capsize=3
        ):

Don't know if this is the best way, but here's a solution:

def bar(self, left, height, width=0.8, bottom=0,
        color=matplotlib.rcParams['patch.facecolor'],
        yerr=None, xerr=None,
        ecolor=matplotlib.rcParams['patch.edgecolor'], capsize=3
        ):

Similar situation for barh().
Cheers,
Martin
From: <Jan...@ga...> - 2006-06-19 06:37:55

Hi - this is my first post to such a list, so bear with me.

I've just installed mpl3d and have had success with the examples shown
at http://www.scipy.org/Cookbook/Matplotlib/mplot3D

We currently don't have numpy installed and are using the older
Numeric, so I used the following instead:

N = 100
x = zeros((N,N),Float)
y = zeros((N,N),Float)
z = zeros((N,N),Float)
u = arange(0,2*pi,2.*pi/N)
v = arange(0,2*pi,2.*pi/N)

for i in range(N):
    for j in range(N):
        x[i,j] = cos(u[i])*sin(v[j])
        y[i,j] = sin(u[i])*sin(v[j])
        z[i,j] = cos(v[j])

fig = p.figure()
ax = p3.Axes3D(fig)
ax.plot_surface(x,y,z)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
fig.add_axes(ax)
p.show()
p.savefig('surfacetest')
p.close()

which worked a treat (apart from the figure not closing on the first
instance ...).

However, if I change N to 10, I get the following error message:

Traceback (most recent call last):
  File "test.py", line 47, in ?
    ax.plot_surface(x,y,z)
  File "c:\Python24\lib\site-packages\mpl3d\mplot3d.py", line 921, in plot_surface
    norm = normalize(min(shade),max(shade))
ValueError: min() arg is an empty sequence

It seems that if the number of columns or rows is less than 20, then
rstride and cstride = 0. This means that the boxes required to make the
polygons in the surface plot won't be constructed. However, you can get
a 3D plot if you use plot_wireframe or plot3D instead with N = 10 (but
these plots aren't quite as nice as the surface plot would be).

Is there a minimum size of the arrays which plot_surface will work on?
Is there a workaround for smaller examples? I'm looking at plotting a
(smallish) number of time series solutions as a surface.

Cheers,
Jane.

Dr Jane Sexton
Risk Research Group
Geospatial and Earth Monitoring Division
Geoscience Australia
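Assuming mpl3d derives rstride/cstride by integer-dividing the array dimensions (consistent with the stride going to zero below N = 20, as Jane observes), clamping the stride to at least 1 would avoid the empty-sequence error. This helper is hypothetical, not mpl3d's actual code:

```python
def safe_stride(n_rows, n_cols, target=20):
    """Pick row/column strides for a surface mesh.

    Integer division of a small array size by the target can hit zero,
    which would yield no polygons at all; clamp to at least 1 so every
    array size produces a drawable mesh.
    """
    rstride = max(1, n_rows // target)
    cstride = max(1, n_cols // target)
    return rstride, cstride

print(safe_stride(100, 100))  # coarsened mesh: every 5th row/column
print(safe_stride(10, 10))    # (1, 1) instead of the fatal (0, 0)
```

With such a guard, small arrays simply get an unstrided (full-resolution) mesh rather than an empty shade list.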
From: Gary R. <gr...@bi...> - 2006-06-18 00:41:48
|
Hi Edin, Edin Salković wrote: > Hi all, > <snip> > Also, if anyone has some good online sources about parsing etc. on the > net, I vwould realy appreciate it. Everything that David Mertz wrote about text processing in his excellent "Charming Python" articles: <http://gnosis.cx/publish/tech_index_cp.html> and "Building Recursive Descent Parsers with Python": http://www.onlamp.com/pub/a/python/2006/01/26/pyparsing.html If you were after a book on the subject, Mertz's book "Text Processing in Python" <http://gnosis.cx/TPiP/> would be an obvious choice or you could pick up any book about writing compilers. Gary R. |
From: Helge A. <av...@bc...> - 2006-06-16 05:26:06
On 6/15/06, John Hunter <jdh...@ac...> wrote:
> How often does it come up that we want a homogeneous line collection,
> ie a bunch of lines segments with the same properties (color,
> linewidth...)?

Hi,

For b&w PS publication-quality plotting, this must be a common thing to
draw; contour lines, vectors, xy plots, the axes, tick marks, even fonts
can all be constructed from disjoint line segments, no?

If matplotlib could pass numerix arrays more or less directly to gtk, it
could perhaps also become the speed king of plotting packages :)

Helge
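To illustrate the point that ticks and the like are just disjoint segments, here is a hypothetical helper (not matplotlib code) that builds tick marks in the [((x0, y0), (x1, y1)), ...] form a homogeneous line collection would take:

```python
def tick_segments(xticks, y=0.0, length=0.05):
    """Build one short vertical segment per tick position, in the
    [((x0, y0), (x1, y1)), ...] form a line collection accepts.
    All segments share the same style, so they could be stroked in
    a single pass by a homogeneous collection."""
    return [((x, y), (x, y - length)) for x in xticks]

segs = tick_segments([0.0, 0.5, 1.0])
print(segs)  # three disjoint two-point segments
```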
From: <edi...@gm...> - 2006-06-15 22:28:39
Hi all,

Is it that the code in the mathtext module looks ugly or is it just me
not understanding it?
Also, if anyone has some good online sources about parsing etc. on the
net, I vwould realy appreciate it.

Considering the folowing code (picked on random, from mathtext.py)

===
def math_parse_s_ft2font(s, dpi, fontsize, angle=0):
    """
    Parse the math expression s, return the (bbox, fonts) tuple needed
    to render it.

    fontsize must be in points

    return is width, height, fonts
    """

    major, minor1, minor2, tmp, tmp = sys.version_info
    if major==2 and minor1==2:
        raise SystemExit('mathtext broken on python2.2.  We hope to get this fixed soon')

    cacheKey = (s, dpi, fontsize, angle)
    s = s[1:-1]  # strip the $ from front and back
    if math_parse_s_ft2font.cache.has_key(cacheKey):
        w, h, bfonts = math_parse_s_ft2font.cache[cacheKey]
        return w, h, bfonts.fonts.values()

    bakomaFonts = BakomaTrueTypeFonts()
    Element.fonts = bakomaFonts
    handler.clear()
    expression.parseString( s )

    handler.expr.set_size_info(fontsize, dpi)

    # set the origin once to allow w, h computation
    handler.expr.set_origin(0, 0)
    xmin = min([e.xmin() for e in handler.symbols])
    xmax = max([e.xmax() for e in handler.symbols])
    ymin = min([e.ymin() for e in handler.symbols])
    ymax = max([e.ymax() for e in handler.symbols])

    # now set the true origin - doesn't affect width and height
    w, h = xmax-xmin, ymax-ymin
    # a small pad for the canvas size
    w += 2
    h += 2

    handler.expr.set_origin(0, h-ymax)

    Element.fonts.set_canvas_size(w,h)
    handler.expr.render()
    handler.clear()

    math_parse_s_ft2font.cache[cacheKey] = w, h, bakomaFonts
    return w, h, bakomaFonts.fonts.values()

math_parse_s_ft2font.cache = {}
====

I don't understand, for example, what does the statement:

expression.parseString( s )

do?

"expression" is defined globaly, and is called (that is - its method)
only once in the above definition of the function, but I don't
understand - what does that particular line do?!?

------
Regarding the unicode support in mathtext, mathtext currently uses the
folowing dictionary for getting the glyph info out of the font files:

latex_to_bakoma = {

    r'\oint'                : ('cmex10',  45),
    r'\bigodot'             : ('cmex10',  50),
    r'\bigoplus'            : ('cmex10',  55),
    r'\bigotimes'           : ('cmex10',  59),
    r'\sum'                 : ('cmex10',  51),
    r'\prod'                : ('cmex10',  24),
...
}

I managed to build the following dictionary (little more left to be done):
tex_to_unicode = {
r'\S' : u'\u00a7',
r'\P' : u'\u00b6',
r'\Gamma' : u'\u0393',
r'\Delta' : u'\u0394',
r'\Theta' : u'\u0398',
r'\Lambda' : u'\u039b',
r'\Xi' : u'\u039e',
r'\Pi' : u'\u03a0',
r'\Sigma' : u'\u03a3',
r'\Upsilon' : u'\u03a5',
r'\Phi' : u'\u03a6',
r'\Psi' : u'\u03a8',
r'\Omega' : u'\u03a9',
r'\alpha' : u'\u03b1',
r'\beta' : u'\u03b2',
r'\gamma' : u'\u03b3',
r'\delta' : u'\u03b4',
r'\varepsilon' : u'\u03b5',
r'\zeta' : u'\u03b6',
r'\eta' : u'\u03b7',
r'\vartheta' : u'\u03b8',
r'\iota' : u'\u03b9',
r'\kappa' : u'\u03ba',
r'\lambda' : u'\u03bb',
r'\mu' : u'\u03bc',
r'\nu' : u'\u03bd',
r'\xi' : u'\u03be',
r'\pi' : u'\u03c0',
r'\varrho' : u'\u03c1',
r'\varsigma' : u'\u03c2',
r'\sigma' : u'\u03c3',
r'\tau' : u'\u03c4',
r'\upsilon' : u'\u03c5',
r'\varphi' : u'\u03c6',
r'\chi' : u'\u03c7',
r'\psi' : u'\u03c8',
r'\omega' : u'\u03c9',
r'\ell' : u'\u2113',
r'\wp' : u'\u2118',
r'\Omega' : u'\u2126',
r'\Re' : u'\u211c',
r'\Im' : u'\u2111',
r'\aleph' : u'\u05d0',
r'\aleph' : u'\u2135',
r'\spadesuit' : u'\u2660',
r'\heartsuit' : u'\u2661',
r'\diamondsuit' : u'\u2662',
r'\clubsuit' : u'\u2663',
r'\flat' : u'\u266d',
r'\natural' : u'\u266e',
r'\sharp' : u'\u266f',
r'\leftarrow' : u'\u2190',
r'\uparrow' : u'\u2191',
r'\rightarrow' : u'\u2192',
r'\downarrow' : u'\u2193',
r'\Rightarrow' : u'\u21d2',
r'\Leftrightarrow' : u'\u21d4',
r'\leftrightarrow' : u'\u2194',
r'\updownarrow' : u'\u2195',
r'\forall' : u'\u2200',
r'\exists' : u'\u2203',
r'\emptyset' : u'\u2205',
r'\Delta' : u'\u2206',
r'\nabla' : u'\u2207',
r'\in' : u'\u2208',
r'\ni' : u'\u220b',
r'\prod' : u'\u220f',
r'\coprod' : u'\u2210',
r'\sum' : u'\u2211',
r'-' : u'\u2212',
r'\mp' : u'\u2213',
r'/' : u'\u2215',
r'\ast' : u'\u2217',
r'\circ' : u'\u2218',
r'\bullet' : u'\u2219',
r'\propto' : u'\u221d',
r'\infty' : u'\u221e',
r'\mid' : u'\u2223',
r'\wedge' : u'\u2227',
r'\vee' : u'\u2228',
r'\cap' : u'\u2229',
r'\cup' : u'\u222a',
r'\int' : u'\u222b',
r'\oint' : u'\u222e',
r':' : u'\u2236',
r'\sim' : u'\u223c',
r'\wr' : u'\u2240',
r'\simeq' : u'\u2243',
r'\approx' : u'\u2248',
r'\asymp' : u'\u224d',
r'\equiv' : u'\u2261',
r'\leq' : u'\u2264',
r'\geq' : u'\u2265',
r'\ll' : u'\u226a',
r'\gg' : u'\u226b',
r'\prec' : u'\u227a',
r'\succ' : u'\u227b',
r'\subset' : u'\u2282',
r'\supset' : u'\u2283',
r'\subseteq' : u'\u2286',
r'\supseteq' : u'\u2287',
r'\uplus' : u'\u228e',
r'\sqsubseteq' : u'\u2291',
r'\sqsupseteq' : u'\u2292',
r'\sqcap' : u'\u2293',
r'\sqcup' : u'\u2294',
r'\oplus' : u'\u2295',
r'\ominus' : u'\u2296',
r'\otimes' : u'\u2297',
r'\oslash' : u'\u2298',
r'\odot' : u'\u2299',
r'\vdash' : u'\u22a2',
r'\dashv' : u'\u22a3',
r'\top' : u'\u22a4',
r'\bot' : u'\u22a5',
r'\bigwedge' : u'\u22c0',
r'\bigvee' : u'\u22c1',
r'\bigcap' : u'\u22c2',
r'\bigcup' : u'\u22c3',
r'\diamond' : u'\u22c4',
r'\cdot' : u'\u22c5',
r'\lceil' : u'\u2308',
r'\rceil' : u'\u2309',
r'\lfloor' : u'\u230a',
r'\rfloor' : u'\u230b',
r'\langle' : u'\u27e8',
r'\rangle' : u'\u27e9',
r'\dag' : u'\u2020',
r'\ddag' : u'\u2021',
}

unicode_to_tex is straight forward.
Am I on the right track? What should I do next?

I also noticed that some TeX commands (commands in the sense that they
can have arguments enclosed in brackets {}) are defined as only
symbols: \sqrt alone, for example, displays just the begining of the
square root: √, and \sqrt{123} triggers an error.

That's it for now
Thanks in advance,
Edin
From: John H. <jdh...@ac...> - 2006-06-15 21:14:03
>>>>> "Edin" == Edin Salković <edi...@gm...> writes:
Edin> Hi all, Is it that the code in the mathtext module looks
Edin> ugly or is it just me not understanding it? Also, if anyone
Edin> has some good online sources about parsing etc. on the net,
Edin> I vwould realy appreciate it.
It's probably you not understanding it :-) In my opinion, the code is
pretty nice and modular, with a few exceptions, but I'm biased.
Parsers can be a little hard to understand at first. You might start
by trying to understand pyparsing
http://pyparsing.wikispaces.com
and work through some of the basic examples there. Once you have your
head wrapped around that, it will get easier.
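To make the grammar-as-rules idea concrete without requiring pyparsing, here is a toy matcher for the rule "name := first [middle-initial] last" (purely illustrative; pyparsing expresses the same idea declaratively and handles real tokenizing):

```python
def parse_name(tokens):
    """Match: name := first [middle-initial] last.
    A middle initial is modeled as a single letter plus a period."""
    if len(tokens) == 3 and len(tokens[1]) == 2 and tokens[1].endswith('.'):
        first, middle, last = tokens
    elif len(tokens) == 2:
        (first, last), middle = tokens, None
    else:
        raise ValueError('not a name: %r' % (tokens,))
    return {'first': first, 'middle': middle, 'last': last}

print(parse_name(['John', 'D.', 'Hunter']))
print(parse_name(['Edin', 'Salkovic']))
```

In pyparsing the optional middle initial would be an Optional() element and the dict-building step would be a parse action, which is exactly the handler mechanism described below.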
Edin> Considering the foowing code (picked on random, from
Edin> mathtext.py)
Edin> I don't understand, for example, what does the statement:
Edin> expression.parseString( s )
Edin> do?
Edin> "expression" is defined globaly, and is called (that is -
Edin> its method) only once in the above definition of the
Edin> function, but I don't understand - what does that particular
Edin> line do?!?
It's not defined globally, but at module level. There is only one
expression that represents a TeX math expression (at least as far as
mathtext is concerned) so it is right that there is only one of them
at module level. It's like saying "a name is a first name followed by
an optional middle initial followed by a last name". You only need to
define this one, and then you set handlers to handle the different
components.
The expression assigns subexpressions to handlers. The statement
below says that an expression is one or more of a space, font element,
an accent, a symbol, a subscript, etc...
expression = OneOrMore(
    space ^ font ^ accent ^ symbol ^ subscript ^ superscript ^
    subsuperscript ^ group ^ composite
    ).setParseAction(handler.expression).setName("expression")
A subscript, for example, is a symbol group followed by an underscore
followed by a symbol group
subscript << Group( Optional(symgroup) + Literal('_') + symgroup )
and the handler is defined as
subscript = Forward().setParseAction(handler.subscript).setName("subscript")
which means that the function handler.subscript will be called every
time the pattern is matched. The tokens will be the first symbol
group, the underscore, and the second symbol group. Here is the
implementation of that function
def subscript(self, s, loc, toks):
    assert(len(toks)==1)
    #print 'subsup', toks
    if len(toks[0])==2:
        under, next = toks[0]
        prev = SpaceElement(0)
    else:
        prev, under, next = toks[0]
    if self.is_overunder(prev):
        prev.neighbors['below'] = next
    else:
        prev.neighbors['subscript'] = next
    return loc, [prev]
This grabs the tokens and assigns them to the names "prev" and "next".
Every element in the TeX expression is a special case of an Element,
and every Element has a dictionary mapping surrounding elements to
relative locations, either above or below or right or superscript or
subscript. The rest of this function takes the "next" element, and
assigns it either below (e.g. for \sum_0) or subscript (e.g. for x_0) and
the layout engine will then take this big tree and lay it out. See
for example the "set_origin" function.
Edin> ------ Regarding the unicode support in mathtext, mathtext
Edin> currently uses the folowing dictionary for getting the glyph
Edin> info out of the font files:
Edin> latex_to_bakoma = {
Edin> r'\oint' : ('cmex10', 45), r'\bigodot' : ('cmex10', 50),
Edin> r'\bigoplus' : ('cmex10', 55), r'\bigotimes' : ('cmex10',
Edin> 59), r'\sum' : ('cmex10', 51), r'\prod' : ('cmex10', 24),
Edin> ...
Edin> }
Edin> I managed to build the following dictionary (little more left
Edin> to be done): tex_to_unicode = { r'\S' : u'\u00a7', r'\P' :
Edin> u'\u00b6', r'\Gamma' : u'\u0393', r'\Delta' : u'\u0394',
Edin> r'\Theta' : u'\u0398', r'\Lambda' : u'\u039b', r'\Xi' :
Edin> u'\u039e', r'\Pi' : u'\u03a0', r'\Sigma' : u'\u03a3',
Edin> unicode_to_tex is straight forward. Am I on the right
Edin> track? What should I do next?
Yes, this looks like the right approach. Once you have this
dictionary mostly working, you will need to try and make it work with
a set of unicode fonts. So instead of having the tex symbol point to
a file name and glyph index, you will need to parse a set of unicode
fonts to see which unicode symbols they provide and build a mapping
from unicode name -> file, glyph index. Then when you encounter a tex
symbol, you can use your tex_to_unicode dict combined with your
unicode -> filename, glyphindex dict to get the desired glyph.
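The two-step lookup described above (TeX symbol -> unicode character -> (font file, glyph index)) can be sketched by composing two dictionaries; the font table below is invented for illustration, standing in for whatever the unicode-font scan would produce:

```python
# First mapping: TeX symbol name -> unicode character (from Edin's table).
tex_to_unicode = {r'\Gamma': u'\u0393', r'\sum': u'\u2211'}

# Second mapping: unicode character -> (font file, glyph index).
# Hypothetical values -- the real table would be built by scanning fonts.
unicode_to_glyph = {u'\u0393': ('SomeUnicodeFont.ttf', 915),
                    u'\u2211': ('SomeUnicodeFont.ttf', 8721)}

def tex_to_glyph(symbol):
    """Compose the two dictionaries to resolve a TeX symbol to a glyph."""
    return unicode_to_glyph[tex_to_unicode[symbol]]

print(tex_to_glyph(r'\sum'))  # ('SomeUnicodeFont.ttf', 8721)
```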
Edin> I also noticed that some TeX commands (commands in the sense
Edin> that they can have arguments enclosed in brackets {}) are
Edin> defined as only symbols: \sqrt alone, for example, displays
Edin> just the begining of the square root: √, and \sqrt{123}
Edin> triggers an error.
We don't have support for \sqrt{123} because we would need to do
something a little fancier (draw the horizontal line over 123). This
is doable and would be nice. To implement it, one approach would be
add some basic drawing functionality to the freetype module, eg to
tell freetype to draw a line on its bitmap. Another approach would
simply be to grab the bitmap from freetype and pass it off to agg and
use the agg renderer to decorate it. This is probably preferable.
But I think this is a lower priority right now.
JDH
From: John H. <jdh...@ac...> - 2006-06-15 14:03:18
>>>>> "Eric" == Eric Firing <ef...@ha...> writes:
Eric> Based on a quick look, I think it would be easy to make
Eric> LineCollection and PolyCollection accept a numerix array in
Eric> place of [(x,y), (x,y), ...] for each line segment or
Eric> polygon; specifically, this could be replaced by an N x 2
Eric> array, where the first column would be x and the second
Eric> would be y. Backwards compatibility could be maintained
Eric> easily. This would eliminate quite a bit of useless
Eric> conversion back and forth among lists, tuples, and arrays.
Eric> As it is, each sequence of sequences is converted to a pair
Eric> of arrays in backend_bases, and typically it started out as
Eric> either a 2-D numerix array or a pair of 1-D arrays in the
Eric> code that is calling the collection constructor.
I think this is a useful enhancement. I would think that representing
each segment as (x,y), where x and y are 1D arrays, might be slightly
more natural than using an N x 2 array, but others may disagree.
How often does it come up that we want a homogeneous line collection,
i.e. a bunch of line segments with the same properties (color,
linewidth...)? The most expensive part of the agg line collection
renderer is probably the multiple calls to render_scanlines, which are
necessary every time we change the linewidth or color.
If all of the lines in a collection shared the same properties, we
could draw the entire path with a combination of lineto/moveto, and
just stroke and render it once (agg has an upper limit on path length
though, since at some point I added the following to draw_lines
if ((i%10000)==0) {
    // draw the path in chunks
    _render_lines_path(path, gc);
    path.remove_all();
    path.move_to(thisx, thisy);
}
I.e., I render the path in chunks of 10000 points.
Actually, as I type this I realize the case of homogeneous lines (and
polys) can be handled by the backend method "draw_path". One
possibility is for the LineCollection to detect the homogeneous case
len(linewidths)==1 and len(colors)==1 and call out to draw_path
instead of draw_line_collection (the same could be done for a regular
poly collection). Some extra extension code would probably be
necessary to build the path efficiently from numerix arrays, and to
handle the "chunking" problem to avoid extra long paths, but for
certain special cases (scatters and quiver w/o color mapping) it would
probably be a big win. The downside is that not all backends implement
draw_path, but the Collection front-end could detect this and fall
back on the old approach if draw_path is not implemented.
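The dispatch just described might be sketched as follows; the function and backend names here are illustrative, not the actual Collection/backend API:

```python
def draw_segments(segments, colors, linewidths, backend):
    """If every segment shares one color and one linewidth and the
    backend supports a single-path primitive, stroke everything as one
    path; otherwise fall back to the per-segment collection renderer."""
    homogeneous = len(colors) == 1 and len(linewidths) == 1
    if homogeneous and hasattr(backend, 'draw_path'):
        return backend.draw_path(segments, colors[0], linewidths[0])
    return backend.draw_line_collection(segments, colors, linewidths)

class FakeBackend:
    """Stand-in backend used only to exercise the dispatch."""
    def draw_path(self, segs, color, lw):
        return 'path'
    def draw_line_collection(self, segs, colors, lws):
        return 'collection'

print(draw_segments([[(0, 0), (1, 1)]], ['k'], [1.0], FakeBackend()))
print(draw_segments([[(0, 0), (1, 1)]], ['k', 'r'], [1, 2], FakeBackend()))
```

A backend lacking draw_path simply fails the hasattr test and keeps the old behavior, which is the fallback described above.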
JDH
From: Cyril G. <cyr...@fr...> - 2006-06-15 11:37:10
Hello,

I use matplotlib 0.87.3 on win32 with wxpython. From the command line, the
numeric package is set correctly to numpy (read from matplotlibrc) and all
works fine.

When I try to use matplotlib from pyxpcom (the connector between xpcom and
python in the mozilla world), it seems matplotlib doesn't read matplotlibrc,
since it looks for the Numeric package, which is not installed.

I don't understand - is there a reason the matplotlibrc file is not read or
interpreted?

thanks a lot,
Cyril.
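One way to debug this kind of problem is to ask matplotlib which rc file it actually resolved: matplotlib.matplotlib_fname() returns the path it will use, and comparing its output between the command line and the embedded environment should show whether pyxpcom is resolving a different file (e.g. because HOME or MATPLOTLIBRC differ there):

```python
import matplotlib

# Ask matplotlib which configuration file it resolved. If this is not
# the file you edited, the embedding environment is running with a
# different search path (HOME, MATPLOTLIBRC, working directory) than
# your command-line session.
fname = matplotlib.matplotlib_fname()
print(fname)
```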
From: Jordan D. <jdawe@u.washington.edu> - 2006-06-15 00:22:04
Eric Firing wrote:
> Jordan,
>
> I understand what you wrote. I am a bit worried about the amount of
> complexity it would add to the collection code, however, and it seems
> like it would be useful only in quite special situations--and in those
> situations, there may be reasonable alternatives. For example, the ps
> backend uses nan as a flag or separator to skip drawing a line
> segment; if all backends did this, then it would provide a more
> general way to accomplish what you want to do.
>
> I will keep your idea in mind, but I want to start off with a much
> simpler change.

I would tend to agree that nan entries would be a better idea than what I
was talking about. I'll think about trying to modify the backends to
support this behaviour, if they don't already.

Jordan
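The nan-as-separator idea can be exercised outside any backend; here is a sketch (pure numpy, illustrative only, not backend code) that splits a single (N, 2) vertex array into drawable runs at nan rows, mimicking a pen-up/pen-down flag:

```python
import numpy as np

def split_on_nan(verts):
    """Split an (N, 2) vertex array into sub-arrays wherever a row
    contains nan, treating nan rows as pen-up separators."""
    mask = np.isnan(verts).any(axis=1)
    runs, start = [], 0
    for i, bad in enumerate(mask):
        if bad:
            if i > start:
                runs.append(verts[start:i])
            start = i + 1
    if start < len(verts):
        runs.append(verts[start:])
    return runs

verts = np.array([[0, 0], [1, 1], [np.nan, np.nan], [2, 0], [3, 1]], float)
pieces = split_on_nan(verts)
print([p.shape for p in pieces])  # [(2, 2), (2, 2)]
```

With this convention a quiver arrow could be stored as one array (shaft, nan row, head) yet drawn without back-tracking, and colormapping would still see one object per arrow.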
From: Jordan D. <jdawe@u.washington.edu> - 2006-06-15 00:03:16
I have one suggestion, slightly off-topic, and I'm not sure how useful it
would be: you might think about making LineCollection accept a 3-D numerix
array. This came up for me while I was looking at turning the quiver arrows
into line segments.

As I understand it (and as the documentation says), LineCollection takes a
set of lines which are composed of continuous line segments, like:

segments = ( line0, line1, line2 )
linen = ( (x0,y0), (x1,y1), (x2,y2) )

I'd like an extra level of organization, like so:

linegroups = ( group0, group1, group2 )
groupi = ( line0, line1, line2 )
linen = ( (x0,y0), (x1,y1), (x2,y2) )

where "linegroups" would be the input to LineCollection. I assume it's
fairly obvious how this turns into a rank 3 array.

The reason for this is that it allows for the drawing of non-continuous
lines. This came up with the quiver arrow stuff because, as it stands,
drawing a line-based arrow requires you to back-track over a previous line
at least once. This created some rendering problems, where the back-tracked
line was darker than the others, at least on the agg backend. This can be
fixed by backtracking along every line in the arrow, so you are essentially
drawing the arrow twice, but that seems inefficient. It is possible to draw
3 separate line segments for each arrow, but then the colormapping no
longer works; each line segment gets a different color, and the arrows look
like a mess.

As I said, I don't know how useful this would be; it only comes up when
drawing non-closed line segments that need to be addressed as a single
object. Does what I wrote make sense?

Jordan

Eric Firing wrote:
> Based on a quick look, I think it would be easy to make LineCollection
> and PolyCollection accept a numerix array in place of [(x,y), (x,y),
> ...] for each line segment or polygon; specifically, this could be
> replaced by an N x 2 array, where the first column would be x and the
> second would be y. Backwards compatibility could be maintained easily.
> This would eliminate quite a bit of useless conversion back and forth
> among lists, tuples, and arrays. As it is, each sequence of sequences
> is converted to a pair of arrays in backend_bases, and typically it
> started out as either a 2-D numerix array or a pair of 1-D arrays in
> the code that is calling the collection constructor.
>
> Using a single 2-D array makes it easier to determine whether one is
> dealing with 'old-style' inputs or 'new-style' inputs, but it might
> still be reasonable to allow [X, Y] instead or in addition, where X
> and Y are 1-D numerix arrays.
>
> Any objections or alternative suggestions?
>
> Eric
From: Eric F. <ef...@ha...> - 2006-06-14 23:34:08
Based on a quick look, I think it would be easy to make LineCollection and
PolyCollection accept a numerix array in place of [(x,y), (x,y), ...] for
each line segment or polygon; specifically, this could be replaced by an
N x 2 array, where the first column would be x and the second would be y.
Backwards compatibility could be maintained easily. This would eliminate
quite a bit of useless conversion back and forth among lists, tuples, and
arrays. As it is, each sequence of sequences is converted to a pair of
arrays in backend_bases, and typically it started out as either a 2-D
numerix array or a pair of 1-D arrays in the code that is calling the
collection constructor.

Using a single 2-D array makes it easier to determine whether one is
dealing with 'old-style' inputs or 'new-style' inputs, but it might still
be reasonable to allow [X, Y] instead or in addition, where X and Y are
1-D numerix arrays.

Any objections or alternative suggestions?

Eric
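The backwards-compatibility point amounts to a small normalization step; here is a sketch with a hypothetical helper (not collections.py code) that accepts either a sequence of (x, y) pairs or an N x 2 array and always returns the array form:

```python
import numpy as np

def as_segment_array(seg):
    """Return seg as an (N, 2) float array, whether it arrived as an
    old-style [(x, y), ...] sequence or already as an N x 2 array."""
    a = np.asarray(seg, dtype=float)
    if a.ndim != 2 or a.shape[1] != 2:
        raise ValueError('expected N x 2 vertices, got shape %s' % (a.shape,))
    return a

old_style = [(0.0, 0.0), (1.0, 2.0), (2.0, 1.0)]
new_style = np.array(old_style)
print(as_segment_array(old_style).shape)  # (3, 2)
print(np.array_equal(as_segment_array(old_style),
                     as_segment_array(new_style)))  # True
```

Since np.asarray is a no-op on arrays of the right dtype, the new-style path costs nothing while the old-style path keeps working.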
From: Darren D. <dd...@co...> - 2006-06-14 23:04:46
I'm making an errorbar plot with asymmetric errorbars. The docstring says:
xerr and yerr may be any of:
a rank-0, Nx1 Numpy array - symmetric errorbars +/- value
an N-element list or tuple - symmetric errorbars +/- value
a rank-1, Nx2 Numpy array - asymmetric errorbars -column1/+column2
I think that last line should read:
a 2xN Numpy array - asymmetric errorbars -row1/+row2
Darren
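The shape question is easy to check numerically: a 2xN array gives one lower and one upper error per point for any N, whereas an Nx2 array would only line up when N happens to equal 2. A plain-numpy sketch (no plotting):

```python
import numpy as np

N = 5
lower = np.full(N, 0.1)  # magnitudes subtracted below each point
upper = np.full(N, 0.3)  # magnitudes added above each point

# Asymmetric errors in the shape the docstring should describe:
# a 2xN array, -row1/+row2 in the mail's notation.
yerr = np.vstack([lower, upper])
print(yerr.shape)  # (2, 5)
```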