Monday, January 25, 2010

comp.lang.python - 25 new messages in 7 topics - digest

comp.lang.python
http://groups.google.com/group/comp.lang.python?hl=en

comp.lang.python@googlegroups.com

Today's topics:

* Default path for files - 1 message, 1 author
http://groups.google.com/group/comp.lang.python/t/bae557fe699a8ae3?hl=en
* Total maximal size of data - 12 messages, 4 authors
http://groups.google.com/group/comp.lang.python/t/4e4b77d1da087b26?hl=en
* Terminal application with non-standard print - 4 messages, 4 authors
http://groups.google.com/group/comp.lang.python/t/fb21750dc257d33e?hl=en
* how can i know if a python object have a attribute such as 'attr1'? - 1
message, 1 author
http://groups.google.com/group/comp.lang.python/t/8417b18dd7cf1f28?hl=en
* list.pop(0) vs. collections.dequeue - 5 messages, 3 authors
http://groups.google.com/group/comp.lang.python/t/9221d87f93748b3f?hl=en
* site.py confusion - 1 message, 1 author
http://groups.google.com/group/comp.lang.python/t/03f435bfda8b5ac9?hl=en
* Python, PIL and 16 bit per channel images - 1 message, 1 author
http://groups.google.com/group/comp.lang.python/t/c873b79f4ee4a780?hl=en

==============================================================================
TOPIC: Default path for files
http://groups.google.com/group/comp.lang.python/t/bae557fe699a8ae3?hl=en
==============================================================================

== 1 of 1 ==
Date: Mon, Jan 25 2010 11:47 am
From: "Gabriel Genellina"


On Sun, 24 Jan 2010 15:04:48 -0300, Günther Dietrich
<gd_usenet@spamfence.net> wrote:

> Rotwang <sg552@hotmail.co.uk> wrote:
>
>>> Check out http://docs.python.org/library/os.html and the function
>>> chdir it is what you are looking for.
>>
>> Thank you. So would adding
>>
>> import os
>> os.chdir(<path>)
>>
>> to site.py (or any other module which is automatically imported during
>> initialisation) change the default location to <path> every time I used
>> Python?
>
> Don't change the library modules. It would catch you anytime when you
> expect it least.
>
> See for the environment variable PYTHONSTARTUP and the associated
> startup file.

sitecustomize.py would be a better place. PYTHONSTARTUP is only used when
running in interactive mode.
Anyway, I'd do that explicitly in each script that requires it;
otherwise, after upgrading the Python version or moving to another PC,
those scripts would start failing...

--
Gabriel Genellina


==============================================================================
TOPIC: Total maximal size of data
http://groups.google.com/group/comp.lang.python/t/4e4b77d1da087b26?hl=en
==============================================================================

== 1 of 12 ==
Date: Mon, Jan 25 2010 12:03 pm
From: "Diez B. Roggisch"


On 25.01.10 20:39, AlexM wrote:
> On Jan 25, 1:23 pm, "Diez B. Roggisch"<de...@nospam.web.de> wrote:
>> Am 25.01.10 20:05, schrieb Alexander Moibenko:
>>
>>> I have a simple question to which I could not find an answer.
>>> What is the total maximal size of list including size of its elements?
>>> I do not like to look into python source.
>>
>> But it would answer that question pretty fast. Because then you'd see
>> that all list-object-methods are defined in terms of Py_ssize_t, which
>> is an alias for ssize_t of your platform. 64bit that should be a 64bit long.
>>
>> Diez
>
> Then how do explain the program output?

What exactly? That after 3GB it ran out of memory? Because you don't
have 4GB memory available for processes.

Diez
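The Py_ssize_t bound Diez mentions can be checked without reading the source: `sys.maxsize` exposes it (a sketch; CPython 2.6 or later assumed):

```python
import sys

# sys.maxsize mirrors the platform's Py_ssize_t maximum: the
# theoretical cap on the number of elements one list can index.
# 2**31 - 1 on a 32-bit build, 2**63 - 1 on a 64-bit build.
is_64bit = sys.maxsize > 2**32
print(sys.maxsize, is_64bit)
```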


== 2 of 12 ==
Date: Mon, Jan 25 2010 12:07 pm
From: Terry Reedy


On 1/25/2010 2:05 PM, Alexander Moibenko wrote:
> I have a simple question to which I could not find an answer.

Because it has no finite answer

> What is the total maximal size of list including size of its elements?

In theory, unbounded. In practice, limited by the memory of the interpreter.

The maximum # of elements depends on the interpreter. Each element can
be a list whose maximum # of elements ..... and recursively so on...

Terry Jan Reedy

== 3 of 12 ==
Date: Mon, Jan 25 2010 12:15 pm
From: AlexM


On Jan 25, 2:03 pm, "Diez B. Roggisch" <de...@nospam.web.de> wrote:
> Am 25.01.10 20:39, schrieb AlexM:
>
> > On Jan 25, 1:23 pm, "Diez B. Roggisch"<de...@nospam.web.de>  wrote:
> >> Am 25.01.10 20:05, schrieb Alexander Moibenko:
>
> >>> I have a simple question to which I could not find an answer.
> >>> What is the total maximal size of list including size of its elements?
> >>> I do not like to look into python source.
>
> >> But it would answer that question pretty fast. Because then you'd see
> >> that all list-object-methods are defined in terms of Py_ssize_t, which
> >> is an alias for ssize_t of your platform. 64bit that should be a 64bit long.
>
> >> Diez
>
> > Then how do explain the program output?
>
> What exactly? That after 3GB it ran out of memory? Because you don't
> have 4GB memory available for processes.
>
> Diez

Did you see my posting?
....
Here is what I get on 32-bit architecture:
cat /proc/meminfo
MemTotal: 8309860 kB
MemFree: 5964888 kB
Buffers: 84396 kB
Cached: 865644 kB
SwapCached: 0 kB
.....

I have more than 5G in memory not speaking of swap space.


== 4 of 12 ==
Date: Mon, Jan 25 2010 12:21 pm
From: AlexM


On Jan 25, 2:07 pm, Terry Reedy <tjre...@udel.edu> wrote:
> On 1/25/2010 2:05 PM, Alexander Moibenko wrote:
>
> > I have a simple question to which I could not find an answer.
>
> Because it has no finite answer
>
> > What is the total maximal size of list including size of its elements?
>
> In theory, unbounded. In practice, limited by the memory of the interpreter.
>
> The maximum # of elements depends on the interpreter. Each element can
> be a list whose maximum # of elements ..... and recursively so on...
>
> Terry Jan Reedy

I am not asking about the maximum number of elements; I am asking about
the total maximal size of a list including the size of its elements. In
other words: if each list element has the same size ELEMENT_SIZE, what
would be the maximal number of these elements on a 32-bit architecture?
I see 3 GB and wonder why. Why not 2 GB or 4 GB?
AlexM
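For what it's worth, the per-element cost behind this question can be probed from within Python (a sketch; CPython and `sys.getsizeof`, i.e. 2.6+, assumed):

```python
import sys

# A CPython list stores one pointer per element: 4 bytes on a 32-bit
# build, 8 bytes on a 64-bit build.  The elements themselves are extra,
# so pointer width alone already bounds how many slots fit in the
# process address space.
ptr_size = 8 if sys.maxsize > 2**32 else 4

n = 100000
slots_only = [None] * n          # n pointers, all to one shared object
per_slot = (sys.getsizeof(slots_only) - sys.getsizeof([])) / float(n)
assert per_slot >= ptr_size      # at least one pointer per slot
```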


== 5 of 12 ==
Date: Mon, Jan 25 2010 12:37 pm
From: "Alf P. Steinbach"


* AlexM:
> On Jan 25, 2:07 pm, Terry Reedy <tjre...@udel.edu> wrote:
>> On 1/25/2010 2:05 PM, Alexander Moibenko wrote:
>>
>>> I have a simple question to which I could not find an answer.
>> Because it has no finite answer
>>
>>> What is the total maximal size of list including size of its elements?
>> In theory, unbounded. In practice, limited by the memory of the interpreter.
>>
>> The maximum # of elements depends on the interpreter. Each element can
>> be a list whose maximum # of elements ..... and recursively so on...
>>
>> Terry Jan Reedy
>
> I am not asking about maximum numbers of elements I am asking about
> total maximal size of list including size of its elements. In other
> words:
> if size of each list element is ELEMENT_SIZE and all elements have the
> same size what would be the maximal number of these elements in 32 -
> bit architecture?
> I see 3 GB, and wonder why? Why not 2 GB or not 4 GB?

At a guess you were running this in 32-bit Windows. By default it reserves the
upper two gig of address space for mapping system DLLs. It can be configured to
use just 1 gig for that, and it seems like your system is, or you're using some
other system with that kind of behavior, or, it's just arbitrary...


Cheers & hth.,

- Alf (by what mechanism do socks disappear from the washer?)


== 6 of 12 ==
Date: Mon, Jan 25 2010 12:42 pm
From: "Diez B. Roggisch"


On 25.01.10 21:15, AlexM wrote:
> On Jan 25, 2:03 pm, "Diez B. Roggisch"<de...@nospam.web.de> wrote:
>> Am 25.01.10 20:39, schrieb AlexM:
>>
>>> On Jan 25, 1:23 pm, "Diez B. Roggisch"<de...@nospam.web.de> wrote:
>>>> Am 25.01.10 20:05, schrieb Alexander Moibenko:
>>
>>>>> I have a simple question to which I could not find an answer.
>>>>> What is the total maximal size of list including size of its elements?
>>>>> I do not like to look into python source.
>>
>>>> But it would answer that question pretty fast. Because then you'd see
>>>> that all list-object-methods are defined in terms of Py_ssize_t, which
>>>> is an alias for ssize_t of your platform. 64bit that should be a 64bit long.
>>
>>>> Diez
>>
>>> Then how do explain the program output?
>>
>> What exactly? That after 3GB it ran out of memory? Because you don't
>> have 4GB memory available for processes.
>>
>> Diez
>
> Did you see my posting?
> ....
> Here is what I get on 32-bit architecture:
> cat /proc/meminfo
> MemTotal: 8309860 kB
> MemFree: 5964888 kB
> Buffers: 84396 kB
> Cached: 865644 kB
> SwapCached: 0 kB
> .....
>
> I have more than 5G in memory not speaking of swap space.

Yes, I saw your posting. 32Bit is 32Bit. Do you know about PAE?

http://de.wikipedia.org/wiki/Physical_Address_Extension

Even though the system can deal with more overall memory, one process
can't get more than 4 GB (or even less, due to re-mapped memory),
unless it uses specific APIs like the old hi-mem stuff under DOS.

Diez


== 7 of 12 ==
Date: Mon, Jan 25 2010 12:49 pm
From: AlexM


On Jan 25, 2:37 pm, "Alf P. Steinbach" <al...@start.no> wrote:
> * AlexM:
>
>
>
> > On Jan 25, 2:07 pm, Terry Reedy <tjre...@udel.edu> wrote:
> >> On 1/25/2010 2:05 PM, Alexander Moibenko wrote:
>
> >>> I have a simple question to which I could not find an answer.
> >> Because it has no finite answer
>
> >>> What is the total maximal size of list including size of its elements?
> >> In theory, unbounded. In practice, limited by the memory of the interpreter.
>
> >> The maximum # of elements depends on the interpreter. Each element can
> >> be a list whose maximum # of elements ..... and recursively so on...
>
> >> Terry Jan Reedy
>
> > I am not asking about maximum numbers of elements I am asking about
> > total maximal size of list including size of its elements. In other
> > words:
> > if size of each list element is ELEMENT_SIZE and all elements have the
> > same size what would be the maximal number of these elements in 32 -
> > bit architecture?
> > I see 3 GB, and wonder why? Why not 2 GB or not 4 GB?
>
> At a guess you were running this in 32-bit Windows. By default it reserves the
> upper two gig of address space for mapping system DLLs. It can be configured to
> use just 1 gig for that, and it seems like your system is, or you're using some
> other system with that kind of behavior, or, it's just arbitrary...
>
> Cheers & hth.,
>
> - Alf (by what mechanism do socks disappear from the washer?)

No, it is 32-bit Linux.
Alex


== 8 of 12 ==
Date: Mon, Jan 25 2010 12:56 pm
From: "Diez B. Roggisch"


On 25.01.10 21:49, AlexM wrote:
> On Jan 25, 2:37 pm, "Alf P. Steinbach"<al...@start.no> wrote:
>> * AlexM:
>>
>>
>>
>>> On Jan 25, 2:07 pm, Terry Reedy<tjre...@udel.edu> wrote:
>>>> On 1/25/2010 2:05 PM, Alexander Moibenko wrote:
>>
>>>>> I have a simple question to which I could not find an answer.
>>>> Because it has no finite answer
>>
>>>>> What is the total maximal size of list including size of its elements?
>>>> In theory, unbounded. In practice, limited by the memory of the interpreter.
>>
>>>> The maximum # of elements depends on the interpreter. Each element can
>>>> be a list whose maximum # of elements ..... and recursively so on...
>>
>>>> Terry Jan Reedy
>>
>>> I am not asking about maximum numbers of elements I am asking about
>>> total maximal size of list including size of its elements. In other
>>> words:
>>> if size of each list element is ELEMENT_SIZE and all elements have the
>>> same size what would be the maximal number of these elements in 32 -
>>> bit architecture?
>>> I see 3 GB, and wonder why? Why not 2 GB or not 4 GB?
>>
>> At a guess you were running this in 32-bit Windows. By default it reserves the
>> upper two gig of address space for mapping system DLLs. It can be configured to
>> use just 1 gig for that, and it seems like your system is, or you're using some
>> other system with that kind of behavior, or, it's just arbitrary...
>>
>> Cheers& hth.,
>>
>> - Alf (by what mechanism do socks disappear from the washer?)
>
> No, it is 32-bit Linux.
> Alex

I already answered that (as did Alf, the principle applies for both OSs)
- kernel memory space is mapped into the address-space, reducing it by 1
or 2 GB.

Diez


== 9 of 12 ==
Date: Mon, Jan 25 2010 1:22 pm
From: AlexM


On Jan 25, 2:42 pm, "Diez B. Roggisch" <de...@nospam.web.de> wrote:
> Am 25.01.10 21:15, schrieb AlexM:
>
>
>
> > On Jan 25, 2:03 pm, "Diez B. Roggisch"<de...@nospam.web.de>  wrote:
> >> Am 25.01.10 20:39, schrieb AlexM:
>
> >>> On Jan 25, 1:23 pm, "Diez B. Roggisch"<de...@nospam.web.de>    wrote:
> >>>> Am 25.01.10 20:05, schrieb Alexander Moibenko:
>
> >>>>> I have a simple question to which I could not find an answer.
> >>>>> What is the total maximal size of list including size of its elements?
> >>>>> I do not like to look into python source.
>
> >>>> But it would answer that question pretty fast. Because then you'd see
> >>>> that all list-object-methods are defined in terms of Py_ssize_t, which
> >>>> is an alias for ssize_t of your platform. 64bit that should be a 64bit long.
>
> >>>> Diez
>
> >>> Then how do explain the program output?
>
> >> What exactly? That after 3GB it ran out of memory? Because you don't
> >> have 4GB memory available for processes.
>
> >> Diez
>
> > Did you see my posting?
> > ....
> > Here is what I get on 32-bit architecture:
> > cat /proc/meminfo
> > MemTotal:      8309860 kB
> > MemFree:       5964888 kB
> > Buffers:         84396 kB
> > Cached:         865644 kB
> > SwapCached:          0 kB
> > .....
>
> > I have more than 5G in memory not speaking of swap space.
>
> Yes, I saw your posting. 32Bit is 32Bit. Do you know about PAE?
>
>    http://de.wikipedia.org/wiki/Physical_Address_Extension
>
> Just because the system can deal with more overall memory - one process
> can't get more than 4 GB (or even less, through re-mapped memory).
> Except it uses specific APIs like the old hi-mem-stuff under DOS.
>
> Diez

Yes, I do. Good catch! I have PAE enabled, but I guess I have compiled
python without extended memory. So I was looking in the wrong place.
Thanks!
AlexM


== 10 of 12 ==
Date: Mon, Jan 25 2010 1:31 pm
From: "Diez B. Roggisch"


On 25.01.10 22:22, AlexM wrote:
> On Jan 25, 2:42 pm, "Diez B. Roggisch"<de...@nospam.web.de> wrote:
>> Am 25.01.10 21:15, schrieb AlexM:
>>
>>
>>
>>> On Jan 25, 2:03 pm, "Diez B. Roggisch"<de...@nospam.web.de> wrote:
>>>> Am 25.01.10 20:39, schrieb AlexM:
>>
>>>>> On Jan 25, 1:23 pm, "Diez B. Roggisch"<de...@nospam.web.de> wrote:
>>>>>> Am 25.01.10 20:05, schrieb Alexander Moibenko:
>>
>>>>>>> I have a simple question to which I could not find an answer.
>>>>>>> What is the total maximal size of list including size of its elements?
>>>>>>> I do not like to look into python source.
>>
>>>>>> But it would answer that question pretty fast. Because then you'd see
>>>>>> that all list-object-methods are defined in terms of Py_ssize_t, which
>>>>>> is an alias for ssize_t of your platform. 64bit that should be a 64bit long.
>>
>>>>>> Diez
>>
>>>>> Then how do explain the program output?
>>
>>>> What exactly? That after 3GB it ran out of memory? Because you don't
>>>> have 4GB memory available for processes.
>>
>>>> Diez
>>
>>> Did you see my posting?
>>> ....
>>> Here is what I get on 32-bit architecture:
>>> cat /proc/meminfo
>>> MemTotal: 8309860 kB
>>> MemFree: 5964888 kB
>>> Buffers: 84396 kB
>>> Cached: 865644 kB
>>> SwapCached: 0 kB
>>> .....
>>
>>> I have more than 5G in memory not speaking of swap space.
>>
>> Yes, I saw your posting. 32Bit is 32Bit. Do you know about PAE?
>>
>> http://de.wikipedia.org/wiki/Physical_Address_Extension
>>
>> Just because the system can deal with more overall memory - one process
>> can't get more than 4 GB (or even less, through re-mapped memory).
>> Except it uses specific APIs like the old hi-mem-stuff under DOS.
>>
>> Diez
>
> Yes, I do. Good catch! I have PAE enabled, but I guess I have compiled
> python without extended memory. So I was looking in the wrong place.


You can't compile it with PAE. It's an extension that doesn't make sense
in a general-purpose language. It is used by databases and the like,
which can hold large structures in memory that don't need random access
but can cope with windowing.

Diez


== 11 of 12 ==
Date: Mon, Jan 25 2010 1:46 pm
From: AlexM


On Jan 25, 3:31 pm, "Diez B. Roggisch" <de...@nospam.web.de> wrote:
> Am 25.01.10 22:22, schrieb AlexM:
>
>
>
> > On Jan 25, 2:42 pm, "Diez B. Roggisch"<de...@nospam.web.de>  wrote:
> >> Am 25.01.10 21:15, schrieb AlexM:
>
> >>> On Jan 25, 2:03 pm, "Diez B. Roggisch"<de...@nospam.web.de>    wrote:
> >>>> Am 25.01.10 20:39, schrieb AlexM:
>
> >>>>> On Jan 25, 1:23 pm, "Diez B. Roggisch"<de...@nospam.web.de>      wrote:
> >>>>>> Am 25.01.10 20:05, schrieb Alexander Moibenko:
>
> >>>>>>> I have a simple question to which I could not find an answer.
> >>>>>>> What is the total maximal size of list including size of its elements?
> >>>>>>> I do not like to look into python source.
>
> >>>>>> But it would answer that question pretty fast. Because then you'd see
> >>>>>> that all list-object-methods are defined in terms of Py_ssize_t, which
> >>>>>> is an alias for ssize_t of your platform. 64bit that should be a 64bit long.
>
> >>>>>> Diez
>
> >>>>> Then how do explain the program output?
>
> >>>> What exactly? That after 3GB it ran out of memory? Because you don't
> >>>> have 4GB memory available for processes.
>
> >>>> Diez
>
> >>> Did you see my posting?
> >>> ....
> >>> Here is what I get on 32-bit architecture:
> >>> cat /proc/meminfo
> >>> MemTotal:      8309860 kB
> >>> MemFree:       5964888 kB
> >>> Buffers:         84396 kB
> >>> Cached:         865644 kB
> >>> SwapCached:          0 kB
> >>> .....
>
> >>> I have more than 5G in memory not speaking of swap space.
>
> >> Yes, I saw your posting. 32Bit is 32Bit. Do you know about PAE?
>
> >>    http://de.wikipedia.org/wiki/Physical_Address_Extension
>
> >> Just because the system can deal with more overall memory - one process
> >> can't get more than 4 GB (or even less, through re-mapped memory).
> >> Except it uses specific APIs like the old hi-mem-stuff under DOS.
>
> >> Diez
>
> > Yes, I do. Good catch! I have PAE enabled, but I guess I have compiled
> > python without extended memory. So I was looking in the wrong place.
>
> You can't compile it with PAE. It's an extension that doesn't make sense
> in a general purpose language. It is used by Databases or some such,
> that can hold large structures in memory that don't need random access,
> but can cope with windowing.
>
> Diez

Well, there actually is a way of building programs that may use more
than 4GB of memory on 32-bit machines for Linux with highmem kernels,
but I guess this would not work for Python.
I'll just switch to a 64-bit architecture.
Thanks again.
AlexM


== 12 of 12 ==
Date: Mon, Jan 25 2010 2:12 pm
From: "Diez B. Roggisch"


>
> Well, there actually is a way of building programs that may use more
> than 4GB of memory on 32 machines for Linux with higmem kernels, but I
> guess this would not work for python.

As I said, it's essentially paging:

http://kerneltrap.org/node/2450

And it's not something you can just compile in; you need explicit
code support for it, which Python doesn't have, and neither do most
other programs. So there is no magic compile option.

> I'll just switch to 64-bit architecture.

That's the solution, yes :)

Diez

==============================================================================
TOPIC: Terminal application with non-standard print
http://groups.google.com/group/comp.lang.python/t/fb21750dc257d33e?hl=en
==============================================================================

== 1 of 4 ==
Date: Mon, Jan 25 2010 12:55 pm
From: Hans Mulder


Grant Edwards wrote:
> On 2010-01-24, Rémi <babedoudi@yahoo.fr> wrote:
>
>> I would like to do a Python application that prints data to stdout, but
>> not the common way. I do not want the lines to be printed after each
>> other, but the old lines to be replaced with the new ones, like wget
>> does it for example (when downloading a file you can see the percentage
>> increasing on a same line).
>
> sys.stdout.write("Here's the first line")
> time.sleep(1)
> sys.stdout.write("\rAnd this line replaces it.")

That does not work on my system, because sys.stdout is line buffered.
As a result, both strings are written only when sys.stdout is closed
as Python shuts down.

This works better:

import sys, time

sys.stdout.write("Here's the first line")
sys.stdout.flush()
time.sleep(1)
sys.stdout.write("\rAnd this line replaces it.")
sys.stdout.flush()


Hope this helps,

-- HansM

== 2 of 4 ==
Date: Mon, Jan 25 2010 1:18 pm
From: Dennis Lee Bieber


On Sun, 24 Jan 2010 12:35:09 -0800 (PST), Rémi <babedoudi@yahoo.fr>
declaimed the following in gmane.comp.python.general:

> If I understand well, \r erases the last line. How about erasing the
> previous lines?
>
No... "\r" is a carriage return (move to the beginning of the line).
"\n" is a line feed (which on most systems is translated to a <CR><LF>,
in order to implement a "new line").

Full screen control requires a terminal that understands advanced
control codes [many of which seem to be based on DEC VT-52/VT-100
codes], which, unfortunately, the Windows command line doesn't commonly
support -- hence curses and the various schemes mapping logical
operations to specific terminal types. After all, the most basic mode
is a terminal that supports carriage return, line feed, and clear
screen; with a library that maintains an "image" of the entire screen,
any operation that modifies a line other than the last or a new one
can be emulated by clearing the screen and writing the modified image
text back out.
--
Wulfraed Dennis Lee Bieber KD6MOG
wlfraed@ix.netcom.com HTTP://wlfraed.home.netcom.com/
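The carriage-return behavior described above is all a wget-style progress line needs; a minimal sketch:

```python
import sys
import time

# Each frame starts with "\r", so it overwrites the previous one:
# the cursor returns to column 0 without advancing to a new line.
frames = ["\rdownloading: %3d%%" % pct for pct in range(0, 101, 25)]
for frame in frames:
    sys.stdout.write(frame)
    sys.stdout.flush()           # line-buffered stdout needs the flush
    time.sleep(0.05)
sys.stdout.write("\n")
```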

== 3 of 4 ==
Date: Mon, Jan 25 2010 2:20 pm
From: Grant Edwards


On 2010-01-25, Hans Mulder <hansmu@xs4all.nl> wrote:
> Grant Edwards wrote:
>> On 2010-01-24, Rémi <babedoudi@yahoo.fr> wrote:
>>
>>> I would like to do a Python application that prints data to stdout, but
>>> not the common way. I do not want the lines to be printed after each
>>> other, but the old lines to be replaced with the new ones, like wget
>>> does it for example (when downloading a file you can see the percentage
>>> increasing on a same line).
>>
>> sys.stdout.write("Here's the first line")
>> time.sleep(1)
>> sys.stdout.write("\rAnd this line replaces it.")
>
> That does not work on my system, because sys.stdout is line buffered.

That's correct of course.

> This causes both strings to be written when sys.stdout is closed because
> Python is shutting down.
>
> This works better:
>
> import sys, time
>
> sys.stdout.write("Here's the first line")
> sys.stdout.flush()
> time.sleep(1)
> sys.stdout.write("\rAnd this line replaces it.")
> sys.stdout.flush()

Or you can tell Python to do unbuffered output:

#!/usr/bin/python -u

--
Grant Edwards grante Yow! I'm using my X-RAY
at VISION to obtain a rare
visi.com glimpse of the INNER
WORKINGS of this POTATO!!


== 4 of 4 ==
Date: Mon, Jan 25 2010 2:30 pm
From: Sean DiZazzo


On Jan 24, 11:27 am, Rémi <babedo...@yahoo.fr> wrote:
> Hello everyone,
>
> I would like to do a Python application that prints data to stdout, but
> not the common way. I do not want the lines to be printed after each
> other, but the old lines to be replaced with the new ones, like wget
> does it for example (when downloading a file you can see the percentage
> increasing on a same line).
>
> I looked into the curses module, but this seems adapted only to do a
> whole application, and the terminal history is not visible anymore when
> the application starts.
>
> Any ideas?
>
> Thanks,
>
> Remi

You might want to take a look at the readline module.

~Sean

==============================================================================
TOPIC: how can i know if a python object have a attribute such as 'attr1'?
http://groups.google.com/group/comp.lang.python/t/8417b18dd7cf1f28?hl=en
==============================================================================

== 1 of 1 ==
Date: Mon, Jan 25 2010 12:57 pm
From: "Jan Kaliszewski"


24-01-2010, 00:38:29 Terry Reedy <tjreedy@udel.edu> wrote:

> On 1/23/2010 10:56 AM, Arnaud Delobelle wrote:
>> thinke365<thinke365@gmail.com> writes:
>>
>>> for example, i may define a python class:
>>> class A:
>>>     def sayHello():
>>>         print 'hello'
>>>
>>> a = A()
>>> a.attr1 = 'hello'
>>> a.attr2 = 'bb'
>>>
>>> b = A()
>>> a.attr2 = 'aa'
>>>
>>> how can i know whether an object have an attribute named attr1?
>>
>> hasattr(a, 'attr1')
>
> or
> try: a.attr1
> except AttributeError: pass

or

-- if you are interested only in attributes contained in the attribute
dict of this particular object (and not in attributes of its type or
base types, nor in attributes computed on demand by
__getattr__/__getattribute__ methods) --

you can check its __dict__:
* using vars(), e.g.: if 'attr1' in vars(a)...
* or directly (less elegant?), e.g.: if 'attr1' in a.__dict__...

But please remember that it doesn't work for instances of types with
__slots__ defined (see:
http://docs.python.org/reference/datamodel.html#slots).
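A small sketch contrasting the approaches, including the __slots__ caveat:

```python
class A(object):
    def __init__(self):
        self.attr1 = 'hello'

class B(object):
    __slots__ = ('attr1',)       # instances get no per-object __dict__
    def __init__(self):
        self.attr1 = 'hello'

a, b = A(), B()
assert hasattr(a, 'attr1') and hasattr(b, 'attr1')
assert 'attr1' in vars(a)        # attr1 lives in a.__dict__
assert not hasattr(b, '__dict__')  # so vars(b) would raise TypeError
```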

Regards,
*j

--
Jan Kaliszewski (zuo) <zuo@chopin.edu.pl>

==============================================================================
TOPIC: list.pop(0) vs. collections.dequeue
http://groups.google.com/group/comp.lang.python/t/9221d87f93748b3f?hl=en
==============================================================================

== 1 of 5 ==
Date: Mon, Jan 25 2010 1:00 pm
From: Paul Rubin


Steve Howell <showell30@yahoo.com> writes:
> These are the reasons I am not using deque:

Thanks for these. Now we are getting somewhere.

> 1) I want to use native lists, so that downstream methods can use
> them as lists.

It sounds like that could be fixed by making the deque API a proper
superset of the list API.

> 2) Lists are faster for accessing elements.

It sounds like that could be fixed by optimizing deque somewhat. Also,
have you profiled your application to show that accessing list elements
is actually using a significant fraction of its runtime and that it
would be slowed down noticably by deque? If not, it's a red herring.

> 3) I want to be able to insert elements into the middle of the list.

I just checked, and was surprised to find that deque doesn't support
this. I'd say go ahead and file a feature request to add it to deque.

> 4) I have no need for rotating elements.

That's unpersuasive since you're advocating adding a feature to list
that many others have no need for.


> Adding a word or two to a list is an O(1) addition to a data structure
> that takes O(N) memory to begin with.

Yes, as mentioned, additive constants matter.

> Another way of looking at it is that you would need to have 250 or so
> lists in memory at the same time before the extra pointer was even
> costing you kilobytes of memory.

I've often run applications with millions of lists, maybe tens of
millions. Of course it would be 100's of millions if the machines were
big enough.

> My consumer laptop has 3027908k of memory.

I thought the idea of buying bigger machines was to solve bigger
problems, not to solve the same problems more wastefully.
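The asymptotic difference under discussion is easy to measure (a sketch; absolute numbers vary by machine):

```python
import timeit

# list.pop(0) shifts every remaining pointer, so draining a list from
# the front is O(N**2) overall; deque.popleft() is O(1) per pop.
n = 10000
t_list = timeit.timeit(
    "lst = list(range(%d))\nwhile lst: lst.pop(0)" % n, number=3)
t_deque = timeit.timeit(
    "dq = deque(range(%d))\nwhile dq: dq.popleft()" % n,
    setup="from collections import deque", number=3)
assert t_deque < t_list          # deque wins by a large factor
```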


== 2 of 5 ==
Date: Mon, Jan 25 2010 1:32 pm
From: Arnaud Delobelle


Steve Howell <showell30@yahoo.com> writes:
[...]
> My algorithm does exactly N pops and roughly N list accesses, so I
> would be going from N*N + N to N + N log N if switched to blist.

Can you post your algorithm? It would be interesting to have a concrete
use case to base this discussion on.

--
Arnaud


== 3 of 5 ==
Date: Mon, Jan 25 2010 2:05 pm
From: Steve Howell


On Jan 25, 1:32 pm, Arnaud Delobelle <arno...@googlemail.com> wrote:
> Steve Howell <showel...@yahoo.com> writes:
>
> [...]
>
> > My algorithm does exactly N pops and roughly N list accesses, so I
> > would be going from N*N + N to N + N log N if switched to blist.
>
> Can you post your algorithm?  It would be interesting to have a concrete
> use case to base this discussion on.
>

It is essentially this, in list_ass_slice:

if (d < 0) { /* Delete -d items */
    if (ilow == 0) {
        a->popped -= d;
        a->ob_item -= d * sizeof(PyObject *);
        list_resize(a, Py_SIZE(a));
    }
    else {
        memmove(&item[ihigh+d], &item[ihigh],
                (Py_SIZE(a) - ihigh)*sizeof(PyObject *));
        list_resize(a, Py_SIZE(a) + d);
    }
    item = a->ob_item;
}

I am still working through the memory management issues, but when I
have a complete working patch, I will give more detail.
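The idea behind the patch can be sketched in pure Python: keep an offset into the backing list instead of shifting elements on every front pop. Names and the compaction threshold here are illustrative, not the actual patch:

```python
class OffsetList(object):
    """Illustrative only: amortized O(1) pops from the front."""

    def __init__(self, items):
        self._items = list(items)
        self._start = 0                    # count of logically-popped slots

    def pop_front(self):
        item = self._items[self._start]
        self._items[self._start] = None    # let the element be collected early
        self._start += 1
        # Compact occasionally so the dead slots are actually reclaimed.
        if self._start > 50 and self._start * 2 > len(self._items):
            del self._items[:self._start]
            self._start = 0
        return item

    def __len__(self):
        return len(self._items) - self._start

ol = OffsetList(range(100))
popped = [ol.pop_front() for _ in range(60)]
assert popped == list(range(60)) and len(ol) == 40
```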

== 4 of 5 ==
Date: Mon, Jan 25 2010 2:09 pm
From: Steve Howell


On Jan 25, 1:32 pm, Arnaud Delobelle <arno...@googlemail.com> wrote:
> Steve Howell <showel...@yahoo.com> writes:
>
> [...]
>
> > My algorithm does exactly N pops and roughly N list accesses, so I
> > would be going from N*N + N to N + N log N if switched to blist.
>
> Can you post your algorithm?  It would be interesting to have a concrete
> use case to base this discussion on.
>

I just realized you meant the Python code itself. It is here:

https://bitbucket.org/showell/shpaml_website/src/tip/shpaml.py

== 5 of 5 ==
Date: Mon, Jan 25 2010 3:00 pm
From: Steve Howell


On Jan 25, 1:00 pm, Paul Rubin <no.em...@nospam.invalid> wrote:
> Steve Howell <showel...@yahoo.com> writes:
> > These are the reasons I am not using deque:
>
> Thanks for these.  Now we are getting somewhere.
>
> >   1) I want to use native lists, so that downstream methods can use
> > them as lists.
>
> It sounds like that could be fixed by making the deque API a proper
> superset of the list API.

That is probably a good idea.

> >   2) Lists are faster for accessing elements.
>
> It sounds like that could be fixed by optimizing deque somewhat.  Also,
> have you profiled your application to show that accessing list elements
> is actually using a significant fraction of its runtime and that it
> would be slowed down noticably by deque?  If not, it's a red herring.

I haven't profiled deque vs. list, but I think you are correct about
pop() possibly being a red herring.

It appears that the main bottleneck might still be the processing I do
on each line of text, which in my cases is regexes.

For really large lists, I suppose memmove() would eventually start to
become a bottleneck, but it's brutally fast when it just moves a
couple kilobytes of data around.

> >   3) I want to be able to insert elements into the middle of the list.
>
> I just checked, and was surprised to find that deque doesn't support
> this.  I'd say go ahead and file a feature request to add it to deque.
>

It might be a good thing to add just for consistency's sake. If
somebody first implements an algorithm with lists, then discovers it
has overhead relating to inserting/appending at the end of the list,
then the more deque behaves like a list, the more easily they could
switch their code over to deque. Not knowing much about deque's
internals, I assume its performance for insert() would be O(N), just
like list's, although maybe a tiny bit slower.

> >   4) I have no need for rotating elements.
>
> That's unpersuasive since you're advocating adding a feature to list
> that many others have no need for.  
>

To be precise, I wasn't really advocating for a new feature but an
internal optimization of a feature that already exists.

> > Adding a word or two to a list is an O(1) addition to a data structure
> > that takes O(N) memory to begin with.
>
> Yes, as mentioned, additive constants matter.
>
> > Another way of looking at it is that you would need to have 250 or so
> > lists in memory at the same time before the extra pointer was even
> > costing you kilobytes of memory.
>
> I've often run applications with millions of lists, maybe tens of
> millions.  Of course it would be 100's of millions if the machines were
> big enough.
>

I bet even in your application, the amount of memory consumed by the
PyListObjects themselves is greatly dwarfed by other objects, notably
the list elements themselves, not to mention any dictionaries that
your app uses.
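That claim is easy to sanity-check with sys.getsizeof, which reports
the size of the container itself but not of the objects it references
(a rough illustration with made-up data, not measurements from anyone's
actual application):

```python
# Compare the list object's own footprint with that of its elements.
import sys

elements = [str(i) * 10 for i in range(1000)]
container = sys.getsizeof(elements)                  # header + pointer array
contents = sum(sys.getsizeof(s) for s in elements)   # the strings themselves
print("container: %d bytes, contents: %d bytes" % (container, contents))
```

The pointer array dominates the list object's own size, and the
elements in turn dwarf the pointer array.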

> > My consumer laptop has 3027908k of memory.
>
> I thought the idea of buying bigger machines was to solve bigger
> problems, not to solve the same problems more wastefully.

Well, I am not trying to solve problems wastefully here. CPU cycles
are also scarce, so it seems wasteful to do an O(N) memmove that could
be avoided by storing an extra pointer per list. I also think that
encouraging the use of pop(0) would actually make many programs more
memory efficient, in the sense that you can garbage collect list
elements earlier.
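The garbage-collection point can be observed directly in CPython with a
weakref (a hypothetical illustration, relying on CPython's reference
counting):

```python
# Once pop(0) removes the list's reference, nothing keeps the object
# alive, so CPython reclaims it immediately.
import weakref

class Record(object):
    pass

items = [Record() for _ in range(3)]
ref = weakref.ref(items[0])
items.pop(0)          # drop the list's reference to the first Record
print(ref())          # the weakref is now dead
```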

Thanks for your patience in responding to me, despite the needlessly
abrasive tone of my earlier postings. I am coming around to this
thinking:

1) Summarize all this discussion and my lessons learned in some kind
of document. It does not have to be a PEP per se, but I could provide
a useful service to the community by listing pros/cons/etc.

2) I would still advocate for removing the warning against
list.pop(0) from the tutorial. I agree with Steven D'Aprano that docs
really
should avoid describing implementation details in many instances
(although I do not know what he thinks about this particular case). I
also think that the performance penalty for pop(0) is negligible for
most medium-sized programs. For large-sized programs where you really
want to swap in deque, I think most authors are beyond reading the
tutorial and are looking elsewhere for insight on Python data
structures.

3) I am going to try to implement the patch anyway for my own
edification.

4) I do think that there are ways that deque could be improved, but
it is not high on my priority list. I will try to mention it in the
PEP, though.


==============================================================================
TOPIC: site.py confusion
http://groups.google.com/group/comp.lang.python/t/03f435bfda8b5ac9?hl=en
==============================================================================

== 1 of 1 ==
Date: Mon, Jan 25 2010 2:27 pm
From: George Trojan


Inspired by the 'Default path for files' thread I tried to use
sitecustomize in my code. What puzzles me is that site.py's main()
is not executed. My sitecustomize.py is

    def main():
        print 'In Main()'
    main()

and the test program is

    import site
    #site.main()
    print 'Hi'

The output is

    $ python try.py
    Hi

When I uncomment the site.main() line the output is

    $ python try.py
    In Main()
    Hi

If I change import site to import sitecustomize the output is as above.
What gives?
Adding to the confusion, I found
http://code.activestate.com/recipes/552729/ which contradicts
http://docs.python.org/library/site.html

George

==============================================================================
TOPIC: Python, PIL and 16 bit per channel images
http://groups.google.com/group/comp.lang.python/t/c873b79f4ee4a780?hl=en
==============================================================================

== 1 of 1 ==
Date: Mon, Jan 25 2010 2:04 pm
From: Peter Chant


Does anyone know whether PIL can handle 16 bit per channel RGB images?
The PyPNG site (http://packages.python.org/pypng/ca.html) states that PIL
uses 8 bits per channel internally.

Thanks,

Pete


--
http://www.petezilla.co.uk

==============================================================================

You received this message because you are subscribed to the Google Groups "comp.lang.python"
group.

To post to this group, visit http://groups.google.com/group/comp.lang.python?hl=en

To unsubscribe from this group, send email to comp.lang.python+unsubscribe@googlegroups.com

To change the way you get mail from this group, visit:
http://groups.google.com/group/comp.lang.python/subscribe?hl=en

To report abuse, send email explaining the problem to abuse@googlegroups.com

==============================================================================
Google Groups: http://groups.google.com/?hl=en
