comp.lang.python - 25 new messages in 14 topics - digest
comp.lang.python
http://groups.google.com/group/comp.lang.python?hl=en
comp.lang.python@googlegroups.com
Today's topics:
* conditional import into global namespace - 3 messages, 3 authors
http://groups.google.com/group/comp.lang.python/t/8718bce88cea1cf5?hl=en
* os.fdopen() issue in Python 3.1? - 3 messages, 2 authors
http://groups.google.com/group/comp.lang.python/t/49a2ca4c1f9e7a59?hl=en
* case do problem - 1 message, 1 author
http://groups.google.com/group/comp.lang.python/t/d73f6f6a59d3bbfd?hl=en
* Writing an assembler in Python - 2 messages, 2 authors
http://groups.google.com/group/comp.lang.python/t/595a2db256807e85?hl=en
* freebsd and multiprocessing - 3 messages, 3 authors
http://groups.google.com/group/comp.lang.python/t/712e00ab1354e885?hl=en
* Email Script - 1 message, 1 author
http://groups.google.com/group/comp.lang.python/t/76c43e013743795b?hl=en
* Draft PEP on RSON configuration file format - 2 messages, 2 authors
http://groups.google.com/group/comp.lang.python/t/09ce33197b330e90?hl=en
* Queue peek? - 3 messages, 3 authors
http://groups.google.com/group/comp.lang.python/t/ba3cb62c81d4cb7a?hl=en
* cpan for python? - 1 message, 1 author
http://groups.google.com/group/comp.lang.python/t/ecd51ced8d24593e?hl=en
* Docstrings considered too complicated - 2 messages, 2 authors
http://groups.google.com/group/comp.lang.python/t/dea5c94f3d058e26?hl=en
* Adding to a module's __dict__? - 1 message, 1 author
http://groups.google.com/group/comp.lang.python/t/40837c4567d64745?hl=en
* Broken references in postings - 1 message, 1 author
http://groups.google.com/group/comp.lang.python/t/658a7033105e20f3?hl=en
* Multiprocessing problem - 1 message, 1 author
http://groups.google.com/group/comp.lang.python/t/3909e9c08cc8efe1?hl=en
* CGI, POST, and file uploads - 1 message, 1 author
http://groups.google.com/group/comp.lang.python/t/8a7752bd79d5f5d6?hl=en
==============================================================================
TOPIC: conditional import into global namespace
http://groups.google.com/group/comp.lang.python/t/8718bce88cea1cf5?hl=en
==============================================================================
== 1 of 3 ==
Date: Tues, Mar 2 2010 10:21 am
From: MRAB
mk wrote:
> Hello everyone,
>
> I have a class that is dependent on subprocess functionality. I would
> like to make it self-contained in the sense that it would import
> subprocess if it's not imported yet.
>
> What is the best way to proceed with this?
>
> I see a few possibilities:
>
> 1. do a class level import, like:
>
> class TimeSync(object):
>
> import subprocess
>
>
> 2. do an import in __init__, which is worse because it's run every time an
> instance is created:
>
> def __init__(self, shiftsec, ntpserver):
> import subprocess
>
>
> Both of those methods have a disadvantage in this context, though: they
> create 'subprocess' namespace in a class or instance, respectively.
>
> Is there any way to make it a global import?
>
The simplest solution is just import it at the top of the module.
== 2 of 3 ==
Date: Tues, Mar 2 2010 10:31 am
From: Jerry Hill
On Tue, Mar 2, 2010 at 12:46 PM, mk <mrkafk@gmail.com> wrote:
> Hello everyone,
>
> I have a class that is dependent on subprocess functionality. I would like
> to make it self-contained in the sense that it would import subprocess if
> it's not imported yet.
>
> What is the best way to proceed with this?
Just import subprocess at the top of your module. If subprocess
hasn't been imported yet, it will be imported when your module is
loaded. If it's already been imported, your module will use the
cached version that's already been imported.
In other words, it sounds like Python already does what you want. You
don't need to do anything special.
--
Jerry
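(A minimal demonstration of the caching described above; the second import
statement just rebinds a name to the already-loaded module object:)
import sys
import subprocess                     # first import: loads and caches the module
print('subprocess' in sys.modules)    # True
import subprocess as sp               # later imports reuse the cache
print(sp is subprocess)               # True -- the very same module object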
== 3 of 3 ==
Date: Tues, Mar 2 2010 12:41 pm
From: mk
Jerry Hill wrote:
> Just import subprocess at the top of your module. If subprocess
> hasn't been imported yet, it will be imported when your module is
> loaded. If it's already been imported, your module will use the
> cached version that's already been imported.
>
> In other words, it sounds like Python already does what you want. You
> don't need to do anything special.
Oh, thanks!
Hmm, it's different from dealing with packages, I guess -- IIRC, with
packages only the code in the package's __init__.py is executed?
Regards,
mk
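(Roughly, yes: importing a package runs its __init__.py once, and importing a
submodule runs that submodule's code once as well; after that, both come out
of the module cache. A sketch with a hypothetical package pkg:)
# pkg/__init__.py
print('pkg/__init__.py runs on the first import of pkg')
# pkg/mod.py
print('pkg/mod.py runs on the first import of pkg.mod')
# main.py
import pkg        # runs pkg/__init__.py
import pkg.mod    # runs pkg/mod.py (pkg itself is already cached)
import pkg.mod    # nothing runs -- both come from sys.modules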
==============================================================================
TOPIC: os.fdopen() issue in Python 3.1?
http://groups.google.com/group/comp.lang.python/t/49a2ca4c1f9e7a59?hl=en
==============================================================================
== 1 of 3 ==
Date: Tues, Mar 2 2010 10:25 am
From: Terry Reedy
On 3/2/2010 9:24 AM, Albert Hopkins wrote:
> I have a snippet of code that looks like this:
>
> pid, fd = os.forkpty()
> if pid == 0:
> subprocess.call(args)
> else:
> input = os.fdopen(fd).read()
> ...
>
>
> This seems to work fine for CPython 2.5 and 2.6 on my Linux system.
To get help, or report a bug, for something like this, be as specific as
possible. 'Linux' may be too generic.
> However, with CPython 3.1 I get:
>
> input = os.fdopen(fd).read()
> IOError: [Errno 5] Input/output error
>
> Is there something wrong in Python 3.1? Is this the correct way to do
> this (run a process in a pseudo-tty and read its output) or is there
> another way I should/could be doing this?
No idea, however, the first thing I would do is call the .fdopen and
.read methods separately (on separate lines) to isolate which is raising
the error.
tjr
== 2 of 3 ==
Date: Tues, Mar 2 2010 12:22 pm
From: Albert Hopkins
On Tue, 2010-03-02 at 13:25 -0500, Terry Reedy wrote:
> To get help, or report a bug, for something like this, be as specific as
> possible. 'Linux' may be too generic.
This is Python on Gentoo Linux x64 with kernel 2.6.33.
>
> > However, with CPython 3.1 I get:
> >
> > input = os.fdopen(fd).read()
> > IOError: [Errno 5] Input/output error
> >
> > Is there something wrong in Python 3.1? Is this the correct way to do
> > this (run a process in a pseudo-tty and read its output) or is there
> > another way I should/could be doing this?
>
> No idea, however, the first thing I would do is call the .fdopen and
> .read methods separately (on separate lines) to isolate which is raising
> the error.
The exception occurs on the read() method.
== 3 of 3 ==
Date: Tues, Mar 2 2010 12:22 pm
From: Albert Hopkins
On Tue, 2010-03-02 at 17:32 +0000, MRAB wrote:
> The documentation also mentions the 'pty' module. Have you tried that
> instead?
I haven't but I'll give it a try. Thanks.
-a
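(A rough sketch of the pty-based approach, assuming args is the command list
from the original snippet; on Linux the final read on the master side can
fail with EIO once the child exits, hence the OSError guard:)
import os
import pty
import subprocess
master, slave = pty.openpty()
proc = subprocess.Popen(args, stdout=slave, stderr=slave, close_fds=True)
os.close(slave)                  # the parent keeps only the master end
chunks = []
try:
    while True:
        data = os.read(master, 1024)
        if not data:
            break
        chunks.append(data)
except OSError:                  # EOF is reported as EIO on some platforms
    pass
proc.wait()
output = b''.join(chunks)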
==============================================================================
TOPIC: case do problem
http://groups.google.com/group/comp.lang.python/t/d73f6f6a59d3bbfd?hl=en
==============================================================================
== 1 of 1 ==
Date: Tues, Mar 2 2010 10:26 am
From: MRAB
Tracubik wrote:
> hi, i have to convert this code from Pascal:
>
> iterations=0;
> count=0;
> REPEAT;
> iterations = iterations+1;
> ...
> IF (genericCondition) THEN count=count+1;
> ...
> CASE count OF:
> 1: m = 1
> 2: m = 10
> 3: m = 100
> UNTIL count = 4 OR iterations = 20
>
> i do something like this:
>
> iterations = 0
> count = 0
>
> m_Switch = (1,10,100)
>
> while True:
>     iterations += 1
>     ...
>     if (genericCondition):
>         count += 1
>     ...
>     try:
>         m = m_Switch[count-1]
>     except:
>         pass
>     if count == 4 or iterations == 20:
>         break
>
> the problem is that when count == 4, m_Switch[4-1] has no value, so i use
> the try..except.
>
> Is there a better solution to this problem? And, generally
> speaking, does the try..except block slow down the execution of the
> program or not?
>
Use a dict:
m_Switch = {1: 1, 2: 10, 3: 100}
and then catch the KeyError.
Don't use a bare 'except', catch the specific exception you want to
catch, and don't worry about the speed unless you discover that it's a
real problem.
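(A sketch of the loop with that change; generic_condition() is a hypothetical
stand-in for the OP's unnamed test:)
m_switch = {1: 1, 2: 10, 3: 100}
iterations = 0
count = 0
while True:
    iterations += 1
    if generic_condition():          # hypothetical predicate
        count += 1
    try:
        m = m_switch[count]
    except KeyError:
        pass                         # count is 0 or 4: keep the previous m
    if count == 4 or iterations == 20:
        break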
==============================================================================
TOPIC: Writing an assembler in Python
http://groups.google.com/group/comp.lang.python/t/595a2db256807e85?hl=en
==============================================================================
== 1 of 2 ==
Date: Tues, Mar 2 2010 10:48 am
From: Albert van der Horst
In article <Xns9D28186AF890Cfdnbgui7uhu5h8hrnuio@127.0.0.1>,
Giorgos Tzampanakis <gt67@hw.ac.uk> wrote:
>I'm implementing a CPU that will run on an FPGA. I want to have a
>(dead) simple assembler that will generate the machine code for
>me. I want to use Python for that. Are there any libraries that
>can help me with the parsing of the assembly code?
I have a Pentium assembler in Perl on my website below
(postit-fixup principle).
You could borrow some ideas, if you can read Perl.
The main purpose is to have a very simple and straightforward
assembler at the expense of ease of use.
Groetjes Albert
--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst
== 2 of 2 ==
Date: Tues, Mar 2 2010 1:26 pm
From: Holger Mueller
Giorgos Tzampanakis <gt67@hw.ac.uk> wrote:
> I'm implementing a CPU that will run on an FPGA. I want to have a
> (dead) simple assembler that will generate the machine code for
> me. I want to use Python for that. Are there any libraries that
> can help me with the parsing of the assembly code?
Why coding assembler if you can type in hexdumps...
scnr
Holger
--
http://www.kati-und-holger.de/holgersblog.php
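(For what it's worth, a dead-simple assembler doesn't really need a parsing
library; str.split covers it. A minimal two-pass sketch -- the
three-instruction ISA and the word layout are entirely made up:)
OPCODES = {'NOP': 0x0, 'LOAD': 0x1, 'JMP': 0x2}    # hypothetical encoding
def clean(line):
    return line.split(';')[0].strip()              # drop comments and blanks
def assemble(lines):
    # Pass 1: assign an address to every label.
    labels, addr = {}, 0
    for line in map(clean, lines):
        if line.endswith(':'):
            labels[line[:-1]] = addr
        elif line:
            addr += 1
    # Pass 2: emit one machine word per instruction.
    words = []
    for line in map(clean, lines):
        if not line or line.endswith(':'):
            continue
        parts = line.split()
        word = OPCODES[parts[0].upper()] << 8      # opcode in the high byte
        if len(parts) > 1:
            arg = parts[1]
            word |= int(arg) if arg.isdigit() else labels[arg]
        words.append(word)
    return words
print(assemble(['start:', 'LOAD 5', 'JMP start  ; loop forever']))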
==============================================================================
TOPIC: freebsd and multiprocessing
http://groups.google.com/group/comp.lang.python/t/712e00ab1354e885?hl=en
==============================================================================
== 1 of 3 ==
Date: Tues, Mar 2 2010 10:31 am
From: Tim Arnold
On Mar 2, 12:59 pm, Tim Arnold <a_j...@bellsouth.net> wrote:
> On Mar 2, 11:52 am, Philip Semanchuk <phi...@semanchuk.com> wrote:
> > On Mar 2, 2010, at 11:31 AM, Tim Arnold wrote:
>
> > > Hi,
> > > I'm intending to use multiprocessing on a freebsd machine (6.3
> > > release, quad core, 8cpus, amd64). I see in the doc that on this
> > > platform I can't use synchronize:
>
> > > ImportError: This platform lacks a functioning sem_open
> > > implementation, therefore, the required synchronization primitives
> > > needed will not function, see issue 3770.
>
> > > As far as I can tell, I have no need to synchronize the processes--I
> > > have several processes run separately and I need to know when they're
> > > all finished; there's no communication between them and each owns its
> > > own log file for output.
>
> > > Is anyone using multiprocessing on FreeBSD and run into any other
> > > gotchas?
>
> > Hi Tim,
> > I don't use multiprocessing but I've written two low-level IPC
> > packages, one for SysV IPC and the other for POSIX IPC.
>
> > I think that multiprocessing prefers POSIX IPC (which is where
> > sem_open() comes from). I don't know what it uses if that's not
> > available, but SysV IPC seems a likely alternative. I must emphasize,
> > however, that that's a guess on my part.
>
> > FreeBSD didn't have POSIX IPC support until 7.0, and that was sort of
> > broken until 7.2. As it happens, I was testing my POSIX IPC code
> > against 7.2 last night and it works just fine.
>
> > SysV IPC works under FreeBSD 6 (and perhaps earlier versions; 6 is the
> > oldest I've tested). ISTR that by default each message queue is
> > limited to 2048 bytes in total size. 'sysctl kern.ipc' can probably
> > tell you that and may even let you change it. Other than that I can't
> > think of any SysV limitations that might bite you.
>
> > HTH
> > Philip
>
> Hi Philip,
> Thanks for that information. I wish I could upgrade the machine to
> 7.2! alas, out of my power. I get the following results from sysctl:
> % sysctl kern.ipc | grep msg
> kern.ipc.msgseg: 2048
> kern.ipc.msgssz: 8
> kern.ipc.msgtql: 40
> kern.ipc.msgmnb: 2048
> kern.ipc.msgmni: 40
> kern.ipc.msgmax: 16384
>
> I'll write some test programs using multiprocessing and see how they
> go before committing to rewrite my current code. I've also been
> looking at 'parallel python' although it may have the same issues:
> http://www.parallelpython.com/
>
> thanks again,
> --Tim
Well that didn't work out well. I can't import either Queue or Pool
from multiprocessing, so I'm back to the drawing board. I'll see now
how parallel python does on freebsd.
--Tim
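(Since the use case needs no synchronization at all, it may be enough to
sidestep Queue and Pool entirely: multiprocessing.Process itself shouldn't
require sem_open, though that's worth verifying on 6.3. A sketch, with
run_one_report standing in for the real work:)
from multiprocessing import Process
def run_one_report(n):
    # Each worker owns its own log file, as in the original description.
    log = open('report-%d.log' % n, 'w')
    log.write('report %d done\n' % n)
    log.close()
if __name__ == '__main__':
    procs = [Process(target=run_one_report, args=(n,)) for n in range(8)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()    # no queues, no locks: just wait until all have finished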
== 2 of 3 ==
Date: Tues, Mar 2 2010 11:28 am
From: Philip Semanchuk
On Mar 2, 2010, at 1:31 PM, Tim Arnold wrote:
> On Mar 2, 12:59 pm, Tim Arnold <a_j...@bellsouth.net> wrote:
>> On Mar 2, 11:52 am, Philip Semanchuk <phi...@semanchuk.com> wrote:
>>> On Mar 2, 2010, at 11:31 AM, Tim Arnold wrote:
>>
>>>> Hi,
>>>> I'm intending to use multiprocessing on a freebsd machine (6.3
>>>> release, quad core, 8cpus, amd64). I see in the doc that on this
>>>> platform I can't use synchronize:
>>
>>>> ImportError: This platform lacks a functioning sem_open
>>>> implementation, therefore, the required synchronization primitives
>>>> needed will not function, see issue 3770.
>>
>>>> As far as I can tell, I have no need to synchronize the
>>>> processes--I
>>>> have several processes run separately and I need to know when
>>>> they're
>>>> all finished; there's no communication between them and each owns
>>>> its
>>>> own log file for output.
>>
>>>> Is anyone using multiprocessing on FreeBSD and run into any other
>>>> gotchas?
>>
>>> Hi Tim,
>>> I don't use multiprocessing but I've written two low-level IPC
>>> packages, one for SysV IPC and the other for POSIX IPC.
>>
>>> I think that multiprocessing prefers POSIX IPC (which is where
>>> sem_open() comes from). I don't know what it uses if that's not
>>> available, but SysV IPC seems a likely alternative. I must
>>> emphasize,
>>> however, that that's a guess on my part.
>>
>>> FreeBSD didn't have POSIX IPC support until 7.0, and that was sort
>>> of
>>> broken until 7.2. As it happens, I was testing my POSIX IPC code
>>> against 7.2 last night and it works just fine.
>>
>>> SysV IPC works under FreeBSD 6 (and perhaps earlier versions; 6 is
>>> the
>>> oldest I've tested). ISTR that by default each message queue is
>>> limited to 2048 bytes in total size. 'sysctl kern.ipc' can probably
>>> tell you that and may even let you change it. Other than that I
>>> can't
>>> think of any SysV limitations that might bite you.
>>
>>> HTH
>>> Philip
>>
>> Hi Philip,
>> Thanks for that information. I wish I could upgrade the machine to
>> 7.2! alas, out of my power. I get the following results from sysctl:
>> % sysctl kern.ipc | grep msg
>> kern.ipc.msgseg: 2048
>> kern.ipc.msgssz: 8
>> kern.ipc.msgtql: 40
>> kern.ipc.msgmnb: 2048
>> kern.ipc.msgmni: 40
>> kern.ipc.msgmax: 16384
>>
>> I'll write some test programs using multiprocessing and see how they
>> go before committing to rewrite my current code. I've also been
>> looking at 'parallel python' although it may have the same issues:
>> http://www.parallelpython.com/
>>
>> thanks again,
>> --Tim
>
> Well that didn't work out well. I can't import either Queue or Pool
> from multiprocessing, so I'm back to the drawing board. I'll see now
> how parallel python does on freebsd.
Sorry to hear that didn't work for you. Should you need to get down to
the nuts & bolts level, my module for SysV IPC is here:
http://semanchuk.com/philip/sysv_ipc/
Good luck with Parallel Python,
Philip
== 3 of 3 ==
Date: Tues, Mar 2 2010 12:57 pm
From: Pop User
On 3/2/2010 12:59 PM, Tim Arnold wrote:
>
> I'll write some test programs using multiprocessing and see how they
> go before committing to rewrite my current code. I've also been
> looking at 'parallel python' although it may have the same issues.
> http://www.parallelpython.com/
>
parallelpython works for me on FreeBSD 6.2.
==============================================================================
TOPIC: Email Script
http://groups.google.com/group/comp.lang.python/t/76c43e013743795b?hl=en
==============================================================================
== 1 of 1 ==
Date: Tues, Mar 2 2010 10:33 am
From: mk
Where does the class Email come from? There's no info in your mail on this.
Regards,
mk
==============================================================================
TOPIC: Draft PEP on RSON configuration file format
http://groups.google.com/group/comp.lang.python/t/09ce33197b330e90?hl=en
==============================================================================
== 1 of 2 ==
Date: Tues, Mar 2 2010 10:39 am
From: Robert Kern
On 2010-03-02 11:59 AM, Terry Reedy wrote:
> On 3/2/2010 11:34 AM, Robert Kern wrote:
>> On 2010-03-01 22:55 PM, Terry Reedy wrote:
>>> On 3/1/2010 7:56 PM, Patrick Maupin wrote:
>>>> On Mar 1, 5:57 pm, Erik Max Francis<m...@alcyone.com> wrote:
>>>>> Patrick Maupin wrote:
>>>>> This not only seriously stretching the meaning of the term "superset"
>>>>> (as Python is most definitely not even remotely a superset of JSON),
>>>>> but
>>>>
>>>> Well, you are entitled to that opinion, but seriously, if I take valid
>>>> JSON, replace unquoted true with True, unquoted false with False,
>>>> replace unquoted null with None, and take the quoted strings and
>>>> replace occurrences of \uXXXX with the appropriate unicode, then I do,
>>>> in fact, have valid Python. But don't take my word for it -- try it
>>>> out!
>>>
> To me this is so strained that I do not see why you are arguing the
>>> point. So what? The resulting Python 'program' will be equivalent, I
>>> believe, to 'pass'. Ie, construct objects and then discard them with no
>>> computation or output.
>>
>> Not if you eval() rather than exec().
>
> >>> eval(1)
>
> creates an object and discards it, with a net result of 'pass'.
> What do you think I am missing?
x = eval('1')
>> It's reasonably well-accepted that
>> JSON is very close to being a subset of Python's expression syntax with
>> just a few modifications.
>
> It is specifically JavaScript Object Notation, which is very similar to
> a subset of Python's object notation (number and string literals and
> list and dict displays (but not set displays), and three named
> constants). Without operators, it barely qualifies, to me, even as
> 'expression syntax'.
Literal expression syntax, then.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
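(A toy check of the claim being argued over: after the token substitutions
Patrick lists, a JSON document does evaluate as a Python literal. Blind
str.replace is only safe while the bare words don't also occur inside string
values, so this is an illustration, not a robust converter:)
import ast
import json
text = '{"a": [1, 2.5, "x"], "flag": true, "nothing": null}'
pyish = text.replace('true', 'True').replace('false', 'False') \
            .replace('null', 'None')
print(ast.literal_eval(pyish) == json.loads(text))    # True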
== 2 of 2 ==
Date: Tues, Mar 2 2010 11:30 am
From: Patrick Maupin
On Mar 2, 11:59 am, Terry Reedy <tjre...@udel.edu> wrote:
> To me, comparing object notation with programming language is not
> helpful to the OP's purpose.
Yes, I agree, it was a distraction. I fell into the trap of
responding to the ludicrous claim that "if X is a superset of Y, then
X cannot possibly look better than Y" (a claim made by multiple people
all thinking it was clever) by showing that Y has other supersets that
do in fact look better than Y. In doing this, I made the mistake of
choosing a superset of an analogue to Y, rather than to Y itself.
When called out on it, I showed that, in fact, the actual X that IS a
superset of Y can be used in a way that looks better. However, you
are right that JSON is such a small subset of JS that it's really
pretty ridiculous to even compare them, but that still makes the point
that the original argument I was trying to refute is completely
specious. In retrospect, though, I should have chosen a better way to
make that point, because I let myself get caught up in making and then
defending a flippant statement that I don't really care about one way
or the other.
> His main claim is that JSON can be usefully
> extended but that YAML is too much, so that perhaps he, with help, can
> find a 'sweet spot' in between.
An excellent summary of my position.
Thanks,
Pat
==============================================================================
TOPIC: Queue peek?
http://groups.google.com/group/comp.lang.python/t/ba3cb62c81d4cb7a?hl=en
==============================================================================
== 1 of 3 ==
Date: Tues, Mar 2 2010 11:02 am
From: Veloz
On Mar 2, 1:18 pm, Raymond Hettinger <pyt...@rcn.com> wrote:
> On Mar 2, 8:29 am, Veloz <michaelve...@gmail.com> wrote:
>
> > Hi all
> > I'm looking for a queue that I can use with multiprocessing, which has
> > a peek method.
>
> > I've seen some discussion about queue.peek but don't see anything in
> > the docs about it.
>
> > Does python have a queue class with peek semantics?
>
> Am curious about your use case? Why peek at something
> that could be gone by the time you want to use it.
>
> val = q.peek()
> if something_i_want(val):
>     v2 = q.get()   # this could be different than val
>
> Wouldn't it be better to just get() the value and put it back if you don't
> need it?
>
> val = q.get()
> if not something_i_want(val):
>     q.put(val)
>
> Raymond
Yeah, I hear you. Perhaps queue is not the best solution. My highest
level use case is this: The user visits a web page (my app is a
Pylons app) and requests a "report" be created. The report takes too
long to create and display on the spot, so the user expects to visit
some url "later" and see if the specific report has completed, and if
so, have it returned to them.
At a lower level, I'm thinking of using some process workers to create
these reports in the background; there'd be a request queue (into
which requests for reports would go, each with an ID) and a completion
queue, into which the workers would write an entry when a report was
created, along with an ID matching the original request.
The "peek" parts comes in when the user comes back later to see if
their report has done. That is, in my page controller logic, I'd like
to look through the complete queue and see if the specific report has
been finished (I could tell by matching up the ID of the original
request to the ID in the completed queue). If there was an item in the
queue matching the ID, it would be removed.
It's since occurred to me that perhaps a queue is not the best way to
handle the completions. (We're ignoring the file system as a solution
for the time being, and focusing on in-memory structures). I'm
wondering now if a simple array of completed items wouldn't be better.
Of course, all the access to the array would have to be thread/process-
proof. As you pointed out, for example, multi-part operations such as
"is such-and-such an ID in the list? If so, remove it and return in"
would have to be treated atomically to avoid concurrency issues.
Any thoughts on this design approach are welcomed :-)
Michael
== 2 of 3 ==
Date: Tues, Mar 2 2010 11:44 am
From: MRAB
Veloz wrote:
> On Mar 2, 1:18 pm, Raymond Hettinger <pyt...@rcn.com> wrote:
>> On Mar 2, 8:29 am, Veloz <michaelve...@gmail.com> wrote:
>>
>>> Hi all
>>> I'm looking for a queue that I can use with multiprocessing, which has
>>> a peek method.
>>> I've seen some discussion about queue.peek but don't see anything in
>>> the docs about it.
>>> Does python have a queue class with peek semantics?
>> Am curious about your use case? Why peek at something
>> that could be gone by the time you want to use it.
>>
>> val = q.peek()
>> if something_i_want(val):
>>     v2 = q.get() # this could be different than val
>>
>> Wouldn't it be better to just get() the value and put it back if you don't
>> need it?
>>
>> val = q.get()
>> if not something_i_want(val):
>>     q.put(val)
>>
>> Raymond
>
> Yeah, I hear you. Perhaps queue is not the best solution. My highest
> level use case is this: The user visits a web page (my app is a
> Pylons app) and requests a "report" be created. The report takes too
> long to create and display on the spot, so the user expects to visit
> some url "later" and see if the specific report has completed, and if
> so, have it returned to them.
>
> At a lower level, I'm thinking of using some process workers to create
> these reports in the background; there'd be a request queue (into
> which requests for reports would go, each with an ID) and a completion
> queue, into which the workers would write an entry when a report was
> created, along with an ID matching the original request.
>
> The "peek" parts comes in when the user comes back later to see if
> their report has done. That is, in my page controller logic, I'd like
> to look through the complete queue and see if the specific report has
> been finished (I could tell by matching up the ID of the original
> request to the ID in the completed queue). If there was an item in the
> queue matching the ID, it would be removed.
>
> It's since occurred to me that perhaps a queue is not the best way to
> handle the completions. (We're ignoring the file system as a solution
> for the time being, and focusing on in-memory structures). I'm
> wondering now if a simple array of completed items wouldn't be better.
> Of course, all the access to the array would have to be thread/process-
> proof. As you pointed out, for example, multi-part operations such as
> "is such-and-such an ID in the list? If so, remove it and return in"
> would have to be treated atomically to avoid concurrency issues.
>
> Any thoughts on this design approach are welcomed :-)
>
A set of completed reports, or a dict with the ID as the key? The
advantage of a dict is that the value could contain several bits of
information, such as when it was completed, the status (OK or failed),
etc. You might want to wrap it in a class with locks (mutexes) to ensure
it's threadsafe.
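(A minimal sketch of such a wrapper; the method and field names are
illustrative only:)
import threading
class CompletedReports(object):
    """Completed reports keyed by request ID, safe across threads."""
    def __init__(self):
        self._lock = threading.Lock()
        self._reports = {}
    def mark_done(self, request_id, report, status='OK'):
        with self._lock:
            self._reports[request_id] = (report, status)
    def pop_if_done(self, request_id):
        """Atomic check-and-remove: (report, status), or None if pending."""
        with self._lock:
            return self._reports.pop(request_id, None)
Note that this only works if the workers are threads, or if the dict lives in
the web process and the workers report back some other way; separate worker
processes don't share it, in which case something like
multiprocessing.Manager().dict() is the closer analogue.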
== 3 of 3 ==
Date: Tues, Mar 2 2010 11:58 am
From: "Martin P. Hellwig"
On 03/02/10 19:44, MRAB wrote:
<cut>
> information, such as when it was completed, the status (OK or failed),
> etc. You might want to wrap it in a class with locks (mutexes) to ensure
> it's threadsafe.
What actually happens if multiple threads write to a shared dictionary
at the same time (not using the same key)?
I would think that if the hashing part of the dictionary has some sort
of serialization (please forgive me if I misuse a term) it should 'just
work'(tm)?
--
mph
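(In CPython a single store to a dict is effectively atomic for builtin key
types, since it completes under the GIL, so concurrent writes to distinct
keys do 'just work' there; it's compound check-then-act sequences that need
a lock. A small demonstration:)
import threading
d = {}
def writer(n):
    d['key-%d' % n] = n     # one store, distinct builtin key: no lost updates
threads = [threading.Thread(target=writer, args=(n,)) for n in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(d))               # 8
# By contrast, "if k not in d: d[k] = expensive()" is two steps and can race.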
==============================================================================
TOPIC: cpan for python?
http://groups.google.com/group/comp.lang.python/t/ecd51ced8d24593e?hl=en
==============================================================================
== 1 of 1 ==
Date: Tues, Mar 2 2010 1:14 pm
From: R Fritz
On 2010-02-28 06:31:56 -0800, ssteinerX@gmail.com said:
>
> On Feb 28, 2010, at 9:28 AM, Someone Something wrote:
>
>> Is there something like cpan for python? I like python's syntax, but
>> I use perl because of cpan and the tremendous modules that it has. --
>
> Please search the mailing list archives.
>
> This subject has been discussed to absolute death.
But somehow the question is not in the FAQ, though the answer is. See:
<http://www.python.org/doc/faq/library/#how-do-i-find-a-module-or-application-to-perform-task-x>
--
Randolph Fritz
design machine group, architecture department, university of washington
rfritz@u.washington.edu -or- rfritz333@gmail.com
==============================================================================
TOPIC: Docstrings considered too complicated
http://groups.google.com/group/comp.lang.python/t/dea5c94f3d058e26?hl=en
==============================================================================
== 1 of 2 ==
Date: Tues, Mar 2 2010 1:22 pm
From: Ben Finney
Andreas Waldenburger <usenot@geekmail.INVALID> writes:
> Don't get me wrong; our whole system is more fragile than I find
> comfortable. But I guess getting 10ish different parties around the
> globe to work in complete unison is quite a feat, and I'm surprised it
> even works as it is. But it does, and I'm glad we don't have to
> micromanage other people's code.
It's rather odd that you think of "require general quality standards,
independently measurable and testable" as "micromanaging".
I guess that when even the *customers* will resist implementing such
quality expectations, it's little surprise that the vendors continue to
push out such shoddy work on their customers.
--
\ "Why am I an atheist? I ask you: Why is anybody not an atheist? |
`\ Everyone starts out being an atheist." —Andy Rooney, _Boston |
_o__) Globe_ 1982-05-30 |
Ben Finney
== 2 of 2 ==
Date: Tues, Mar 2 2010 1:51 pm
From: Andreas Waldenburger
On Tue, 02 Mar 2010 19:05:25 +0100 Jean-Michel Pichavant
<jeanmichel@sequans.com> wrote:
> Andreas Waldenburger wrote:
> >
> > I had hoped that everyone just read it, went like "Oh geez.",
> > smiled it off with a hint of lesson learned and got back to
> > whatever it was they were doing. Alas, I was wrong ... and I'm
> > sorry.
> >
> There's something wrong saying that stupid people write working code
> that totally satisfies your needs. Don't you agree ? ;-)
>
No, in fact I don't.
It works. They are supposed to make it work. And that's what they do.
Whether or not they put their docstrings in the place they should does
not change that their code works.
Sorry, you guys drained all the funny out of me.
/W
--
INVALID? DE!
==============================================================================
TOPIC: Adding to a module's __dict__?
http://groups.google.com/group/comp.lang.python/t/40837c4567d64745?hl=en
==============================================================================
== 1 of 1 ==
Date: Tues, Mar 2 2010 1:23 pm
From: Dave Angel
Terry Reedy wrote:
> On 3/2/2010 11:18 AM, John Posner wrote:
>> On 3/2/2010 10:19 AM, Roy Smith wrote:
>>>
>>> Somewhat sadly, in my case, I can't even machine process the header
>>> file. I don't, strictly speaking, have a header file. What I have is
>>> a PDF which documents what's in the header file, and I'm manually re-
>>> typing the data out of that. Sigh.
>
> There are Python modules to read/write pdf.
>
>> Here's an idea, perhaps too obvious, to minimize your keystrokes:
>>
>> 1. Create a text file with the essential data:
>>
>> XYZ_FOO 0 The foo property
>> XYZ_BAR 1 The bar property
>> XYZ_BAZ 2 reserved for future use
>>
>> 2. Use a Python script to convert this into the desired code:
>>
>> declare('XYZ_FOO', 0, "The foo property")
>> declare('XYZ_BAR', 1, "The bar property")
>> declare('XYZ_BAZ', 2, "reserved for future use")
>>
>> Note:
>>
>> >>> s
>> 'XYZ_FOO 0 The foo property'
>> >>> s.split(None, 2)
>> ['XYZ_FOO', '0', 'The foo property']
>
> Given that the set of triples is constant, I would think about having the
> Python script do the computation just once, instead of with every
> import. In other words, the script should *call* the declare function
> and then write out the resulting set of dicts either to a .py or
> pickle file.
>
> tjr
>
>
There have been lots of good suggestions in this thread. Let me give
you my take:
1) You don't want to clutter up the global dictionary of your main
processing module. There's too much risk of getting a collision, either
with the functions you write, or with some builtin. That's especially
true if you might later want to use a later version of that pdf file.
Easiest solution for your purposes: make it a separate module. Give it
a name like defines, and in your main module, you use
import defines
print defines.XYZ_FOO
And if that's too much typing, you can do:
import defines as I
print I.XYZ_FOO
Next problem is to parse that pdf file. One solution is to use a pdf
library. But another is to copy/paste it into a text file, and parse
that. Assuming it'll paste, and that the lines you want are
recognizable (eg. they all begin as #define), the parsing should be
pretty easy. The result of the parsing is a file, defines.py.
Now, if the pdf ever changes, rerun your parsing program. But don't run
it every time your application runs.
If the pdf file were changing often, then I'd have a different answer:
2) define an empty class, just as a placeholder, and make one instance I.
Populate the instance with setattr() calls, but access it with direct
attribute syntax, same as our first example.
DaveA
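(A sketch of the one-shot generator step, assuming the pasted text file has
the three-column layout shown earlier; rerun it only when the PDF changes:)
# make_defines.py -- regenerate defines.py from the pasted text
with open('defines.txt') as src:
    with open('defines.py', 'w') as out:
        for line in src:
            line = line.strip()
            if not line:
                continue
            name, value, comment = line.split(None, 2)
            out.write('%s = %s  # %s\n' % (name, value, comment))
After that, import defines gives defines.XYZ_FOO as an ordinary attribute,
with no exec or __dict__ manipulation at import time.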
==============================================================================
TOPIC: Broken references in postings
http://groups.google.com/group/comp.lang.python/t/658a7033105e20f3?hl=en
==============================================================================
== 1 of 1 ==
Date: Tues, Mar 2 2010 1:25 pm
From: Ben Finney
Grant Edwards <invalid@invalid.invalid> writes:
> Or is it just individual news/mail clients that are broken?
This, I believe. Many clients mess up the References and In-Reply-To
fields, in the face of many years of complaint to the vendors.
Most free-software clients get it right, AFAICT.
--
\ "Contentment is a pearl of great price, and whosoever procures |
`\ it at the expense of ten thousand desires makes a wise and |
_o__) happy purchase." —J. Balguy |
Ben Finney
==============================================================================
TOPIC: Multiprocessing problem
http://groups.google.com/group/comp.lang.python/t/3909e9c08cc8efe1?hl=en
==============================================================================
== 1 of 1 ==
Date: Tues, Mar 2 2010 12:59 pm
From: Matt Chaput
Hi,
I'm having a problem with the multiprocessing package.
I'm trying to use a simple pattern where a supervisor object starts a
bunch of worker processes, instantiating them with two queues (a job
queue for tasks to complete and a results queue for the results). The
supervisor puts all the jobs in the "job" queue, then join()s the
workers, and then pulls all the completed results off the "results" queue.
(I don't think I can just use something like Pool.imap_unordered for
this because the workers need to be objects with state.)
Here's a simplified example:
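(The posted example was lost from the digest; what follows is a minimal
sketch reconstructed from the description above -- supervisor, job queue,
results queue, join() before draining -- with all names invented:)
import multiprocessing
import time
def worker(jobs, results):
    for job in iter(jobs.get, None):    # None is the stop sentinel
        time.sleep(0.01)                # stand-in for real work
        results.put((job, job * job))
if __name__ == '__main__':
    jobs = multiprocessing.Queue()
    results = multiprocessing.Queue()
    procs = [multiprocessing.Process(target=worker, args=(jobs, results))
             for _ in range(4)]
    for p in procs:
        p.start()
    for n in range(100):
        jobs.put(n)
    for p in procs:
        jobs.put(None)                  # one sentinel per worker
    for p in procs:
        p.join()                        # supervisor blocks here...
    while not results.empty():
        results.get()                   # ...before draining the results
For what it's worth, the multiprocessing programming guidelines warn that a
process which has put items on a queue will not terminate until the buffered
items are flushed, so join()ing the workers before draining the results
queue can deadlock in exactly this way; draining first may be the fix.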
The problem is that seemingly randomly, but almost always, the worker
processes will deadlock at some point and stop working before they
complete. This will leave the whole program stalled forever. This seems
more likely the more work each worker does (to the point where adding
the time.sleep(0.01) as seen in the example code above guarantees it).
The problem seems to occur on both Windows and Mac OS X.
I've tried many random variations of the code (e.g. using JoinableQueue,
calling cancel_join_thread() on one or both queues even though I have no
idea what it does, etc.) but keep having the problem.
Am I just using multiprocessing wrong? Is this a bug? Any advice?
Thanks,
Matt
==============================================================================
TOPIC: CGI, POST, and file uploads
http://groups.google.com/group/comp.lang.python/t/8a7752bd79d5f5d6?hl=en
==============================================================================
== 1 of 1 ==
Date: Tues, Mar 2 2010 1:48 pm
From: Mitchell L Model
Can someone tell me how to upload the contents of a (relatively small)
file using an HTML form and CGI in Python 3.1? As far as I can tell
from a half-day of experimenting, browsing, and searching the Python
issue tracker, this is broken. Very simple example:
<html>
<head>
</head>
<body>
<form action="http://localhost:9000/cgi/cgi-test.py"
enctype="multipart/form-data"
method="post">
<label>File</label><br/>
<input type="file" name="contents"> <br/>
<button type="submit" >Submit</button> <br/>
</form>
</body>
</html>
cgi-test.py:
#!/usr/local/bin/python3
import cgi
import sys
form = cgi.FieldStorage()
print(form.getfirst('contents'), file=sys.stderr)
print('done')
I run a CGI server with:
#!/usr/bin/env python3
from http.server import HTTPServer, CGIHTTPRequestHandler
HTTPServer(('', 9000), CGIHTTPRequestHandler).serve_forever()
What happens is that the upload never stops. It works in 2.6.
If I cancel the upload from the browser, I get the following output,
so I know that basically things are working;
the cgi script just never finishes reading the POST input:
localhost - - [02/Mar/2010 16:37:36] "POST /cgi/cgi-test.py HTTP/1.1"
200 -
<<<CONTENTS OF MY FILE PRINTED HERE>>>
----------------------------------------
Exception happened during processing of request from ('127.0.0.1',
55779)
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.1/lib/
python3.1/socketserver.py", line 281, in _handle_request_noblock
self.process_request(request, client_address)
File "/Library/Frameworks/Python.framework/Versions/3.1/lib/
python3.1/socketserver.py", line 307, in process_request
self.finish_request(request, client_address)
File "/Library/Frameworks/Python.framework/Versions/3.1/lib/
python3.1/socketserver.py", line 320, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/Library/Frameworks/Python.framework/Versions/3.1/lib/
python3.1/socketserver.py", line 614, in __init__
self.handle()
File "/Library/Frameworks/Python.framework/Versions/3.1/lib/
python3.1/http/server.py", line 352, in handle
self.handle_one_request()
File "/Library/Frameworks/Python.framework/Versions/3.1/lib/
python3.1/http/server.py", line 346, in handle_one_request
method()
File "/Library/Frameworks/Python.framework/Versions/3.1/lib/
python3.1/http/server.py", line 868, in do_POST
self.run_cgi()
File "/Library/Frameworks/Python.framework/Versions/3.1/lib/
python3.1/http/server.py", line 1045, in run_cgi
if not self.rfile.read(1):
File "/Library/Frameworks/Python.framework/Versions/3.1/lib/
python3.1/socket.py", line 214, in readinto
return self._sock.recv_into(b)
socket.error: [Errno 54] Connection reset by peer
----------------------------------------
==============================================================================
You received this message because you are subscribed to the Google Groups "comp.lang.python"
group.
To post to this group, visit http://groups.google.com/group/comp.lang.python?hl=en
To unsubscribe from this group, send email to comp.lang.python+unsubscribe@googlegroups.com
To change the way you get mail from this group, visit:
http://groups.google.com/group/comp.lang.python/subscribe?hl=en
To report abuse, send email explaining the problem to abuse@googlegroups.com
==============================================================================
Google Groups: http://groups.google.com/?hl=en