comp.lang.python - 25 new messages in 12 topics - digest
comp.lang.python
http://groups.google.com/group/comp.lang.python?hl=en
comp.lang.python@googlegroups.com
Today's topics:
* Duplicate keys in dict? - 4 messages, 4 authors
http://groups.google.com/group/comp.lang.python/t/4372a73f1e51af35?hl=en
* importerror: module Gnuplot missing - 1 message, 1 author
http://groups.google.com/group/comp.lang.python/t/07cd009403d6a7eb?hl=en
* a simple def how-to - 1 message, 1 author
http://groups.google.com/group/comp.lang.python/t/63641d2590adb295?hl=en
* click me - 1 message, 1 author
http://groups.google.com/group/comp.lang.python/t/209afa455439be65?hl=en
* Calculating very large exponents in python - 1 message, 1 author
http://groups.google.com/group/comp.lang.python/t/f43b4d63b0630386?hl=en
* negative "counts" in collections.Counter? - 2 messages, 2 authors
http://groups.google.com/group/comp.lang.python/t/064d0fe87f7ea9e6?hl=en
* killing own process in windows - 6 messages, 3 authors
http://groups.google.com/group/comp.lang.python/t/5a6a7bcfd30fb7ce?hl=en
* stopping a multiprocessing.managers.BaseManager nicely (looks like a hack) -
2 messages, 1 author
http://groups.google.com/group/comp.lang.python/t/f6495a5bb651aa2e?hl=en
* time_struct - 1 message, 1 author
http://groups.google.com/group/comp.lang.python/t/1905a949453756ba?hl=en
* Window crash/freeze after "python test.py" in \Gnuplot - 1 message, 1 author
http://groups.google.com/group/comp.lang.python/t/f291ade4edd798db?hl=en
* NoSQL Movement? - 1 message, 1 author
http://groups.google.com/group/comp.lang.python/t/942e22a0145599b2?hl=en
* running a program on many processors - 4 messages, 4 authors
http://groups.google.com/group/comp.lang.python/t/0c33717cdfd82c14?hl=en
==============================================================================
TOPIC: Duplicate keys in dict?
http://groups.google.com/group/comp.lang.python/t/4372a73f1e51af35?hl=en
==============================================================================
== 1 of 4 ==
Date: Sun, Mar 7 2010 8:53 am
From: Steven D'Aprano
On Sun, 07 Mar 2010 08:23:13 -0800, vsoler wrote:
> Hello,
>
> My code snippet reads data from excel ranges. First row and first column
> are column headers and row headers respectively. After reading the range
> I build a dict.
>
> ................'A'..............'B'
> 'ab'............3................5
> 'cd'............7................2
> 'cd'............9................1
> 'ac'............7................2
>
> d={('ab','A'): 3, ('ab','B'): 5, ('cd','A'): 7, ...
>
> However, as you can see there are two rows that start with 'cd', and
> dicts, AFAIK do not accept duplicates.
> One of the difficulties I find here is that I want to be able to easily
> sum all the values for each row key: 'ab', 'cd' and 'ac'. However,
> using lists inside dicts makes it a difficult issue for me.
Given the sample above, what answer do you expect for summing the 'cd'
row? There are four reasonable answers:
7 + 2 = 9
9 + 1 = 10
7 + 2 + 9 + 1 = 19
Error
You need to decide what you want to do before asking how to do it.
--
Steven
== 2 of 4 ==
Date: Sun, Mar 7 2010 9:13 am
From: vsoler
On 7 mar, 17:53, Steven D'Aprano <st...@REMOVE-THIS-
cybersource.com.au> wrote:
> On Sun, 07 Mar 2010 08:23:13 -0800, vsoler wrote:
> > Hello,
>
> > My code snippet reads data from excel ranges. First row and first column
> > are column headers and row headers respectively. After reading the range
> > I build a dict.
>
> > ................'A'..............'B'
> > 'ab'............3................5
> > 'cd'............7................2
> > 'cd'............9................1
> > 'ac'............7................2
>
> > d={('ab','A'): 3, ('ab','B'): 5, ('cd','A'): 7, ...
>
> > However, as you can see there are two rows that start with 'cd', and
> > dicts, AFAIK do not accept duplicates.
> > One of the difficulties I find here is that I want to be able to easily
> > sum all the values for each row key: 'ab', 'cd' and 'ac'. However,
> > using lists inside dicts makes it a difficult issue for me.
>
> Given the sample above, what answer do you expect for summing the 'cd'
> row? There are four reasonable answers:
>
> 7 + 2 = 9
> 9 + 1 = 10
> 7 + 2 + 9 + 1 = 19
> Error
>
> You need to decide what you want to do before asking how to do it.
>
> --
> Steven
Steven,
What I need is that sum(('cd','A')) gives me 16, sum(('cd','B')) gives
me 3.
I apologize for not having made it clear.
== 3 of 4 ==
Date: Sun, Mar 7 2010 11:04 am
From: Dennis Lee Bieber
On Sun, 7 Mar 2010 08:23:13 -0800 (PST), vsoler
<vicente.soler@gmail.com> declaimed the following in
gmane.comp.python.general:
> Hello,
>
> My code snippet reads data from excel ranges. First row and first
> column are column headers and row headers respectively. After reading
> the range I build a dict.
>
> ................'A'..............'B'
> 'ab'............3................5
> 'cd'............7................2
> 'cd'............9................1
> 'ac'............7................2
>
> d={('ab','A'): 3, ('ab','B'): 5, ('cd','A'): 7, ...
>
> However, as you can see there are two rows that start with 'cd', and
> dicts, AFAIK do not accept duplicates.
>
First, I would not key using ("ab", "A")... I'd key just off "ab"...
The A and B columns are just that -- column positions -- the data would
look more like
{ "ab" : (3, 5),
"cd" : (7, 2)... }
Now... for the duplicate key processing...
If you initialize each key with a list, you can then append the data
tuple...
data = {}
for key, cA, cB in sourcedata:
    if key in data:
        data[key].append( (cA, cB) )
    else:
        data[key] = [ (cA, cB) ]
This way, what you end up with is:
"cd" : [ (7, 2), (9, 1) ]
> What is the best workaround for this? Should I discard dicts? Should I
> somehow have under 'cd'... a list of values?
>
If you really are using Excel -- it might be better to find out how
to command Excel to compute your results... Basically it sounds like you
are creating a grouped sum on the columns, with groups defined by the
value of the first column.
Would be child's play with a regular SQL database -- something like
(this is off the top of my head so may have some syntax errors)
select key, sum(A), sum(B) from aTable
group by key
order by key
Or just use a list of tuples, sorting, and create a report writer
break handler...
> 'ab'............3................5
> 'cd'............7................2
> 'cd'............9................1
> 'ac'............7................2
>
data = [ ("ab", 3, 5),
         ("cd", 7, 2),
         ("cd", 9, 1),
         ("ac", 7, 2) ]
skey = None
sA = 0
sB = 0
data.sort() # ensures same keys are adjacent
for (k, a, b) in data:
    if k != skey: # key changed, report group output
        if skey:
            print "%s\t%s\t%s" % (skey, sA, sB)
        skey = k # initialize next group
        sA = a
        sB = b
    else:
        sA += a # sum to current group
        sB += b
if skey: # need to output the last key in the data at end
    print "%s\t%s\t%s" % (skey, sA, sB)
--
Wulfraed Dennis Lee Bieber KD6MOG
wlfraed@ix.netcom.com HTTP://wlfraed.home.netcom.com/
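[The report-writer break handler above can also be expressed with itertools.groupby, which groups adjacent equal keys after a sort. A sketch in modern Python 3 syntax, added for illustration only, using the same sample data:]

```python
# Grouped sums per row key using itertools.groupby.
# groupby only groups *adjacent* equal keys, hence the sort first.
from itertools import groupby
from operator import itemgetter

data = [("ab", 3, 5),
        ("cd", 7, 2),
        ("cd", 9, 1),
        ("ac", 7, 2)]

data.sort(key=itemgetter(0))  # make equal keys adjacent
for key, rows in groupby(data, key=itemgetter(0)):
    rows = list(rows)
    sA = sum(r[1] for r in rows)  # column A total for this key
    sB = sum(r[2] for r in rows)  # column B total for this key
    print("%s\t%s\t%s" % (key, sA, sB))
```

For the 'cd' group this prints the 16 and 3 that the original poster asked for.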
== 4 of 4 ==
Date: Sun, Mar 7 2010 11:11 am
From: Tim Chase
vsoler wrote:
> On 7 mar, 17:53, Steven D'Aprano <st...@REMOVE-THIS-
> cybersource.com.au> wrote:
>> On Sun, 07 Mar 2010 08:23:13 -0800, vsoler wrote:
>>> Hello,
>>> My code snippet reads data from excel ranges. First row and first column
>>> are column headers and row headers respectively. After reading the range
>>> I build a dict.
>>> ................'A'..............'B'
>>> 'ab'............3................5
>>> 'cd'............7................2
>>> 'cd'............9................1
>>> 'ac'............7................2
>>> d={('ab','A'): 3, ('ab','B'): 5, ('cd','A'): 7, ...
>>> However, as you can see there are two rows that start with 'cd', and
>>> dicts, AFAIK do not accept duplicates.
>>> One of the difficulties I find here is that I want to be able to easily
>>> sum all the values for each row key: 'ab', 'cd' and 'ac'. However,
>>> using lists inside dicts makes it a difficult issue for me.
>
> What I need is that sum(('cd','A')) gives me 16, sum(('cd','B')) gives
> me 3.
But you really *do* want lists inside the dict if you want to be
able to call sum() on them. You want to map the tuple ('cd','A')
to the list [7,9] so you can sum the results. And if you plan to
sum the results, it's far easier to have one-element lists and
just sum them, instead of having to special case "if it's a list,
sum it, otherwise, return the value". So I'd use something like
import csv
from collections import defaultdict  # needed for defaultdict below
f = file(INFILE, 'rb')
r = csv.reader(f, ...)
headers = r.next() # discard the headers
d = defaultdict(list)
for (label, a, b) in r:
    d[(label, 'a')].append(int(a))
    d[(label, 'b')].append(int(b))
# ...
for (label, col), value in d.iteritems():
    print label, col, 'sum =', sum(value)
Alternatively, if you don't need to store the intermediate
values, and just want to store the sums, you can accrue them as
you go along:
d = defaultdict(int)
for (label, a, b) in r:
    d[(label, 'a')] += int(a)
    d[(label, 'b')] += int(b)
# ...
for (label, col), value in d.iteritems():
    print label, col, 'sum =', value
Both are untested, but I'm pretty sure they're both viable,
modulo my sleep-deprived eyes.
-tkc
==============================================================================
TOPIC: importerror: module Gnuplot missing
http://groups.google.com/group/comp.lang.python/t/07cd009403d6a7eb?hl=en
==============================================================================
== 1 of 1 ==
Date: Sun, Mar 7 2010 9:16 am
From: gujax
Hi,
I need help. I am trying to follow examples from a book "Python
Scripting for Computational Science" and the examples are all plotted
using Gnuplot. When I run the programs I get an error saying "ImportError:
Gnuplot module missing".
I have installed Gnuplot in C:\My Programs\gnuplot directory (running
on WinME). I have installed python 2.5 and Gnuplot 4.2. I have also
installed Gnuplot-py-1.8 in Python's site-packages.
All paths are correctly set, as I can see from sys.path, i.e. the
path for gnuplot is set, and so are the python path and the scripting
path, which is where the book examples reside. Many examples ran well
until it came down to plotting...
I also ran 'python test.py' from the Gnuplot-py-1.8 directory and
strangely it runs and then asks me to press return to see the result.
The screen then freezes and the computer hangs, and I have to reboot.
Someone suggested in the old archives that I should change the code in
utils from
import Gnuplot,utils
to
import utils
from _Gnuplot import Gnuplot
I have tried it but got the same result. The window freezes and my
"C:\Windows\" directory gets filled with fff #...tmp files. I have no
idea what those are.
My other attempts:
1. Is there an issue with how one names the directory - gnuplot versus
Gnuplot etc.?
2. I have done things like >>> from numpy import oldnumeric, and the
test runs fine.
3. "python setup.py install" for installing Gnuplot-py-1.8 also ran
fine.
4. I could open wgnuplot from the "\gnuplot\bin" directory and in the
Gnuplot window could plot sin(x)/x.
5. I also have a pgnuplot application.
6. Is the gnuplot version incompatible with Python 2.5 and its
associated older numpy versions?
I have seen several similar "missing Gnuplot" reports in the archive
but did not find any resolution to them. I would appreciate any help.
Thanks
gujax
==============================================================================
TOPIC: a simple def how-to
http://groups.google.com/group/comp.lang.python/t/63641d2590adb295?hl=en
==============================================================================
== 1 of 1 ==
Date: Sun, Mar 7 2010 9:17 am
From: Stefan Behnel
vsoler, 07.03.2010 16:05:
> Hello,
>
> My script starts like this:
>
> book=readFromExcelRange('book')
> house=readFromExcelRange('house')
> table=readFromExcelRange('table')
> read=readFromExcelRange('read')
> ...
>
> But I would like to have something equivalent, like...
>
> ranges=['book','house','table','read']
> for i in ranges:
> var[i]=readFromExcelRange(i)
Note that the name "i" is rather badly chosen as it generally implies a
totally different thing (integer) than what you use it for (names of ranges).
"ranges" seems to fall into the same bucket, but I guess that's just
because I can't extract the meaning from your code snippet (which is not a
good sign).
Try to use expressive names in your code, so that people who look at it for
the first time get an idea about what it does with what kind of data.
Stefan
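[For illustration, not part of the original reply: the effect vsoler asked for ("var[i] = readFromExcelRange(i)") is usually spelled with a dict rather than dynamically created variable names. A sketch, with readFromExcelRange stubbed out since its real definition isn't shown in the thread:]

```python
# Collect named Excel ranges in a dict instead of separate variables.
# readFromExcelRange is a stub standing in for the poster's function.
def readFromExcelRange(range_name):
    return "<data for %s>" % range_name

range_names = ['book', 'house', 'table', 'read']
ranges = {name: readFromExcelRange(name) for name in range_names}

# Access is then ranges['book'] instead of a variable named book.
print(ranges['book'])
```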
==============================================================================
TOPIC: click me
http://groups.google.com/group/comp.lang.python/t/209afa455439be65?hl=en
==============================================================================
== 1 of 1 ==
Date: Sun, Mar 7 2010 10:09 am
From: madhan
please click it, it will be useful to you
http://123maza.com/78/healthyfitness/
==============================================================================
TOPIC: Calculating very large exponents in python
http://groups.google.com/group/comp.lang.python/t/f43b4d63b0630386?hl=en
==============================================================================
== 1 of 1 ==
Date: Sun, Mar 7 2010 12:40 pm
From: geremy condra
On Sun, Mar 7, 2010 at 1:55 PM, Fahad Ahmad <miraclesoul@hotmail.com> wrote:
> Dear All,
>
> i am writing my cryptographic scheme in python, i am just a new user to it.
> I have written the complete code, the only point i am stuck at is that i am
> using 256-bit exponentiation which is normal in cryptography but python just
> hangs on it.
>
> g**x [where both g and x are 256 bit numbers; in decimal they are around
> 77 digits]
>
> after reading several forums, i just came to know it can be done using
> numpy, i have installed python(x,y) which has both numpy and scipy installed
> but i am not able to make it happen.
>
> any idea which library, module, or piece of code can solve this mystery :S
>
> sorry for bad english
A couple of things:
1) if you're working with modular exponentiation, remember that pow() takes
three arguments, ie:
a = 222222222222222222222222222
b = 5555555555555555555555555555
pow(a, b, 1200)
will calculate the correct answer (768) very quickly, while
a**b % 1200
has not terminated in the time it took me to compose this
email.
2) sage has a lot of excellent tools for crypto/cryptanalysis that you
may want to take a look at.
3) not saying you don't know what you're doing, but be careful when
rolling your own cryptosystems- even very good cryptographers make
implementation mistakes!
Geremy Condra
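[Point 1 above is easy to verify: with the modulus passed as the third argument, pow() does modular exponentiation by repeated squaring, so even 28-digit operands reduce instantly. Added for illustration:]

```python
# Three-argument pow(base, exp, mod) never materializes base**exp;
# it reduces modulo mod at every squaring step.
a = 222222222222222222222222222
b = 5555555555555555555555555555
print(pow(a, b, 1200))  # prints 768 immediately
```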
==============================================================================
TOPIC: negative "counts" in collections.Counter?
http://groups.google.com/group/comp.lang.python/t/064d0fe87f7ea9e6?hl=en
==============================================================================
== 1 of 2 ==
Date: Sun, Mar 7 2010 1:04 pm
From: Vlastimil Brom
Hi all,
I'd like to ask about the possibility of negative "counts" in
collections.Counter (using Python 3.1);
I believe my use case is rather trivial: basically I have the word
frequencies of two texts and I want to compare them (e.g. to see what
was added and removed between different versions of a text).
This is simple enough to do with my own code, but I thought this would
be exactly the case for Counter...
However, as the Counter only returns positive counts, one has to get
the difference in both directions and combine them afterwards, maybe
something like:
>>> c1=collections.Counter("aabcddd")
>>> c2=collections.Counter("abbbd")
>>> added_c2 = c2-c1
>>> removed_c2 = c1-c2
>>> negative_added_c2 = dict((k, v*-1) for (k, v) in removed_c2.items())
>>> changed_c2 = dict(added_c2)
>>> changed_c2.update(negative_added_c2)
>>> changed_c2
{'a': -1, 'c': -1, 'b': 2, 'd': -2}
>>>
It seems to me, that with negative counts allowed in Counter, this
would simply be the matter of a single difference:
changed_c2 = c2 - c1
Is there a possibility to make the Counter work this way (other than
replacing its methods in a subclass, which might be comparable to
writing the naive counting class itself)?
Are there maybe some reasons I missed to disable negative counts here?
(As I could roughly understand, the Counter isn't quite limited to the
mathematical notion of multiset; it seems to accept negative counts,
but its methods only output the positive part).
Is this kind of task - a comparison in both directions - an unusual
one, or is it simply not the case for Counter?
Thanks in advance,
vbr
== 2 of 2 ==
Date: Sun, Mar 7 2010 2:21 pm
From: Arnaud Delobelle
Vlastimil Brom <vlastimil.brom@gmail.com> writes:
> Hi all,
> I'd like to ask about the possibility of negative "counts" in
> collections.Counter (using Python 3.1);
> I believe my use case is rather trivial: basically I have the word
> frequencies of two texts and I want to compare them (e.g. to see what
> was added and removed between different versions of a text).
>
> This is simple enough to do with my own code, but I thought this would
> be exactly the case for Counter...
> However, as the Counter only returns positive counts, one has to get
> the difference in both directions and combine them afterwards, maybe
> something like:
>
>>>> c1=collections.Counter("aabcddd")
>>>> c2=collections.Counter("abbbd")
>>>> added_c2 = c2-c1
>>>> removed_c2 = c1-c2
>>>> negative_added_c2 = dict((k, v*-1) for (k, v) in removed_c2.items())
>>>> changed_c2 = dict(added_c2)
>>>> changed_c2.update(negative_added_c2)
>>>> changed_c2
> {'a': -1, 'c': -1, 'b': 2, 'd': -2}
>>>>
>
> It seems to me, that with negative counts allowed in Counter, this
> would simply be the matter of a single difference:
> changed_c2 = c2 - c1
>
> Is there a possibility to make the Counter work this way (other than
> replacing its methods in a subclass, which might be comparable to
> writing the naive counting class itself)?
> Are there maybe some reasons I missed to disable negative counts here?
> (As I could roughly understand, the Counter isn't quite limited to the
> mathematical notion of multiset; it seems to accept negative counts,
> but its methods only output the positive part).
> Is this kind of task - a comparison in both directions - an unusual
> one, or is it simply not the case for Counter?
Every time I have needed something like collections.Counter, I have
wanted the behaviour you require too. As a result, I have never used
collections.Counter. Instead I have used plain dictionaries or my own
class.
I don't understand why the Counter's + and - operators behave as they
do. Here is an example from the docs:
>>> c = Counter(a=3, b=1)
>>> d = Counter(a=1, b=2)
>>> c + d # add two counters together: c[x] + d[x]
Counter({'a': 4, 'b': 3})
>>> c - d # subtract (keeping only positive counts)
Counter({'a': 2})
>>> c & d # intersection: min(c[x], d[x])
Counter({'a': 1, 'b': 1})
>>> c | d # union: max(c[x], d[x])
Counter({'a': 3, 'b': 2})
If + and - just added or subtracted the multiplicities of elements,
keeping negative multiplicites, we would get:
>>> c - d
Counter({'a':2, 'b':-1})
Which I think is useful in many cases. But we could still get the
result of current c - d very simply:
>>> (c - d) | Counter() # | Counter() removes negative multiplicities
Counter({'a':2})
Altogether more versatile and coherent IMHO.
--
Arnaud
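[For completeness, an illustration not from the original post: later Python versions (3.2+) grew Counter.subtract(), which does keep negative counts, so vbr's signed difference becomes a one-step operation; on 3.1 the same thing needs the manual dict shown in the original post:]

```python
# Signed word-count difference c2 - c1, keeping negative counts.
from collections import Counter

c1 = Counter("aabcddd")
c2 = Counter("abbbd")

changed = Counter(c2)
changed.subtract(c1)  # in-place subtraction; negatives are preserved
changed = {k: v for k, v in changed.items() if v != 0}
print(changed)  # {'a': -1, 'b': 2, 'c': -1, 'd': -2}, in some order
```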
==============================================================================
TOPIC: killing own process in windows
http://groups.google.com/group/comp.lang.python/t/5a6a7bcfd30fb7ce?hl=en
==============================================================================
== 1 of 6 ==
Date: Sun, Mar 7 2010 1:08 pm
From: News123
Hi,
How can I kill my own process?
Some multithreaded programs that I have are unable to stop when Ctrl-C
is pressed.
Some can't be stopped with sys.exit().
So I'd just like to terminate my own program.
Examples of non-killable (not killable with Ctrl-C) programs:
- A program, that started an XMLRPC server with serve_forever
- a program, that started a multiprocessing.Manager with serve_forever
thanks in advance for some ideas.
N
== 2 of 6 ==
Date: Sun, Mar 7 2010 1:28 pm
From: "Martin P. Hellwig"
On 03/07/10 21:08, News123 wrote:
> Hi,
>
>
> How can I kill my own process?
>
> Some multithreaded programs, that I have are unable to stop when ctrl-C
> is pressed.
> Some can't be stopped with sys.exit()
>
> So I'd just like to terminate my own program.
>
>
> Examples of non killable (not killable with CTRL-C) programs:
> - A program, that started an XMLRPC server with serve_forever
> - a program, that started a multiprocessing.Manager with serve_forever
>
>
> thanks in advance for some ideas.
>
>
> N
If it is just the xml rpc server you want to kill, there might be better
ways. For example look at:
http://code.google.com/p/dcuktec/source/browse/source/wrapped_xmlrpc_server/rpc.py
with perhaps special interest at the comment on lines 172-174.
--
mph
== 3 of 6 ==
Date: Sun, Mar 7 2010 1:54 pm
From: News123
Hi Martin,
Martin P. Hellwig wrote:
> On 03/07/10 21:08, News123 wrote:
>> Hi,
>>
>>
>> How can I kill my own process?
>>
>> Some multithreaded programs, that I have are unable to stop when ctrl-C
>> is pressed.
>> Some can't be stopped with sys.exit()
>>
>> So I'd just like to terminate my own program.
>>
>>
>> Examples of non killable (not killable with CTRL-C) programs:
>> - A program, that started an XMLRPC server with serve_forever
>> - a program, that started a multiprocessing.Manager with serve_forever
>>
>>
> If it is just the xml rpc server you want to kill, there might be better
> ways. For example look at:
> http://code.google.com/p/dcuktec/source/browse/source/wrapped_xmlrpc_server/rpc.py
>
> with perhaps special interest at the comment on lines 172-174.
Thanks. This looks like a good solution for an XMLRPC server.
However, when playing with different server modules I stumble over and
over again on code that can't be shut down nicely.
Currently I'm still struggling with multiprocessing.managers.BaseManager
bye
N
== 4 of 6 ==
Date: Sun, Mar 7 2010 2:27 pm
From: "Martin P. Hellwig"
On 03/07/10 21:54, News123 wrote:
> Hi Martin.
> Hellwig wrote:
>> On 03/07/10 21:08, News123 wrote:
>>> Hi,
>>>
>>>
>>> How can I kill my own process?
>>>
>>> Some multithreaded programs, that I have are unable to stop when ctrl-C
>>> is pressed.
>>> Some can't be stopped with sys.exit()
>>>
>>> So I'd just like to terminate my own program.
>>>
>>>
>>> Examples of non killable (not killable with CTRL-C) programs:
>>> - A program, that started an XMLRPC server with serve_forever
>>> - a program, that started a multiprocessing.Manager with serve_forever
>>>
>>>
>> If it is just the xml rpc server you want to kill, there might be better
>> ways. For example look at:
>> http://code.google.com/p/dcuktec/source/browse/source/wrapped_xmlrpc_server/rpc.py
>>
>> with perhaps special interest at the comment on lines 172-174.
>
>
>
> Thanks. this looks like a good solution for an XMLRPC server.
> However when playing with different server modules I fall over and over
> again over code, that can't be shutdown nicely.
>
> Currently I'm still struggling with multiprocessing.managers.BaseManager
>
> bye
>
> N
I haven't used the multiprocessing module yet, but generally speaking I
believe that everything in Python that is server-like inherits from
SocketServer.BaseServer. Probably the way for you to have all servers
behave as you expect is to override functionality there, for example in:
http://docs.python.org/library/socketserver.html?highlight=baseserver#SocketServer.BaseServer
the function handle_request.
Though from looking at the source, the function serve_forever is just a
while loop over handle_request (blocking or non-blocking), so it might
be a better candidate to replace.
But you still might find that some TCP connections remain open, so
unless you want to go down to the socket level and explicitly close the
socket, there is not much you can do about that.
On the client side, the socket timeout is your enemy; I found the
default rather long (300 seconds in the xml-rpc client), but yours
might be different (it is probably a Python-defined standard default,
but I haven't checked that).
Sounds to me like you will be busy reading up on it now :-)
Oh, and just a word to prevent over-engineering: if both the server and
the client are written by you, a lot of the problems you anticipate will
probably never occur, because that would require a rogue server or
client. Unless of course you like making rogue servers/clients :-)
--
mph
== 5 of 6 ==
Date: Sun, Mar 7 2010 2:36 pm
From: Christian Heimes
News123 wrote:
> Hi,
>
>
> How can I kill my own process?
>
> Some multithreaded programs, that I have are unable to stop when ctrl-C
> is pressed.
> Some can't be stopped with sys.exit()
You have to terminate the XML-RPC server or the manager first. Check the
docs!
You can terminate a Python process with os._exit() but I recommend that
you find another way. os._exit() is a hard termination. It kills the
process without running any cleanup code like atexit handlers and
Python's internal cleanups. Open files aren't flushed to disk etc.
Christian
== 6 of 6 ==
Date: Sun, Mar 7 2010 3:01 pm
From: News123
Hi Cristian,
Christian Heimes wrote:
> News123 wrote:
>> Hi,
>>
>>
>> How can I kill my own process?
>>
>> Some multithreaded programs, that I have are unable to stop when ctrl-C
>> is pressed.
>> Some can't be stopped with sys.exit()
>
> You have to terminate the XML-RPC server or the manager first. Check the
> docs!
>
> You can terminate a Python process with os._exit() but I recommend that
> you find another way. os._exit() is a hard termination. It kills the
> process without running any cleanup code like atexit handlers and
> Python's internal cleanups. Open files aren't flushed to disk etc.
>
This is exactly the problem:
Neither the XMLRPC server nor the manager can be stopped;
both serve forever. The doc doesn't really help.
For the XMLRPC server there are tricks to subclass it and to change the
behaviour, as indicated by Martin.
For the manager I did not find a clean solution (please see my other
thread "stopping a multiprocessing.manage.....")
I'm surprised that there are no 'canned' solutions to stop servers
remotely or just by pressing Ctrl-C.
I consider this quite useful for certain kinds of applications.
I prefer to ask a server to shut down rather than to just kill it.
An interactive program which also acts like a server should be able to
shut down its server thread from the main thread in order to quit nicely.
BaseServer even has a shutdown() method, but it cannot be called if the
server was started with serve_forever().
N
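[For reference, an illustration not from the original post: BaseServer grew a shutdown() method in Python 2.6 that is meant to be called from a *different* thread than the one blocked in serve_forever(). A minimal sketch with an XML-RPC server, using Python 3 module names:]

```python
# Stopping serve_forever() cleanly: run it in a worker thread and call
# shutdown() from another thread. Calling shutdown() from the thread
# that runs serve_forever() would deadlock.
import threading
from xmlrpc.server import SimpleXMLRPCServer

server = SimpleXMLRPCServer(('127.0.0.1', 0), logRequests=False)
server.register_function(lambda: 3, 'myfunc')

t = threading.Thread(target=server.serve_forever)
t.start()

# ... serve requests for a while, then, from the main thread:
server.shutdown()      # makes serve_forever() return
server.server_close()  # release the listening socket
t.join()
```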
==============================================================================
TOPIC: stopping a multiprocessing.managers.BaseManager nicely (looks like a
hack)
http://groups.google.com/group/comp.lang.python/t/f6495a5bb651aa2e?hl=en
==============================================================================
== 1 of 2 ==
Date: Sun, Mar 7 2010 1:26 pm
From: News123
Hi,
I have following program
from multiprocessing.managers import BaseManager
def myfunc(): return 3
class MyManager(BaseManager): pass
MyManager.register('myfunc',callable = myfunc)
m = MyManager(address=('127.0.0.1', 50000),authkey='abracadabra')
server = m.get_server()
server.serve_forever()
I'd like to replace server.serve_forever() with something, which is
abortable.
After digging in the sources I came up with following (in my opinion)
inelegant, but working solution.
I just copied the Server.serve_forever() function from
multiprocessing/managers.py and changed two lines.
Does anybody have a better idea than this:
# ------------------------------------------------
import threading
import multiprocessing.managers
from multiprocessing.managers import BaseManager

def serve_till_stop(self):
    '''
    Run the server until stopped
    '''
    #current_process()._manager_server = self # this line removed
    multiprocessing.managers.current_process()._manager_server = self # this line added
    try:
        try:
            #while 1: # this line removed
            while self.running: # this line added
                try:
                    c = self.listener.accept()
                except (OSError, IOError):
                    continue
                t = threading.Thread(target=self.handle_request, args=(c,))
                t.daemon = True
                t.start()
        except (KeyboardInterrupt, SystemExit):
            pass
    finally:
        self.stop = 999
        self.listener.close()

def myfunc(): return 3
def stopme(): server.running = False

class MyManager(BaseManager): pass
MyManager.register('myfunc', callable=myfunc)
MyManager.register('stopme', callable=stopme)
m = MyManager(address=('127.0.0.1', 50000), authkey='abracadabra')
server = m.get_server()
server.running = True
serve_till_stop(server)
thanks in advance and bye
N
== 2 of 2 ==
Date: Sun, Mar 7 2010 1:47 pm
From: News123
My fix has certain problems:
News123 wrote:
> Hi,
>
>
> I have following program
>
> from multiprocessing.managers import BaseManager
> def myfunc(): return 3
> class MyManager(BaseManager): pass
> MyManager.register('myfunc',callable = myfunc)
> m = MyManager(address=('127.0.0.1', 50000),authkey='abracadabra')
> server = m.get_server()
> server.serve_forever()
>
>
> I'd like to replace server.serve_forever() with something, which is
> abortable.
>
> After digging in the sources I came up with following (in my opinion)
> inelegant, but working solution.
>
> I just copied the Server.serve_forever() function from
> multiprocessing/managers.py and changed two lines.
>
>
> Does anybody have a better idea than this:
> # ------------------------------------------------
> import multiprocessing.managers
> from multiprocessing.managers import BaseManager
> def serve_till_stop(self):
>     '''
>     Run the server forever
>     '''
>     #current_process()._manager_server = self # this line removed
>     multiprocessing.managers.current_process()._manager_server = self # this line added
>     try:
>         try:
>             #while 1: # this line removed
>             while self.running: # this line added
>                 try:
>                     c = self.listener.accept()
>                 except (OSError, IOError):
>                     continue
>                 t = threading.Thread(target=self.handle_request, args=(c,))
>                 t.daemon = True
>                 t.start()
>         except (KeyboardInterrupt, SystemExit):
>             pass
>     finally:
>         self.stop = 999
>         self.listener.close()
Problems will now occur on the client side.
The server now terminates immediately after the function stopme has
been called.
The client, however, still wants to perform a few requests before it
considers the call to stopme done.
So I still don't have a solution :-(
>
>
> def myfunc(): return 3
> def stopme(): server.running = False
> class MyManager(BaseManager): pass
> MyManager.register('myfunc',callable = myfunc)
> m = MyManager(address=('127.0.0.1', 50000),authkey='abracadabra')
> server = m.get_server()
> server.running = True
> serve_till_stop(server)
>
> thanks in advance and bye
>
>
> N
>
>
==============================================================================
TOPIC: time_struct
http://groups.google.com/group/comp.lang.python/t/1905a949453756ba?hl=en
==============================================================================
== 1 of 1 ==
Date: Sun, Mar 7 2010 3:29 pm
From: moerchendiser2k3
any ideas?
==============================================================================
TOPIC: Window crash/freeze after "python test.py" in \Gnuplot
http://groups.google.com/group/comp.lang.python/t/f291ade4edd798db?hl=en
==============================================================================
== 1 of 1 ==
Date: Sun, Mar 7 2010 3:33 pm
From: gujax
Hi,
My computer OS is Win ME, and I am running a Py2.5 version. Gnuplot is
v4.2, Gnuplot_py is v1.8. However, whenever I give the command "python
test.py" to test Gnuplot_py, I sometimes get messages like:
#Gnuplot.................for enjoyment
#press return to open a window
>..
>clear terminal
....#####test function#######
and then computer hangs. I never see any windows appearing. A blue
screen appears with request for cntrl-alt-del.
I cannot exactly determine the messages because the crash occurs
relatively fast therefore some sentences above may not be accurate.
I have tried this with many scripts that use Gnuplot for plotting,
e.g. demo.py in gnuplot_py, but they all result in the same crash.
I really have no clue what my next step should be. Any help will be
appreciated.
I can, however, run Gnuplot by typing "pgnuplot" or "wgnuplot" in the
command shell, and it opens a Gnuplot window. So it looks like some
incompatibility between gnuplot_py and Python 2.5?
Thanks,
gujax
==============================================================================
TOPIC: NoSQL Movement?
http://groups.google.com/group/comp.lang.python/t/942e22a0145599b2?hl=en
==============================================================================
== 1 of 1 ==
Date: Sun, Mar 7 2010 3:55 pm
From: floaiza
I don't think there is any doubt about the value of relational
databases, particularly on the Internet. The issue in my mind is how
to leverage all the information that resides in the "deep web" using
strictly the relational database paradigm.
Because that paradigm imposes a tight and rigid coupling between
semantics and syntax, when you attempt to efficiently "merge" or
"federate" data from disparate sources you can find yourself spending
a lot of time and money building mappings and maintaining translators.
That's why approaches that try to separate syntax from semantics are
now becoming so popular. But, again, as others have said, it is not a
matter of replacing one with the other; it is a matter of figuring out
how best to exploit what each technology offers.
I base my remarks on some initial explorations I have made into the
use of RDF triple stores, which, by the way, use RDBMSs to persist the
triples, but which offer a really high degree of flexibility WRT
merging and federating data from different semantic spaces.
The way I hope things will move forward is that eventually it will
become inexpensive and easy to "expose" as RDF triples all the
relevant data that now sits in special-purpose databases.
(just an opinion)
Francisco
On Mar 3, 12:36 pm, Xah Lee <xah...@gmail.com> wrote:
> Recently I wrote a blog article on the NoSQL Movement at
> http://xahlee.org/comp/nosql.html
>
> I'd like to post it somewhere public to solicit opinions, but in the
> 20 minutes or so I looked, I couldn't find a proper newsgroup or
> private list that my somewhat anti-NoSQL-Movement article would fit.
>
> So I thought I'd post here to solicit some opinions from the
> programmer community I know.
>
> Here's the plain text version
>
> -----------------------------
> The NoSQL Movement
>
> Xah Lee, 2010-01-26
>
> In the past few years, there's been new fashionable thinking against
> relational databases, now blessed with a rhyming term: NoSQL.
> Basically, it considers relational databases outdated and not
> "horizontally" scalable. I'm quite dubious of these claims.
>
> According to Wikipedia's Scalability article, vertical scalability
> means adding more resources to a single node, such as more CPUs or
> memory. (You can easily do this by running your db server on a more
> powerful machine.) "Horizontal scalability" means adding more
> machines. (And indeed, this is not simple with SQL databases, but
> again, it is the same situation with any software, not just
> databases. To add more machines to run one single piece of software,
> the software must have some sort of grid-computing infrastructure
> built in. This is not a problem of the software per se; it is just
> the way things are. It is not a problem of databases.)
>
> I'm quite old-fashioned when it comes to computer technology. In
> order to convince me of some revolutionary new-fangled technology, I
> must see improvement based on a math foundation. I am an expert in
> SQL, and believe that the relational database is pretty much the
> gist of database technology with respect to math. Sure, a tight
> definition of the relations in your data may not be necessary for
> many applications that simply need to store, retrieve, and modify
> data without much concern about the relations among them. But still,
> that's what relational database technology does too. You just don't
> worry about normalizing when you design your table schema.
>
> The NoSQL movement is really a scaling movement, about adding more
> machines, about so-called "cloud computing" and services with simple
> interfaces. (Like so many fashionable movements in the computing
> industry, it is not well defined.) It is not really about
> anti-relational designs in your data. It's more about adding
> features for practical needs, such as providing easy-to-use APIs (so
> your users don't have to know SQL or schemas), the ability to add
> more nodes, commercial interface services to your database, and
> parallel systems that access your data. Of course, these needs have
> all been met by the big old relational database companies such as
> Oracle over the years, as they constantly adapt to the industry's
> changing needs and cheaper computing power. If you need any
> relations in your data, you can't escape the relational database
> model. That is just the cold truth of math.
>
> Important data, such as that used in bank transactions, has
> relations. You have to have tight relational definitions and
> assurance of data integrity.
>
> Here's a second-hand quote from Microsoft's Technical Fellow David
> Campbell. Source
>
> I've been doing this database stuff for over 20 years and I
> remember hearing that the object databases were going to wipe out
> the SQL databases. And then a little less than 10 years ago the
> XML databases were going to wipe out.... We actually ... you
> know... people inside Microsoft, [have said] 'let's stop working
> on SQL Server, let's go build a native XML store because in five
> years it's all going....'
>
> LOL. That's exactly my thought.
>
> Though, I'd have to have some hands-on experience with one of those
> new database services to see what they're all about.
>
> --------------------
> Amazon S3 and Dynamo
>
> Look at structured storage. That seems to be what these NoSQL
> databases are. Most are just a key-value pair structure, or just
> storage of documents with no relations. I don't see how this differs
> from an SQL database using one single table as its schema.
>
> Amazon's S3 is another storage service, which uses Amazon's Dynamo
> (storage system), indicated by Wikipedia to be one of those NoSQL
> dbs. Looking at the S3 and Dynamo articles, it appears the db is
> just a distributed hash table system with an added http access
> interface. So, basically, little or no relations. Again, I don't see
> how this is different from, say, MySQL with one single table of 2
> columns, plus distributed infrastructure. (A distributed database is
> often an integrated feature of commercial dbs; e.g., Wikipedia's
> Oracle database article cites Oracle Real Application Clusters.)
>
> Here's an interesting quote on S3:
>
> Bucket names and keys are chosen so that objects are addressable
> using HTTP URLs:
>
> * http://s3.amazonaws.com/bucket/key
> * http://bucket.s3.amazonaws.com/key
> * http://bucket/key (where bucket is a DNS CNAME record
> pointing to bucket.s3.amazonaws.com)
>
> Because objects are accessible by unmodified HTTP clients, S3 can
> be used to replace significant existing (static) web hosting
> infrastructure.
>
> So this means, for example, I can store all my images in S3, and in
> my html documents the inline images are just normal img tags with
> normal urls. This applies to any other type of file: pdf, audio, and
> html too. So S3 becomes the web host server as well as the file
> system.
>
> Here's Amazon's instruction on how to use it as an image server.
> Seems quite simple: How to use Amazon S3 for hosting web pages and
> media files? Source
>
> --------------------
> Google BigTable
>
> Another is Google's BigTable. I can't make much comment. To make a
> sensible comment, one must have some experience of actually
> implementing a database. For example, a file system is a sort of
> database. Suppose I created a scheme that let me access my data as
> files in NTFS distributed over hundreds of PCs, communicating
> through http served by Apache. This would let me access my files. To
> insert and delete data, one could have cgi scripts on each machine.
> Would this be considered a fantastic new NoNoSQL?
>
> ---------------------
>
> Comments can also be posted to
> http://xahlee.blogspot.com/2010/01/nosql-movement.html
>
> Thanks.
>
> Xah
> ∑http://xahlee.org/
>
> ☄
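The "one single table of 2 columns" comparison in the article above can be made concrete with sqlite3 from Python's standard library. A minimal sketch of a NoSQL-style key-value store backed by a single two-column SQL table (the table and helper names are invented for illustration):

```python
import sqlite3

# A key-value store backed by one two-column SQL table, in memory.
db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)')

def put(key, value):
    # Upsert: replace the row if the key already exists.
    db.execute('INSERT OR REPLACE INTO kv (key, value) VALUES (?, ?)',
               (key, value))

def get(key):
    # Return the stored value, or None for a missing key.
    row = db.execute('SELECT value FROM kv WHERE key = ?',
                     (key,)).fetchone()
    return row[0] if row else None

put('user:1', 'alice')
put('user:1', 'bob')      # overwrites the previous value
print(get('user:1'))
print(get('user:2'))
```

This is exactly the get/put interface the key-value NoSQL stores expose; what they add on top, as the article notes, is the distribution and HTTP-access layer, not a different data model.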
==============================================================================
TOPIC: running a program on many processors
http://groups.google.com/group/comp.lang.python/t/0c33717cdfd82c14?hl=en
==============================================================================
== 1 of 4 ==
Date: Sun, Mar 7 2010 4:18 pm
From: Paweł Banyś
Hello,
I have already read about Python and multiprocessing which allows using
many processors. The idea is to split a program into separate tasks and
run each of them on a separate processor. However I want to run a Python
program doing a single simple task on many processors so that their
cumulative power is available to the program as if there was one huge
CPU instead of many separate ones. Is it possible? How can it be achieved?
Best regards,
Paweł
== 2 of 4 ==
Date: Sun, Mar 7 2010 4:28 pm
From: "Diez B. Roggisch"
On 08.03.10 01:18, Paweł Banyś wrote:
> Hello,
>
> I have already read about Python and multiprocessing which allows using
> many processors. The idea is to split a program into separate tasks and
> run each of them on a separate processor. However I want to run a Python
> program doing a single simple task on many processors so that their
> cumulative power is available to the program as if there was one huge
> CPU instead of many separate ones. Is it possible? How can it be achieved?
That's impossible to answer without knowing anything about your actual
task. Not everything is parallelizable, and algorithms suffer
penalties if parallelization is overdone.
So in essence, what you've read already covers it: if your "simple
task" is divisible into several independent sub-tasks that don't need
serialization, multiprocessing is your friend.
Diez
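The sub-task splitting Diez describes can be sketched with the standard-library multiprocessing module; this assumes a Unix-like system with the fork start method (on Windows, the Pool code must sit under an `if __name__ == '__main__':` guard):

```python
from multiprocessing import Pool

def square(n):
    # Stand-in for one independent sub-task applied to one input.
    return n * n

# Pool() starts one worker per CPU core by default; map() splits the
# inputs among the workers and reassembles the results in order.
with Pool() as pool:
    results = pool.map(square, range(10))

print(results)
```

Note that this only helps when each call to the worker function is independent of the others; a single sequential computation cannot be spread over cores this way, which is the point made above.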
== 3 of 4 ==
Date: Sun, Mar 7 2010 4:49 pm
From: Gib Bogle
Paweł Banyś wrote:
...
How can it be achieved?
Very carefully.
== 4 of 4 ==
Date: Sun, Mar 7 2010 5:08 pm
From: Steven D'Aprano
On Mon, 08 Mar 2010 01:18:13 +0100, Paweł Banyś wrote:
> Hello,
>
> I have already read about Python and multiprocessing which allows using
> many processors. The idea is to split a program into separate tasks and
> run each of them on a separate processor. However I want to run a Python
> program doing a single simple task on many processors so that their
> cumulative power is available to the program as if there was one huge
> CPU instead of many separate ones. Is it possible? How can it be
> achieved?
Try Parallel Python.
http://www.parallelpython.com/
I haven't used it, but it looks interesting.
However, the obligatory warning against premature optimization: any sort
of parallel execution (including even lightweight threads) is hard to
build and much harder to debug. You should make sure that the potential
performance benefits are worth the pain before you embark on the job: are
you sure that the naive, single process version isn't fast enough?
--
Steven
==============================================================================
You received this message because you are subscribed to the Google Groups "comp.lang.python"
group.
To post to this group, visit http://groups.google.com/group/comp.lang.python?hl=en
To unsubscribe from this group, send email to comp.lang.python+unsubscribe@googlegroups.com
To change the way you get mail from this group, visit:
http://groups.google.com/group/comp.lang.python/subscribe?hl=en
To report abuse, send email explaining the problem to abuse@googlegroups.com
==============================================================================
Google Groups: http://groups.google.com/?hl=en