comp.lang.python - 26 new messages in 6 topics - digest
comp.lang.python
http://groups.google.com/group/comp.lang.python?hl=en
comp.lang.python@googlegroups.com
Today's topics:
* Trying to wrap my head around futures and coroutines - 6 messages, 4 authors
http://groups.google.com/group/comp.lang.python/t/9d51a767380b9642?hl=en
* the Gravity of Python 2 - 7 messages, 2 authors
http://groups.google.com/group/comp.lang.python/t/0afb505736567141?hl=en
* "More About Unicode in Python 2 and 3" - 9 messages, 4 authors
http://groups.google.com/group/comp.lang.python/t/fc13dd8f17f64a45?hl=en
* django question - 2 messages, 2 authors
http://groups.google.com/group/comp.lang.python/t/4beb455a8db96ba1?hl=en
* Python 3 Q & A (Nick C.) updated - 1 message, 1 author
http://groups.google.com/group/comp.lang.python/t/8c189025b600a46a?hl=en
* /usr/lib/python2.7/subprocess.py:OSError: [Errno 2] No such file or
directory - 1 message, 1 author
http://groups.google.com/group/comp.lang.python/t/7788f17239427eba?hl=en
==============================================================================
TOPIC: Trying to wrap my head around futures and coroutines
http://groups.google.com/group/comp.lang.python/t/9d51a767380b9642?hl=en
==============================================================================
== 1 of 6 ==
Date: Mon, Jan 6 2014 4:56 pm
From: Skip Montanaro
I've been programming for a long while in an event&callback-driven world.
While I am comfortable enough with the mechanisms available (almost 100% of
what I do is in a PyGTK world with its signal mechanism), it's never been
all that satisfying, breaking up my calculations into various pieces, and
thus having my algorithm scattered all over the place.
So, I'm looking for a little guidance. It seems to me that futures,
coroutines, and/or the new Tulip/asyncio package might be my salvation, but
I'm having a bit of trouble seeing exactly how that would work. Let me
outline a simple hypothetical calculation. I'm looking for ways in which
these new facilities might improve the structure of my code.
Let's say I have a dead simple GUI with two buttons labeled, "Do A" and "Do
B". Each corresponds to executing a particular activity, A or B, which take
some non-zero amount of time to complete (as perceived by the user) or
cancel (as perceived by the state of the running system - not safe to run A
until B is complete/canceled, and vice versa). The user, being the fickle
sort that he is, might change his mind while A is running, and decide to
execute B instead. (The roles can also be reversed.) If s/he wants to run
task A, task B must be canceled or allowed to complete before A can be
started. Logically, the code looks something like (I fear Gmail is going to
destroy my indentation):
def do_A():
    when B is complete, _do_A()
    cancel_B()

def do_B():
    when A is complete, _do_B()
    cancel_A()

def _do_A():
    do the real A work here, we are guaranteed B is no longer running

def _do_B():
    do the real B work here, we are guaranteed A is no longer running
cancel_A and cancel_B might be no-ops, in which case they need to start up
the other calculation immediately, if one is pending.
This is pretty simple execution, and if my job was this simple, I'd
probably just keep doing things the way I do now, which is basically to
catch a "complete" or "canceled" signal from the A and B tasks and execute
the opposite task if it's pending. But it's not this simple. In reality
there are lots of, "oh, you want to do X? You need to make sure A, B, and C
are not active." And other stuff like that.
I have this notion that I should be able to write do_A() something like
this:
def do_A():
    cancel_B()
    yield from ... ???
    _do_A()
    ...

or

def do_A():
    future = cancel_B()
    future.on_completion(_do_A)
    ... or ???
with the obvious similar structure for do_B. To my mind's eye, the first
option is preferable, since it's obvious that when control reaches the line
after the yield from statement, it's fine to do the guts of task A.
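Fleshed out a bit, I imagine that first option looking something like this
(completely untested, I'm guessing at the asyncio spellings, and b_task is
just a placeholder for however the running B task gets tracked):

import asyncio

@asyncio.coroutine
def do_A():
    # wait for B to be cancelled (or to finish) before doing A's real work
    yield from cancel_B()
    _do_A()

@asyncio.coroutine
def cancel_B():
    # b_task would be the asyncio.Task wrapping a running B, if any
    if b_task is not None and not b_task.done():
        b_task.cancel()
        yield from asyncio.wait([b_task])  # let the cancellation actually land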
So, is my simpleminded view of the world a possibility with the current
facilities available in 3.3 or 3.4?
Thx,
Skip
== 2 of 6 ==
Date: Mon, Jan 6 2014 6:22 pm
From: MRAB
On 2014-01-07 00:56, Skip Montanaro wrote:
> I've been programming for a long while in an event&callback-driven
> world. While I am comfortable enough with the mechanisms available
> (almost 100% of what I do is in a PyGTK world with its signal
> mechanism), it's never been all that satisfying, breaking up my
> calculations into various pieces, and thus having my algorithm scattered
> all over the place.
>
> So, I'm looking for a little guidance. It seems to me that futures,
> coroutines, and/or the new Tulip/asyncio package might be my salvation,
> but I'm having a bit of trouble seeing exactly how that would work. Let
> me outline a simple hypothetical calculation. I'm looking for ways in
> which these new facilities might improve the structure of my code.
>
> Let's say I have a dead simple GUI with two buttons labeled, "Do A" and
> "Do B". Each corresponds to executing a particular activity, A or B,
> which take some non-zero amount of time to complete (as perceived by the
> user) or cancel (as perceived by the state of the running system - not
> safe to run A until B is complete/canceled, and vice versa). The user,
> being the fickle sort that he is, might change his mind while A is
> running, and decide to execute B instead. (The roles can also be
> reversed.) If s/he wants to run task A, task B must be canceled or
> allowed to complete before A can be started. Logically, the code looks
> something like (I fear Gmail is going to destroy my indentation):
>
> def do_A():
> when B is complete, _do_A()
> cancel_B()
>
> def do_B():
> when A is complete, _do_B()
> cancel_A()
>
> def _do_A():
> do the real A work here, we are guaranteed B is no longer running
>
> def _do_B():
> do the real B work here, we are guaranteed A is no longer running
>
> cancel_A and cancel_B might be no-ops, in which case they need to start
> up the other calculation immediately, if one is pending.
>
> This is pretty simple execution, and if my job was this simple, I'd
> probably just keep doing things the way I do now, which is basically to
> catch a "complete" or "canceled" signal from the A and B tasks and
> execute the opposite task if it's pending. But it's not this simple. In
> reality there are lots of, "oh, you want to do X? You need to make sure
> A, B, and C are not active." And other stuff like that.
>
[snip]
Do you really need to use futures, etc?
What you could do is keep track of which tasks are active.
When the user clicks a button to start a task, the task checks whether
it can run. If it can run, it starts the real work. On the other hand,
if it can't run, it's set as the pending task.
When a task completes or is cancelled, if there is a pending task, that
task is unset as the pending task and retried; it'll then either start
the real work or be set as the pending task again.
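In rough outline (made-up names, untested):

active_task = None   # task currently doing the real work, if any
pending_task = None  # task waiting for its turn, if any

def request(task):
    global active_task, pending_task
    if active_task is None:
        active_task = task
        start_real_work(task)  # whatever actually kicks the job off
    else:
        pending_task = task

def on_finished(task):  # called when a task completes or is cancelled
    global active_task, pending_task
    active_task = None
    if pending_task is not None:
        retry, pending_task = pending_task, None
        request(retry)  # either starts now or becomes pending again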
== 3 of 6 ==
Date: Mon, Jan 6 2014 6:29 pm
From: Cameron Simpson
On 06Jan2014 18:56, Skip Montanaro <skip.montanaro@gmail.com> wrote:
[...]
> Let's say I have a dead simple GUI with two buttons labeled, "Do A" and "Do
> B". Each corresponds to executing a particular activity, A or B, which take
> some non-zero amount of time to complete (as perceived by the user) or
> cancel (as perceived by the state of the running system - not safe to run A
> until B is complete/canceled, and vice versa). The user, being the fickle
> sort that he is, might change his mind while A is running, and decide to
> execute B instead. (The roles can also be reversed.) If s/he wants to run
> task A, task B must be canceled or allowed to complete before A can be
> started.
I take it we can ignore user's hammering on buttons faster than
jobs can run or be cancelled?
> Logically, the code looks something like (I fear Gmail is going to
> destroy my indentation):
>
> def do_A():
> when B is complete, _do_A()
> cancel_B()
[...]
> def _do_A():
> do the real A work here, we are guaranteed B is no longer running
[...]
> cancel_A and cancel_B might be no-ops, in which case they need to start up
> the other calculation immediately, if one is pending.
I wouldn't have cancel_A do this, I'd have do_A do this more overtly.
> This is pretty simple execution, and if my job was this simple, I'd
> probably just keep doing things the way I do now, which is basically to
> catch a "complete" or "canceled" signal from the A and B tasks and execute
> the opposite task if it's pending. But it's not this simple. In reality
> there are lots of, "oh, you want to do X? You need to make sure A, B, and C
> are not active." And other stuff like that.
What's wrong with variations on:
from threading import Lock

lock_A = Lock()
lock_B = Lock()

def do_A():
    with lock_B:
        with lock_A:
            _do_A()

def do_B():
    with lock_A:
        with lock_B:
            _do_B()
You can extend this with multiple locks for A,B,C provided you take
the excluding locks before taking the inner lock for the core task.
Regarding cancellation, I presume your code polls some cancellation
flag regularly during the task?
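i.e. something along these lines (names invented purely for illustration):

from threading import Event

cancel_A_requested = Event()

def _do_A():
    for step in a_work_steps():          # however the task naturally chunks up
        if cancel_A_requested.is_set():  # poll between chunks
            return
        do_step(step)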
Cheers,
--
Cameron Simpson <cs@zip.com.au>
Many are stubborn in pursuit of the path they have chosen, few in pursuit
of the goal. - Friedrich Nietzsche
== 4 of 6 ==
Date: Mon, Jan 6 2014 6:45 pm
From: Cameron Simpson
On 07Jan2014 13:29, I wrote:
> def do_A():
>     with lock_B:
>         with lock_A:
>             _do_A()
Um, of course there would be a cancel_B() up front above, like this:
def do_A():
    cancel_B()
    with lock_B:
        with lock_A:
            _do_A()
I'm with MRAB: you don't really need futures unless you're looking to
move to a multithreaded approach and aren't multithreaded already.
Even then, you don't need futures, just track running threads and
what's meant to run next.
You can do all your blocking with Locks fairly easily unless there
are complexities not yet revealed. (Of course, this is a truism,
but I mean "conveniently".)
Cheers,
--
Cameron Simpson <cs@zip.com.au>
Follow! But! Follow only if ye be men of valor, for the entrance to this cave
is guarded by a creature so foul, so cruel that no man yet has fought with it
and lived! Bones of four fifty men lie strewn about its lair. So,
brave knights, if you do doubt your courage or your strength, come no
further, for death awaits you all with nasty big pointy teeth.
- Tim The Enchanter
== 5 of 6 ==
Date: Mon, Jan 6 2014 7:15 pm
From: Skip Montanaro
From the couple of responses I've seen, I must not have made myself
clear. Let's skip specific hypothetical tasks. Using coroutines,
futures, or other programming paradigms that have been introduced in
recent versions of Python 3.x, can traditionally event-driven code be
written in a more linear manner so that the overall algorithms
implemented in the code are easier to follow? My code is not
multi-threaded, so using threads and locking is not really part of the
picture. In fact, I'm thinking about this now precisely because the
first sentence of the asyncio documentation mentions single-threaded
concurrent code: "This module provides infrastructure for writing
single-threaded concurrent code using coroutines, multiplexing I/O
access over sockets and other resources, running network clients and
servers, and other related primitives."
I'm trying to understand if it's possible to use coroutines or objects
like asyncio.Future to write more readable code, that today would be
implemented using callbacks, GTK signals, etc.
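For what it's worth, I gather asyncio.Future spells my hypothetical
on_completion hook as add_done_callback, so the callback-flavoured version
would presumably be something like (untested):

def do_A():
    future = cancel_B()  # assuming cancel_B hands back an asyncio.Future
    future.add_done_callback(lambda fut: _do_A())

but what I'm really hoping for is the linear, yield-from style.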
S
== 6 of 6 ==
Date: Mon, Jan 6 2014 7:23 pm
From: MRAB
On 2014-01-07 02:29, Cameron Simpson wrote:
> On 06Jan2014 18:56, Skip Montanaro <skip.montanaro@gmail.com> wrote:
> [...]
>> Let's say I have a dead simple GUI with two buttons labeled, "Do A" and "Do
>> B". Each corresponds to executing a particular activity, A or B, which take
>> some non-zero amount of time to complete (as perceived by the user) or
>> cancel (as perceived by the state of the running system - not safe to run A
>> until B is complete/canceled, and vice versa). The user, being the fickle
>> sort that he is, might change his mind while A is running, and decide to
>> execute B instead. (The roles can also be reversed.) If s/he wants to run
>> task A, task B must be canceled or allowed to complete before A can be
>> started.
>
> I take it we can ignore user's hammering on buttons faster than
> jobs can run or be cancelled?
>
>> Logically, the code looks something like (I fear Gmail is going to
>> destroy my indentation):
>>
>> def do_A():
>> when B is complete, _do_A()
>> cancel_B()
> [...]
>> def _do_A():
>> do the real A work here, we are guaranteed B is no longer running
> [...]
>> cancel_A and cancel_B might be no-ops, in which case they need to start up
>> the other calculation immediately, if one is pending.
>
> I wouldn't have cancel_A do this, I'd have do_A do this more overtly.
>
>> This is pretty simple execution, and if my job was this simple, I'd
>> probably just keep doing things the way I do now, which is basically to
>> catch a "complete" or "canceled" signal from the A and B tasks and execute
>> the opposite task if it's pending. But it's not this simple. In reality
>> there are lots of, "oh, you want to do X? You need to make sure A, B, and C
>> are not active." And other stuff like that.
>
> What's wrong with variations on:
>
> from threading import Lock
>
> lock_A = Lock()
> lock_B = Lock()
>
> def do_A():
>     with lock_B:
>         with lock_A:
>             _do_A()
>
> def do_B():
>     with lock_A:
>         with lock_B:
>             _do_B()
>
It's safer to lock in the same order to reduce the chance of deadlock:
def do_A():
    with lock_A:
        with lock_B:
            _do_A()

def do_B():
    with lock_A:
        with lock_B:
            _do_B()
> You can extend this with multiple locks for A,B,C provided you take
> the excluding locks before taking the inner lock for the core task.
>
> Regarding cancellation, I presume your code polls some cancellation
> flag regularly during the task?
>
==============================================================================
TOPIC: the Gravity of Python 2
http://groups.google.com/group/comp.lang.python/t/0afb505736567141?hl=en
==============================================================================
== 1 of 7 ==
Date: Mon, Jan 6 2014 5:00 pm
From: Chris Angelico
On Tue, Jan 7, 2014 at 11:27 AM, Devin Jeanpierre
<jeanpierreda@gmail.com> wrote:
> For example, I imagine that it is kind of _silly_ to have a
> __future__.disable_str_autoencoding on a per-module basis, because
> some modules' functions will fail when they are given the wrong type,
> and some won't -- but in the context of making migration easier, that
> silliness is probably OK.
At what point does the auto-encoding happen, though? If a function
calls another function calls another function, at what point do you
decide that this ought to have become a str?
I suspect there'll be quite a few problems that can't be solved
per-module. The division change is easy, because it just changes the
way code gets compiled (there's still "integer division" and "float
division", it's just that / gets compiled into the latter instead of
the former). With print_function I can imagine there might be some
interactions that are affected, but nothing too major. Deploying
new-style classes exclusively could be minorly problematic, but it'd
probably work (effectively, a future directive stipulates that
everything in this module inherits from object - technically should
work, but might cause code readability confusion). But there are much
subtler issues. Compare this code in Python 2 and Python 3:
def f1():
    return {1:2, 11:22, 111:222}

def f2(d):
    return d.keys()

def f3(k):
    return k.pop()

process_me = f2(f1())
try:
    while True:
        current = f3(process_me)
        # ....
except IndexError:
    pass
Obviously this works in Python 2, and fails in Python 3 (because
keys() returns a view). Now imagine these are four separate modules.
Somewhere along the way, something needs to pass the view through
list() to make it poppable. Or, putting it the other way, somewhere
there needs to be an alert saying that this won't work in Py3. Whose
responsibility is it?
* Is it f1's responsibility to create a different sort of dict that
has a keys() method that returns a view?
* Is it f2's responsibility to notice that it's calling keys() on a
dictionary, and that it should warn that this will change (or switch
to compatibility mode, or raise error, or whatever)? This is where the
error actually is.
* Is it f3's responsibility? This one I'm pretty sure is not so.
* Is it the main routine's job to turn process_me into a list? I don't
think so. There's nothing in that code that indicates that it's using
either a dictionary or a list.
I'd put the job either on f1 or on f2. A __future__ directive could
change the interpretation of the { } literal syntax and have it return
a dictionary with a keys view, but the fix would be better done in f2
- where it's not obvious that it's using a dictionary at all.
I'm not sure that a future directive can really solve this one. Maybe
a command-line argument could, but that doesn't help with the gradual
migration of individual modules.
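(The polyglot fix in f2 itself is tiny, of course:

def f2(d):
    return list(d.keys())  # same behaviour on 2.x and 3.x

but the point is that somebody has to know to write it that way.)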
ChrisA
== 2 of 7 ==
Date: Mon, Jan 6 2014 6:00 pm
From: Chris Angelico
On Tue, Jan 7, 2014 at 12:55 PM, Devin Jeanpierre
<jeanpierreda@gmail.com> wrote:
> What if we decide there is no single source of responsibility, and it
> can't be limited exactly to a module, and make a __future__ feature
> the best we can regardless? We can still exact some benefit from a
> "sloppy" __future__ feature: we can still move code piecemeal.
I worry that it's starting to get into the realm of magic, though.
Maybe dict.keys() isn't the best example (you can easily make your
code 2+3 compat by just calling list() on it immediately, which is
effectively "from __past__ import absence_of_views"), but the issue is
the same with string autoencodings. It's really hard to define that
the + operator will do magic differently based on a future directive,
and changing the object ("this string will not autoencode") means
you're not tweaking things per-module, and behaviour will change
and/or break based on where some object was created, rather than the
settings on the module with the code in it.
ChrisA
== 3 of 7 ==
Date: Mon, Jan 6 2014 5:55 pm
From: Devin Jeanpierre
On Mon, Jan 6, 2014 at 5:00 PM, Chris Angelico <rosuav@gmail.com> wrote:
> On Tue, Jan 7, 2014 at 11:27 AM, Devin Jeanpierre
> <jeanpierreda@gmail.com> wrote:
>> For example, I imagine that it is kind of _silly_ to have a
>> __future__.disable_str_autoencoding on a per-module basis, because
>> some modules' functions will fail when they are given the wrong type,
>> and some won't -- but in the context of making migration easier, that
>> silliness is probably OK.
>
> At what point does the auto-encoding happen, though? If a function
> calls another function calls another function, at what point do you
> decide that this ought to have become a str?
Python has a defined place where it happens. For example the __add__
method of str objects can do it.
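Concretely, the Python 2 behaviour under discussion:

print('abc' + u'def')   # u'abcdef' -- the str is silently decoded as ASCII
print('\xe9' + u'def')  # UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9

It's that implicit ASCII decode that a hypothetical
disable_str_autoencoding would switch off.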
As you note below for dicts, the place where you change behavior can
change, though. e.g. maybe all str objects created in a module cannot
be coerced anywhere else, or maybe it's coercions that happen inside a
module that are disabled. The former is more efficient, but it has
effects that creep out transitively in the most difficult way
possible. The latter is essentially just an API change (rather than
type change), and so easy enough, but it's prohibitively expensive, in
a way that makes all code everywhere in Python slower. In the end, we
can still choose one of those, and in principle the __future__ feature
would work, even if it's not the best. (In fact, if you want, you
could even do both.)
> I suspect there'll be quite a few problems that can't be solved
> per-module. The division change is easy, because it just changes the
> way code gets compiled (there's still "integer division" and "float
> division", it's just that / gets compiled into the latter instead of
> the former). With print_function I can imagine there might be some
> interactions that are affected, but nothing too major. Deploying
> new-style classes exclusively could be minorly problematic, but it'd
> probably work (effectively, a future directive stipulates that
> everything in this module inherits from object - technically should
> work, but might cause code readability confusion). But there are much
> subtler issues. Compare this code in Python 2 and Python 3:
>
> def f1():
>     return {1:2, 11:22, 111:222}
>
> def f2(d):
>     return d.keys()
>
> def f3(k):
>     return k.pop()
>
> process_me = f2(f1())
> try:
>     while True:
>         current = f3(process_me)
>         # ....
> except IndexError:
>     pass
>
> Obviously this works in Python 2, and fails in Python 3 (because
> keys() returns a view). Now imagine these are four separate modules.
> Somewhere along the way, something needs to pass the view through
> list() to make it poppable. Or, putting it the other way, somewhere
> there needs to be an alert saying that this won't work in Py3. Whose
> responsibility is it?
>
> * Is it f1's responsibility to create a different sort of dict that
> has a keys() method that returns a view?
> * Is it f2's responsibility to notice that it's calling keys() on a
> dictionary, and that it should warn that this will change (or switch
> to compatibility mode, or raise error, or whatever)? This is where the
> error actually is.
> * Is it f3's responsibility? This one I'm pretty sure is not so.
> * Is it the main routine's job to turn process_me into a list? I don't
> think so. There's nothing in that code that indicates that it's using
> either a dictionary or a list.
>
> I'd put the job either on f1 or on f2. A __future__ directive could
> change the interpretation of the { } literal syntax and have it return
> a dictionary with a keys view, but the fix would be better done in f2
> - where it's not obvious that it's using a dictionary at all.
>
> I'm not sure that a future directive can really solve this one. Maybe
> a command-line argument could, but that doesn't help with the gradual
> migration of individual modules.
What if we decide there is no single source of responsibility, and it
can't be limited exactly to a module, and make a __future__ feature
the best we can regardless? We can still exact some benefit from a
"sloppy" __future__ feature: we can still move code piecemeal.
If whatever __future__ feature there is, when enabled on the module
with f2 (or, in another case, f1), causes an error in f3, that's a
little misleading in that the error is in the wrong place, but it
doesn't fundamentally mean we can't move the codebase piecemeal. It
means that the change we make to the file for f2 (or f1) might require
some additional changes elsewhere or internally due to outside-facing
changes in semantics. It makes the required changes larger than in the
case of division, like you say, but it's still potentially smaller and
simpler than in the case of an atomic migration to Python 3.
-- Devin
== 4 of 7 ==
Date: Mon, Jan 6 2014 6:15 pm
From: Devin Jeanpierre
On Mon, Jan 6, 2014 at 6:00 PM, Chris Angelico <rosuav@gmail.com> wrote:
> On Tue, Jan 7, 2014 at 12:55 PM, Devin Jeanpierre
> <jeanpierreda@gmail.com> wrote:
>> What if we decide there is no single source of responsibility, and it
>> can't be limited exactly to a module, and make a __future__ feature
>> the best we can regardless? We can still exact some benefit from a
>> "sloppy" __future__ feature: we can still move code piecemeal.
>
> I worry that it's starting to get into the realm of magic, though.
> Maybe dict.keys() isn't the best example (you can easily make your
> code 2+3 compat by just calling list() on it immediately, which is
> effectively "from __past__ import absence_of_views"), but the issue is
> the same with string autoencodings. It's really hard to define that
> the + operator will do magic differently based on a future directive,
> and changing the object ("this string will not autoencode") means
> you're not tweaking things per-module, and behaviour will change
> and/or break based on where some object was created, rather than the
> settings on the module with the code in it.
Well, what's "magic"? There are two ideas that I know roughly how to
implement, and that maybe would make the world better.
Behaviour that changes or breaks based on what module some object was
created in sounds pretty bad, but I don't think it's so bad that it's
not a tolerable solution. The error messages can be made obvious, and
it would still allow a gradual migration. Allowing the exception to be
toned down to a warning might help with the gradual move without
breaking code unpredictably. It's bad, but if there were no better
alternative, I would be OK with it. (i.e. I think it's better than
nothing.)
The other alternative is having + (etc.) do something different
depending on what module it's in. It's not hard to do: add a condition
to all places where Python automatically converts, and check the call
stack to see what module you're in. I mostly retract my worries about
performance; for some reason I was thinking you'd do it on all
operations, but it only has to be checked if the string is about to be
automatically encoded or decoded (and that's already slow). It still
has the effect that your API is different and you raise exceptions
when you didn't before, which usually affects your callers more than
it affects you, but I feel like it propagates outside of the module
more "nicely".
It's "magic" in that looking at the call stack is "magic", but that's
the kind of magic that you can even do without touching the
interpreter, using functions from sys. I don't think the definition of
when exactly automatic encoding/decoding occurs is magical. It's
already a part of Python behaviour, changing it isn't outrageous.
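Roughly like this, say (untested, _getframe is CPython-specific, and
modules_with_the_future_import is a made-up registry):

import sys

def _calling_module(depth=2):
    # module name of whoever called the function that called us
    return sys._getframe(depth).f_globals.get('__name__')

def _autoencoding_disabled():
    return _calling_module() in modules_with_the_future_import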
-- Devin
== 5 of 7 ==
Date: Mon, Jan 6 2014 6:28 pm
From: Chris Angelico
On Tue, Jan 7, 2014 at 1:15 PM, Devin Jeanpierre <jeanpierreda@gmail.com> wrote:
> The other alternative is having + (etc.) do something different
> depending on what module it's in. It's not hard to do: add a condition
> to all places where Python automatically converts, and check the call
> stack to see what module you're in.
Currently, there are __add__ methods (and __radd__ but let's focus on
__add__) on a bunch of objects, which determine what happens when you
use the + operator.
class Foo(str):
    def __add__(self, other):
        if isinstance(other, unicode): return self + other.encode("cp500")
        return str.__add__(self, other)
What happens if you have the __future__ directive disabling
autoencoding on (a) the module in which this class is defined, (b) the
one in which it was instantiated, (c) the one that actually uses
the +?
This is why I think it's getting magical. Far better to do this sort
of change on a per-application basis - maybe with a warning parameter
that you can enable when running your test suite, as has been
suggested (and in many cases implemented) for other 2-vs-3 problems.
ChrisA
== 6 of 7 ==
Date: Mon, Jan 6 2014 7:12 pm
From: Devin Jeanpierre
On Mon, Jan 6, 2014 at 6:28 PM, Chris Angelico <rosuav@gmail.com> wrote:
> class Foo(str):
>     def __add__(self, other):
>         if isinstance(other, unicode): return self + other.encode("cp500")
>         return str.__add__(self, other)
>
> What happens if you have the __future__ directive disabling
> autoencoding on (a) the module in which this class is defined, (b) the
> one in which it was instantiated, (c) the one that actually uses
> the +?
In both methods I described, all uses of instances of the class are
changed, but only in case A. That's a really good point, I hadn't
considered that the second case could be converted into the first.
> This is why I think it's getting magical.
Yes, it's magical, but to be fair that's stack inspection as it always is.
I am OK with a little ugliness if it makes actual work easier.
> Far better to do this sort
> of change on a per-application basis - maybe with a warning parameter
> that you can enable when running your test suite, as has been
> suggested (and in many cases implemented) for other 2-vs-3 problems.
Doing a flag like that that enables a backwards incompatible change
does in fact address that issue you were worried about originally, so
that's something. And feature-by-feature moves are, like the OP said,
still lower cost than a wholesale move.
In the end a gradual transition can still be done with the polyglot
approach, but I'm not happy that there's no way to enforce/test a
polyglot conversion until it is complete. Any kind of granularity
would have helped. :(
-- Devin
== 7 of 7 ==
Date: Mon, Jan 6 2014 7:59 pm
From: Chris Angelico
On Tue, Jan 7, 2014 at 2:12 PM, Devin Jeanpierre <jeanpierreda@gmail.com> wrote:
> Doing a flag like that that enables a backwards incompatible change
> does in fact address that issue you were worried about originally, so
> that's something. And feature-by-feature moves are, like the OP said,
> still lower cost than a wholesale move.
>
> In the end a gradual transition can still be done with the polyglot
> approach, but I'm not happy that there's no way to enforce/test a
> polyglot conversion until it is complete. Any kind of granularity
> would have helped. :(
Yeah, feature-by-feature is possible; but it doesn't help with one of
the big (and common) complaints, that a library can't migrate without
the application migrating. The way I see it, polyglot coding should be
considered a superset of 2.7 coding, at which point there should
hopefully be some perceived value in boasting "Requires 2.7 *OR
3.3*!", and ideally that value should be greater than the cost of
supporting both. There are two ways to achieve that: Increase the
perceived value, and decrease the cost. Making 3.3 (or 3.4, or
whatever) look better is simply a matter of there being more
applications (or potential applications) written for that, and that's
going to be largely circular, and it's completely not in the hands of
Python development, so the focus has to be on decreasing the cost.
Hence the question: What are the breakages between 2.7 and 3.3, and
which ones can be solved per-module? If the solution to the breakage
has to be done per-application, that's a problem, even if it is
feature-by-feature. But stuff that can be changed per-module can
entirely eliminate the cost of polyglot code (for that feature), as
it'll simply be written in the Py3 way, with one little future
directive at the top.
ChrisA
==============================================================================
TOPIC: "More About Unicode in Python 2 and 3"
http://groups.google.com/group/comp.lang.python/t/fc13dd8f17f64a45?hl=en
==============================================================================
== 1 of 9 ==
Date: Mon, Jan 6 2014 5:05 pm
From: Chris Angelico
On Tue, Jan 7, 2014 at 11:23 AM, Dennis Lee Bieber
<wlfraed@ix.netcom.com> wrote:
>>Uhh, I think you're the only one here who has that nightmare, like
>>Chris Knight with his sun-god robes and naked women throwing pickles
>>at him.
>>
>
> Will somebody please wash out my brain... "Pickles straight from the
> jar, or somewhat 'used'?"
I was making a reference to the movie "Real Genius", which involves
lasers, popcorn, and geeks. And it's been explored by Mythbusters. If
you haven't seen it, do!
ChrisA
== 2 of 9 ==
Date: Mon, Jan 6 2014 5:26 pm
From: "Rhodri James"
On Mon, 06 Jan 2014 21:17:06 -0000, Gene Heskett <gheskett@wdtv.com> wrote:
> On Monday 06 January 2014 16:16:13 Terry Reedy did opine:
>
>> On 1/6/2014 9:32 AM, Gene Heskett wrote:
>> > And from my lurking here, its quite plain to me that 3.x python has a
>> > problem with everyday dealing with strings.
>>
>> Strings of what? And what specific 'everyday' problem are you referring
>> to?
>
> Strings start a new thread here at nominally weekly intervals. Seems to
> me that might be usable info.
I haven't actually checked subject lines, but I'm pretty sure GUIs raise
more questions than that by some considerable margin.
--
Rhodri James *-* Wildebeest Herder to the Masses
== 3 of 9 ==
Date: Mon, Jan 6 2014 5:35 pm
From: Chris Angelico
On Tue, Jan 7, 2014 at 12:26 PM, Rhodri James <rhodri@wildebst.org.uk> wrote:
> On Mon, 06 Jan 2014 21:17:06 -0000, Gene Heskett <gheskett@wdtv.com> wrote:
>
>> On Monday 06 January 2014 16:16:13 Terry Reedy did opine:
>>
>>> On 1/6/2014 9:32 AM, Gene Heskett wrote:
>>> > And from my lurking here, its quite plain to me that 3.x python has a
>>> > problem with everyday dealing with strings.
>>>
>>> Strings of what? And what specific 'everyday' problem are you referring
>>> to?
>>
>>
>> Strings start a new thread here at nominally weekly intervals. Seems to
>> me that might be usable info.
>
>
> I haven't actually checked subject lines, but I'm pretty sure GUIs raise
> more questions than that by some considerable margin.
About the difference between Py2 and Py3? Most of the GUI toolkits
work fine on both.
ChrisA
== 4 of 9 ==
Date: Mon, Jan 6 2014 6:15 pm
From: "Rhodri James"
On Tue, 07 Jan 2014 01:35:54 -0000, Chris Angelico <rosuav@gmail.com>
wrote:
> On Tue, Jan 7, 2014 at 12:26 PM, Rhodri James <rhodri@wildebst.org.uk>
> wrote:
>> On Mon, 06 Jan 2014 21:17:06 -0000, Gene Heskett <gheskett@wdtv.com>
>> wrote:
>>
>>> On Monday 06 January 2014 16:16:13 Terry Reedy did opine:
>>>
>>>> On 1/6/2014 9:32 AM, Gene Heskett wrote:
>>>> > And from my lurking here, its quite plain to me that 3.x python has
>>>> a
>>>> > problem with everyday dealing with strings.
>>>>
>>>> Strings of what? And what specific 'everyday' problem are you
>>>> referring
>>>> to?
>>>
>>>
>>> Strings start a new thread here at nominally weekly intervals. Seems
>>> to
>>> me that might be usable info.
>>
>>
>> I haven't actually checked subject lines, but I'm pretty sure GUIs raise
>> more questions than that by some considerable margin.
>
> About the difference between Py2 and Py3? Most of the GUI toolkits
> work fine on both.
Sorry, I assumed we were talking about threads in general. Py2 vs Py3
threads that aren't interminable trolling don't show up often enough to
register for me; the current set is something of an exception.
--
Rhodri James *-* Wildebeest Herder to the Masses
== 5 of 9 ==
Date: Mon, Jan 6 2014 6:23 pm
From: Terry Reedy
On 1/6/2014 5:25 PM, Ned Batchelder wrote:
> I do respect you, and all the core developers. As I've said elsewhere
> in the thread, I greatly appreciate everything you do. I dedicate a
> great deal of time and energy to the Python community, primarily because
> of the amazing product that you have all built.
Let me add a quote from Nick's essay (see other new thread).
'''Ned Batchelder's wonderful Pragmatic Unicode talk/essay could just as
well be titled "This is why Python 3 exists".'''
http://nedbatchelder.com/text/unipain.html
I do not know if Nick's re-titling expression your intent or not, but it
expresses my feeling also. Part of what makes the presentation great is
the humor, which we need more of. I recommend it to anyone who has not
seen it.
--
Terry Jan Reedy
== 6 of 9 ==
Date: Mon, Jan 6 2014 6:29 pm
From: Chris Angelico
On Tue, Jan 7, 2014 at 1:15 PM, Rhodri James <rhodri@wildebst.org.uk> wrote:
> Sorry, I assumed we were talking about threads in general. Py2 vs Py3
> threads that aren't interminable trolling don't show up often enough to
> register for me; the current set is something of an exception.
Sure. In that case, I would agree that yes, GUI coding is at least
comparable to Unicode in terms of number of threads. I don't have
figures, but it wouldn't surprise me to learn that either exceeds the
other.
ChrisA
== 7 of 9 ==
Date: Mon, Jan 6 2014 6:42 pm
From: Terry Reedy
On 1/6/2014 6:24 PM, Chris Angelico wrote:
> On Tue, Jan 7, 2014 at 10:06 AM, Antoine Pitrou <solipsis@pitrou.net> wrote:
>> Terry Reedy <tjreedy <at> udel.edu> writes:
>>>
>>> On 1/6/2014 11:29 AM, Antoine Pitrou wrote:
>>>
>>>> People don't use? According to available figures, there are more
>> downloads of
>>>> Python 3 than downloads of Python 2 (Windows installers, mostly):
>>>> http://www.python.org/webstats/
>>>
>>> While I would like the claim to be true, I do not see 2 versus 3
>>> downloads on that page. Did you mean another link?
>>
>> Just click on a recent month, scroll down to the "Total URLs By kB"
>> table, and compute the sum of the largest numbers for each Python
>> version.
>
> Here's what I see there (expanding on what I said in the other post,
> which was based on one table further up, URLs by hit count) for
> December:
>
> 3.3.3: 1214571
> - amd64 627672
> - win32 586899
> 2.7.6: 1049096
> - win32 607972
> - amd64 441124
Earlier today, I was guessing 1000000 Python programmers. I do not know
how downloads translate to programmers, but I may have been a bit low.
> The next highest number is 167K downloads, so I'm going to ignore
> their figures as they won't make more than 15% difference in these
> stats. This is 2263667 total downloads of the current versions of
> Python, 46% 2.7.6 and 54% 3.3.3. That's not incredibly significant
> statistically, but certainly it disproves the notion that 3.x isn't
> used at all.
Last February:
1 553203 0.82% /ftp/python/3.3.0/python-3.3.0.msi
2 498926 0.74% /ftp/python/2.7.3/python-2.7.3.msi
3 336601 0.50% /ftp/python/3.3.0/python-3.3.0.amd64.msi
4 241796 0.36% /ftp/python/2.7.3/python-2.7.3.amd64.msi
What has really increased are the amd64 numbers. I am pleased to see
that the bug-fix releases get downloaded so heavily.
--
Terry Jan Reedy
== 8 of 9 ==
Date: Mon, Jan 6 2014 8:01 pm
From: Chris Angelico
On Tue, Jan 7, 2014 at 1:42 PM, Terry Reedy <tjreedy@udel.edu> wrote:
> I am pleased to see that the bug-fix releases get downloaded so heavily.
That's a tricky one, though. It's impossible to say how many 3.3.1
downloads were upgrading from 3.3.0, and how many were simply "I want
Python, give me the latest". But either way, it's still a lot of
downloads.
ChrisA
== 9 of 9 ==
Date: Mon, Jan 6 2014 8:01 pm
From: Steven D'Aprano
On Mon, 06 Jan 2014 16:32:01 -0500, Ned Batchelder wrote:
> On 1/6/14 12:50 PM, Steven D'Aprano wrote:
>> Ned Batchelder wrote:
>>
>>> You are still talking about whether Armin is right, and whether he
>>> writes well, about flaws in his statistics, etc. I'm talking about
>>> the fact that an organization (Python core development) has a product
>>> (Python 3) that is getting bad press. Popular and vocal customers
>>> (Armin, Kenneth, and others) are unhappy. What is being done to make
>>> them happy? Who is working with them? They are not unique, and their
>>> viewpoints are not outliers.
>>>
>>> I'm not talking about the technical details of bytes and Unicode. I'm
>>> talking about making customers happy.
>>
>> Oh? How much did Armin pay for his Python support? If he didn't pay,
>> he's not a customer. He's a user.
>
> I use the term "customer" in the larger sense of, "someone using your
> product that you are trying to please." I'd like to think that an open
> source project with only users would treat them as customers. Not in
> the sense of a legal obligation in exchange for money, but in the sense
> that the point of the work is to please them.
But isn't the strength of open source that people write software that
pleases *themselves*, and if others can make use of it, we all win? If GvR
wrote Python to please others, it would have braces, it would be more
like Perl and C, and it would probably be a mess.
All else being equal, it's better for open source software if your users
are happy than if they are unhappy, but at what cost? You can't make
everyone happy.
[...]
> I was only avoiding talking about Unicode vs bytes because I'm not the
> one who needs a better way to do it, Armin and Kenneth are. You seem to
> be arguing from the standpoint of, "I've never had problems, so there
> are no problems."
Certainly not. I've tried hard not to say, or imply, that Armin is wrong.
I know he is an extremely competent Python developer, and I don't
understand his problem domain well enough to categorically say he's
wrong. I *suspect* he's doing something wrong, or at least sub-optimally,
and making things more difficult for himself than they need be, but what
do I know? Maybe that's just my wishful thinking.
> I suspect an undercurrent here is also the difference between writing
> Python 3 code, and writing code that can run on both Python 2 and 3.
Of course. It's always harder to target multiple versions with
incompatibilities than a single version.
> In my original post, I provided two possible responses, one of which
> you've omitted: work with Armin to explain the easier way that he has
> missed.
Isn't that just another way of saying what I said earlier?
"try to educate Armin (and others) so they stop blaming Python for their
own errors"
Although your version is more diplomatic.
> It sounds like you think there isn't an easier way, and that's
> OK? I would love to see a Python 3 advocate work with Armin or Kenneth
> on the code that's caused them such pain, and find a way to make it
> good.
Is it a good thing that there's code that's hard to write in Python 3?
Not in isolation. But sometimes when you design a language, you
implicitly or explicitly decide that certain types of code will not be a
good fit for that language: you can't write an efficient operating system
kernel in Python. Maybe you can't easily do the sort of low-level network
stuff that Armin is trying to do in Python 3. But I doubt it. I expect
that probably half the problem is that he's missing something, or doing
something wrong, and the other half is that Python 3 genuinely makes it
harder than it should be. But again, what do I know?
> It's clear from other discussions happening elsewhere that there is the
> possibility of improving the situation, for example PEP 460 proposing
> "bytes % args" and "bytes.format(args)". That's good.
I don't know... if byte formatting ends up encouraging people to use
bytes when they ought to use strings, maybe it will be an attractive
nuisance and in the long-term do more harm than good. But I'm keeping an
open mind.
--
Steven
==============================================================================
TOPIC: django question
http://groups.google.com/group/comp.lang.python/t/4beb455a8db96ba1?hl=en
==============================================================================
== 1 of 2 ==
Date: Mon, Jan 6 2014 5:57 pm
From: Roy Smith
In article <f1732cf8-b829-4162-99e5-91e0d82cc0f2@googlegroups.com>,
CM <cmpython@gmail.com> wrote:
> On Sunday, January 5, 2014 4:50:55 PM UTC-5, Roy Smith wrote:
>
> > One of the things we try to do is put as little in the views as
> > possible. Views should be all about accepting and validating request
> > parameters, and generating output (be that HTML via templates, or JSON,
> > or whatever). All the business logic should be kept isolated from the
> > views. The better (and more disciplined) you are about doing this, the
> > easier it will be to move your business logic to a different framework.
>
> I just started playing with Django and hadn't realized that yet. So,
> what, you have other modules that you import into Views that you call
> functions from to do the business logic?
Yes, exactly. There's nothing magic about a django view. It's just a
function which is passed an instance of HttpRequest (and possibly a few
other things, depending on your url mapping), and which is expected to
return an instance of HttpResponse. Within that framework, it can call
any other functions it wants.
For example, http://legalipsum.com/ is a silly little site I built in
django. Here's the view for the home page:
from django.shortcuts import render
from django.views.decorators.http import require_GET

from markov import markov

files = markov.corpus_files()
chainer = markov.from_files(files)

@require_GET
def home(request):
    count = request.GET.get('count', 0)
    try:
        count = int(count)
    except ValueError:
        count = 0
    paragraphs = [chainer.paragraph(3, 3) for i in range(count)]
    ctx = {
        'paragraphs': paragraphs,
        'selected': str(count),
        'pagename': 'home',
    }
    return render(request, 'legal_ipsum/home.html', ctx)
Notice how the view knows nothing about generating the actual markov
text. That's in another module, which lives somewhere on my PYTHONPATH.
Also, the view knows nothing about how the page is laid out; only the
templates know that. If I decided to redo this in tornado or flask,
whatever, I would need to rewrite my view, but there's not much to
rewrite. Most of the logic is in the Markov chainer, and that would
carry over to the new implementation unchanged.
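For example, the Flask version of that view would be roughly this (untested,
written from memory of Flask's API):

from flask import Flask, request, render_template
from markov import markov

app = Flask(__name__)
chainer = markov.from_files(markov.corpus_files())

@app.route('/')
def home():
    try:
        count = int(request.args.get('count', 0))
    except ValueError:
        count = 0
    paragraphs = [chainer.paragraph(3, 3) for i in range(count)]
    return render_template('legal_ipsum/home.html',
                           paragraphs=paragraphs,
                           selected=str(count),
                           pagename='home')

Same shape, same chainer, different plumbing.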
BTW, my suggestion to keep business logic and presentation code distinct
isn't unique to django, it's a good idea in pretty much all systems.
== 2 of 2 ==
Date: Mon, Jan 6 2014 11:55 pm
From: CM
On Monday, January 6, 2014 8:57:22 PM UTC-5, Roy Smith wrote:
> Yes, exactly. There's nothing magic about a django view. It's just a
> function which is passed an instance of HttpRequest (and possibly a few
> other things, depending on your url mapping), and which is expected to
> return an instance of HttpResponse. Within that framework, it can call
> any other functions it wants.
>
> For example, http://legalipsum.com/ is a silly little site I built in
> django. Here's the view for the home page:
Nice!
> Notice how the view knows nothing about generating the actual markov
> text. That's in another module, which lives somewhere on my PYTHONPATH.
> Also, the view knows nothing about how the page is laid out; only the
> templates know that. If I decided to redo this in tornado or flask,
> whatever, I would need to rewrite my view, but there's not much to
> rewrite. Most of the logic is in the Markov chainer, and that would
> carry over to the new implementation unchanged.
>
> BTW, my suggestion to keep business logic and presentation code distinct
> isn't unique to django, it's a good idea in pretty much all systems.
Thanks for these points, helpful to see in practice. I'm trying to be
more mindful of good coding practices, and this will be helpful as I continue
to learn Django and to make web applications generally.
==============================================================================
TOPIC: Python 3 Q & A (Nick C.) updated
http://groups.google.com/group/comp.lang.python/t/8c189025b600a46a?hl=en
==============================================================================
== 1 of 1 ==
Date: Mon, Jan 6 2014 6:07 pm
From: Terry Reedy
As a counterpoint to 2 versus 3:
http://python-notes.curiousefficiency.org/en/latest/python3/questions_and_answers.html
by Nick Coghlan, one of Python's major core developers.
A couple of points related to the other threads.
1. There were real social and technical reasons for the non-2.8 pep (404).
2. Python 3 is a work in progress with respect to the new text model. In
particular wire protocol programming is a known problem area and will
get attention. But it is only a small part of the total Python universe.
--
Terry Jan Reedy
==============================================================================
TOPIC: /usr/lib/python2.7/subprocess.py:OSError: [Errno 2] No such file or
directory
http://groups.google.com/group/comp.lang.python/t/7788f17239427eba?hl=en
==============================================================================
== 1 of 1 ==
Date: Mon, Jan 6 2014 9:21 pm
From: Chris Angelico
On Tue, Jan 7, 2014 at 4:20 AM, Marco Ippolito <ippolito.marco@gmail.com> wrote:
>   File "/usr/local/lib/python2.7/dist-packages/nltk/classify/megam.py", line 167, in call_megam
>     p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
>   File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
>     errread, errwrite)
>   File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child
>     raise child_exception
> OSError: [Errno 2] No such file or directory
>
> subprocess.py exists:
The problem isn't inside subprocess itself, which is raising the
error. It looks like the problem is with the command being executed
via Popen. I would look in megam.py (the first line that I quoted
here) and see what the command is, and then see if you have that
installed correctly. It ought to be listed as a prerequisite.
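You can reproduce the same error with any nonexistent command (made-up name
here):

import subprocess
try:
    subprocess.Popen(['no-such-program'], stdout=subprocess.PIPE)
except OSError as e:
    print(e)  # [Errno 2] No such file or directory

so the thing to check is whether the external program that megam.py wants to
run is actually installed and on your PATH.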
ChrisA
==============================================================================
You received this message because you are subscribed to the Google Groups "comp.lang.python"
group.
To post to this group, visit http://groups.google.com/group/comp.lang.python?hl=en
To unsubscribe from this group, send email to comp.lang.python+unsubscribe@googlegroups.com
To change the way you get mail from this group, visit:
http://groups.google.com/group/comp.lang.python/subscribe?hl=en
To report abuse, send email explaining the problem to abuse@googlegroups.com
==============================================================================
Google Groups: http://groups.google.com/?hl=en