[PEP 444] Future- and Generator-Based Async Idea


[PEP 444] Future- and Generator-Based Async Idea

Alice Bevan–McGregor
Warning: this assumes we're running on bizarro-world PEP 444 that
mandates applications are generators.  Please do not dismiss this idea
out of hand but give it a good look and maybe some feedback.  ;)

--

Howdy!

I've finished touching up the p-code illustrating my idea of using
generators to implement async functionality within a WSGI application
and middleware, including the idea of a wsgi2ref-supplied decorator to
simplify middleware.

        https://gist.github.com/770743

There may be a few typos in there; I switched from passing back the
returned value of the future to passing back the future itself in order
to handle exceptions better (i.e. not requiring utter insanity in the
middleware to determine the true source of an exception and the need to
pass it along).
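
Roughly, the difference looks like this (illustrative only; the gist has
the real semantics):

    # value-passing:   data = yield some_future    # generator resumed with the *result*;
    #                                              # exceptions surface in the middleware/server
    #
    # future-passing:  future = yield some_future  # generator resumed with the future itself
    #                  data = future.result()      # exceptions surface right here, in the app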

The second middleware demonstration (using a decorator) makes
middleware look a lot more like an application: yielding futures or a
response, with the addition of yielding an application callable, which
is not explored in the first (long, but trivial) example.  I believe
this should cover 99% of middleware use cases, including interactive
debugging, request routing, etc., and the syntax isn't too bad, if you
don't mind standardized decorators.

This should be implementable within the context of Marrow HTTPd
(http://bit.ly/fLfamO) without too much difficulty.

As a side note, I'll be adding threading support to the server
(actually to marrow.server, the underlying server/protocol abstraction
m.s.http utilizes) using futures some time over the weekend, by
wrapping the async callback that calls the application in a call to an
executor, making it immune to blocking.  I suspect the overhead will
outweigh the benefit for speedy applications, though.

Testing multi-process vs. multi-threaded using 2 workers each and the
prime calculation example, threading is 1.5x slower for CPU-intensive
tasks under Python 2.7.  That's terrible.  It should be 2x; I have 2
cores.  :/

        - Alice.



Re: [PEP 444] Future- and Generator-Based Async Idea

Alice Bevan–McGregor
As a quick note, this proposal would significantly benefit from the
simplified syntax offered by PEP 380 (Syntax for Delegating to a
Subgenerator) [1] and possibly PEP 3152 (Cofunctions) [2].  The former
simplifies delegation and exception passing, and the latter simplifies
the async side of this.

Unfortunately, AFAIK, both are affected by PEP 3003 (Python Language
Moratorium) [3], which kinda sucks.

        - Alice.

[1] http://www.python.org/dev/peps/pep-0380/
[2] http://www.python.org/dev/peps/pep-3152/
[3] http://www.python.org/dev/peps/pep-3003/



Re: [PEP 444] Future- and Generator-Based Async Idea

David Stanek
In reply to this post by Alice Bevan–McGregor
On Sat, Jan 8, 2011 at 6:26 AM, Alice Bevan–McGregor
<[hidden email]> wrote:
>
> I've finished touching up the p-code illustrating my idea of using
> generators to implement async functionality within a WSGI application and
> middleware, including the idea of a wsgi2ref-supplied decorator to simplify
> middleware.
>
>        https://gist.github.com/770743
>

Under the new spec would I be forced to make my applications and
middleware this complicated? Where is the most up-to-date version of
pep444?

--
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek

Re: [PEP 444] Future- and Generator-Based Async Idea

Alice Bevan–McGregor
On 2011-01-08 06:08:57 -0800, David Stanek said:
> Under the new spec would I be forced to make my applications and
> middleware this complicated?

An application that does not utilize futures (and thus this proposal
for async) differs from the current draft as written [1] by only one
word: replace 'return' with 'yield' and you're done.
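
That is, using the draft's status/headers/body tuple:

    # Current draft:
    def hello(environ):
        return b'200 OK', [], [b'Hello, World!']

    # This proposal:
    def hello(environ):
        yield b'200 OK', [], [b'Hello, World!']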

Middleware is somewhat different (if using the decorator or PEP 380
syntax) or substantially different (if not using either of the two
mentioned simplifications), because middleware, by definition, needs to
implement both the server and application sides of the "WSGI conversation".

As a side benefit, this should further reduce the perceived misuse of
middleware [2,3] by coercing (through sheer implementation difficulty)
inappropriate middleware into being reimplemented as plain function calls.

> Where is the most up-to-date version of pep444?

I’m in the process right now of completing my transcription of [1] into
[4].  Upon completion I will re-submit it for incorporation on the
Python.org website.  (Still marked as draft, of course.)

        - Alice.

[1] https://github.com/GothAlice/wsgi2/blob/master/pep444.textile
[2] http://dirtsimple.org/2007/02/wsgi-middleware-considered-harmful.html
[3] http://mockit.blogspot.com/2009/07/its-all-wrong.html
[4] https://github.com/GothAlice/wsgi2/blob/master/pep-0444.rst



Re: [PEP 444] Future- and Generator-Based Async Idea

PJ Eby
In reply to this post by Alice Bevan–McGregor
At 03:26 AM 1/8/2011 -0800, Alice Bevan–McGregor wrote:
>Warning: this assumes we're running on bizarro-world PEP 444 that
>mandates applications are generators.  Please do not dismiss this
>idea out of hand but give it a good look and maybe some feedback.  ;)

First-glance feedback: I'm impressed.  You may have something going
here after all.  I just wish you'd sent this sooner.  ;-)

I can easily see why I didn't think of this myself: I hadn't shifted
my thinking to accommodate two important changes in the Python
environment since the first WSGI spec, circa 2003-04:

1. Coroutines and decorators are ubiquitous and non-intrusive
2. WSGI has stdlib support, and in any event it is much easier to
rely on non-stdlib packages

My major concern about the approach is still that it requires a fair
amount of overhead on the part of both app developers and middleware
developers, even if that overhead mostly consists of importing and
decorating.  (More below.)


>The second middleware demonstration (using a decorator) makes
>middleware look a lot more like an application: yielding futures, or
>a response, with the addition of yielding an application callable
>not explored in the first (long, but trivial) example.  I believe
>this should cover 99% of middleware use cases, including interactive
>debugging, request routing, etc. and the syntax isn't too bad, if
>you don't mind standardized decorators.

If we assume that the implementation would be in a wsgi2ref for
Python 3.3 and distributed standalone for 2.x, I think we can make
something work.  (In the sense of practical to implement, not
necessarily *desirable*.)

One of my goals is that it should be possible to write "async-naive"
applications and middleware, so that people who don't care about
async can ignore it.

On the application side, this is easy: a trivial decorator suffices
to translate a return into a yield.
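
Something like this would do (the name is arbitrary):

    def generatorify(app):
        # Wrap a plain "return"-style application so it satisfies the
        # generator protocol: its response is simply yielded once.
        def wrapper(environ):
            yield app(environ)
        return wrapper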

For middleware, it's not quite as simple, unless you have a pure
ingress or egress filter, since you can't simply "call" the
application.  However, a "context manager"-like pattern applies,
wherein you can simply yield the result of calling a wrapped version of the application.

Hm.  This seems to pretty much generalize to a standard
coroutine/trampoline pattern, where the server provides the
trampoline, and can provide APIs in the environ to create waitable
objects that can be yielded upward.

Actually, this is kind of like what I really wanted the futures PEP
to be about.  And it also preserves composability nicely.

In fact, it doesn't actually need any middleware decorators, if the
server provides the trampoline.

We would leave your "my_awesome_application" example intact (possibly
apart from having a friendlier API for reading from wsgi.input), but
change my_middleware as follows:

    def my_middleware(app):
        def wrapper(environ):
            # pre-response code here
            response = yield app(environ)
            # post-response code here
            yield altered_response
        return wrapper

That's it.  No decorators, no nothing.

The server-level trampoline is then just a function that looks
something like this:

     def app_trampoline(coroutine, yielded):
         if [yielded is a future of some sort]:
             [arrange to invoke 'coroutine(result)' upon completion]
             [arrange to invoke 'coroutine(None, exc_info)' upon error]
             return "pause"
         elif [yielded is a response]:
             return "return"
         elif [yielded has send/throw methods]:
             return "call"  # tell the coroutine to call it
         else:
             raise TypeError

The trampoline function is used with a coroutine class like this:

     import sys

     class Coroutine:

         def __init__(self, iterator, trampoline, callback):
             self.stack = [iterator]
             self.trampoline = trampoline
             self.callback = callback
             self()

         def __call__(self, value=None, exc_info=()):
             stack = self.stack
             while stack:
                 try:
                     it = stack[-1]
                     if exc_info:
                         try:
                             rv = it.throw(*exc_info)
                         finally:
                             exc_info = ()
                     else:
                         rv = it.send(value)
                 except BaseException:
                     value = None
                     exc_info = sys.exc_info()
                     if exc_info[0] is StopIteration:
                         # pass return value up the stack
                         value, = exc_info[1].args or (None,)
                         exc_info = ()   # but not the error
                     stack.pop()
                 else:
                     switch = self.trampoline(self, rv)
                     if switch=="pause":
                         return
                     elif switch=="call":
                         stack.append(rv)  # Call subgenerator
                         value, exc_info = None, ()
                     elif switch=="return":
                         value, exc_info = rv, ()
                         stack.pop()

             # Coroutine is entirely finished
             self.callback(value)

And run by simply calling:

     Coroutine(app(environ), app_trampoline, process_response)

Where process_response() is a function receiving a three-tuple to
process the actual result.

That's basically it.  The Coroutine class is
server/framework-independent; the minimal trampoline function is the
part the server author has to write.

The body iterator can follow a similar protocol, but the trampoline
function is different:

     def body_trampoline(coroutine, yielded):
         if type(yielded) is bytes:
              if len(coroutine.stack)==1:  # only accept from outermost middleware
                 [send the bytes out]
                 [arrange to invoke coroutine() when send is completed]
                 return "pause"
             else:
                 return "return"
         if [yielded is a future of some sort]:
             [arrange to invoke 'coroutine(result)' upon completion]
             [arrange to invoke 'coroutine(None, exc_info)' upon error]
             return "pause"
         elif [yielded has send/throw methods]:
             return "call"  # tell the coroutine to call it
         else:
             raise TypeError

So, part of the server's "process_response" callback would look like:

     Coroutine(body_iter, body_trampoline, finish_response)


You can then implement response-processing middleware like this:

     def latinize_body(body_iter):
         while True:
             chunk = yield body_iter
             if chunk is None:
                 break
             else:
                 yield piglatin(yield body_iter)

     def piglatin(app):
         def wrapper(environ):
             s, h, b = yield app(environ)
             if [suitable for processing]:
                 yield s, h, latinize_body(b)
             else:
                 yield s, h, b  # skip body processing


My overall impression is still that there's something worth
considering here, but there is still some ugly mental overhead
involved for body-processing middleware, if we want to support
pausing during the body iteration.  The latinize_body function above
isn't exactly intuitively obvious, compared to a for loop, and it
can't be replaced by one without using greenlets.

On the plus side, it can actually all be done without any decorators at all.

(The next interesting challenge would be to integrate this with
Graham's proposal for adding cleanup handlers...)


Re: [PEP 444] Future- and Generator-Based Async Idea

PJ Eby
I made a few errors in that massive post...

At 12:00 PM 1/8/2011 -0500, P.J. Eby wrote:
>My major concern about the approach is still that it requires a fair
>amount of overhead on the part of both app developers and middleware
>developers, even if that overhead mostly consists of importing and
>decorating.  (More below.)

The above turned out to be happily wrong by the end of the post,
since no decorators or imports are actually required for app and
middleware developers.


>You can then implement response-processing middleware like this:
>
>     def latinize_body(body_iter):
>         while True:
>             chunk = yield body_iter
>             if chunk is None:
>                 break
>             else:
>                 yield piglatin(yield body_iter)

The last line above is incorrect; it should've been "yield
piglatin(chunk)", i.e.:

     def latinize_body(body_iter):
         while True:
             chunk = yield body_iter
             if chunk is None:
                 break
             else:
                 yield piglatin(chunk)

It's still rather unintuitive, though.  There are also plenty of
topics left to discuss, both of the substantial and bikeshedding varieties.

One big open question still in my mind is, are these middleware
idioms any easier to get right than the WSGI 1 ones?  For things that
don't process response bodies, the answer seems to be yes: you just
stick in a "yield" and you're done.

For things that DO process response bodies, however, you have to have
ugly loops like the one above.

I suppose it could be argued that, as unintuitive as that
body-processing loop is, it's still orders of magnitude more
intuitive than a piece of WSGI 1 middleware that has to handle both
application yields and write()s!

I suppose my hesitance is due to the fact that it's not as simple as:

     return (piglatin(chunk) for chunk in body_iter)

Which is really the level of simplicity that I was looking
for.  (IOW, all response-processing middleware pays in this
slightly-added complexity to support the subset of apps and
response-processing middleware that need to wait for events during
body output.)


Re: [PEP 444] Future- and Generator-Based Async Idea

PJ Eby
In reply to this post by Alice Bevan–McGregor
At 05:39 AM 1/8/2011 -0800, Alice Bevan–McGregor wrote:
>As a quick note, this proposal would significantly benefit from the
>simplified syntax offered by PEP 380 (Syntax for Delegating to a
>Subgenerator) [1] and possibly PEP 3152 (Cofunctions) [2].  The
>former simplifies delegation and exception passing, and the latter
>simplifies the async side of this.
>
>Unfortunately, AFAIK, both are affected by PEP 3003 (Python Language
>Moratorium) [3], which kinda sucks.

Luckily, neither PEP is necessary, since we do not need to support
arbitrary protocols for the "subgenerators" being called.  This makes
it possible to simply "yield" instead of "yield from", and the
trampoline functions take care of distinguishing a terminal
("return") result from an intermediate one.

The Coroutine class I suggested, however, *does* accept explicit
returns via "raise StopIteration(value)", so it is actually fully
equivalent to supporting "yield from", as long as it's used with an
appropriate trampoline function.

(In fact, the structure of the Coroutine class I proposed was stolen
from an earlier Python-Dev post I did in an attempt to show why PEP
380 was unnecessary for doing coroutines.  ;-) )

In effect, the only thing that PEP 380 would add here is the syntax
sugar for 'raise StopIteration(value)', but you can do that with:

     def return_(value):
         raise StopIteration(value)

In any case, my suggestion doesn't need this for either apps or
response bodies, since the type of data yielded suffices to indicate
whether the value is a "return" or not.  You only need a subgenerator
to raise StopIteration if you want to return something to your caller
that *isn't* a response or body chunk.
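
For example, a subgenerator that collects the request body and hands it
back to its caller might look like this (assuming, purely for
illustration, an async read() on wsgi.input that returns a future):

     def read_body(environ):
         chunks = []
         while True:
             chunk = yield environ['wsgi.input'].read(4096)  # yields a future
             if not chunk:
                 break
             chunks.append(chunk)
         raise StopIteration(b''.join(chunks))  # "return" the body to the caller

     def application(environ):
         body = yield read_body(environ)  # the trampoline "call"s the subgenerator
         yield b'200 OK', [], [str(len(body)).encode('ascii')]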


Re: [PEP 444] Future- and Generator-Based Async Idea

Paul Davis-11
In reply to this post by Alice Bevan–McGregor
On Sat, Jan 8, 2011 at 6:26 AM, Alice Bevan–McGregor
<[hidden email]> wrote:

> Warning: this assumes we're running on bizarro-world PEP 444 that mandates
> applications are generators.  Please do not dismiss this idea out of hand
> but give it a good look and maybe some feedback.  ;)
>
> I've finished touching up the p-code illustrating my idea of using
> generators to implement async functionality within a WSGI application and
> middleware, including the idea of a wsgi2ref-supplied decorator to simplify
> middleware.
>
>        https://gist.github.com/770743
>
> [...]

For contrast, I thought it might be beneficial to show what a comparable
implementation that didn't use async might look like:

http://friendpaste.com/4lFbZsTpPGA9N9niyOt9PF

If your implementation requires that people change source code (yield
vs return) when they move code between sync and async servers, doesn't
that pretty much violate the main WSGI goal of portability?

IMO, the async middleware is considerably more complex than the current
state of things with start_response.  There is the risk of subtly
failing to invoke the generator, or invoking it too many times and
dropping part of a response.  Forcing every middleware to unwrap
iterators and handle their own StopIteration exceptions is worrisome as
well.

I can't decide whether casting the complexity of the async middleware as
a side benefit that discourages middleware authors was a joke or not.

Either way this proposal reminds me quite a bit of Duff's device [1].
On its own Duff's device is quite amusing and could even be employed
in some situations to great effect. On the other hand, any WSGI spec
has to be understandable and implementable by people from all skill
ranges. If it's a spec that only a handful of people comprehend, then I
fear its adoption would be significantly slowed in practice.


[1] http://en.wikipedia.org/wiki/Duff's_device

Re: [PEP 444] Future- and Generator-Based Async Idea

PJ Eby
At 01:24 PM 1/8/2011 -0500, Paul Davis wrote:
>For contrast, I thought it might be beneficial to show what a comparable
>implementation that didn't use async might look like:
>
>http://friendpaste.com/4lFbZsTpPGA9N9niyOt9PF

Compare your version with this one, which uses my revision of Alice's proposal:

def my_awesome_application(environ):
     # do stuff
     yield b'200 OK', [], [b'Hello, World!']

def my_middleware(app):
     def wrapper(environ):
         # maybe edit environ
         try:
             status, headers, body = yield app(environ)
             # maybe edit response:
             # body = latinize(body)
             yield status, headers, body
         except:
             # maybe handle error
             raise
         finally:
             # maybe release resources
             pass

def my_server(app, httpreq):
     environ = wsgi.make_environ(httpreq)

     def process_response(result):
         status, headers, body = result
         write_headers(httpreq, status, headers)
         Coroutine(body, body_trampoline, finish_response)

     def finish_response(result):
         # cleanup, if any
         pass

     Coroutine(app(environ), app_trampoline, process_response)


The primary differences are that the server needs to split some of
its processing into separate routines, and response-processing done
by middleware has to happen in a while loop rather than a for loop.


>If your implementation requires that people change source code (yield
>vs return) when they move code between sync and async servers, doesn't
>that pretty much violate the main WSGI goal of portability?

The idea here would be to have WSGI 2 use this protocol exclusively,
not to have two different protocols.


>IMO, the async middleware is quite more complex than the current state
>of things with start_response.

Under the above proposal, it isn't: with start_response you can't (only)
do a for loop over the response body; you have to write a loop and a
push-based handler (for write()) as well.  In this case, it is reduced
to just writing one loop.

I'm still not entirely convinced of the viability of the approach,
but I'm no longer in the "that's just crazy talk" category regarding
an async WSGI.  The cost is no longer crazy, but there's still some
cost involved, and the use case rationale hasn't improved much.

OTOH, I can now conceive of actually *using* such an async API for
something, and that's no small feat.  Before now, the idea held
virtually zero interest for me.


>Either way this proposal reminds me quite a bit of Duff's device [1].
>On its own Duff's device is quite amusing and could even be employed
>in some situations to great effect. On the other hand, any WSGI spec
>has to be understandable and implementable by people from all skill
>ranges. If it's a spec that only a handful of people comprehend, then I
>fear its adoption would be significantly slowed in practice.

Under my modification of Alice's proposal, nearly all of the
complexity involved migrates to the server, mostly in the (shareable)
Coroutine implementation.

For an async server, the "arrange for coroutine(result) to be called"
operations are generally native to async APIs, so I'd expect them to
find that simple to implement.  Synchronous servers just need to
invoke the waited-on operation synchronously, then pass the value
back into the coroutine.  (e.g. by returning "pause" from the
trampoline, then calling coroutine(value, exc_info) to resume
processing after the result is obtained.)
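
For instance, a minimal synchronous trampoline might look something like
this (a sketch, assuming concurrent.futures Futures are the only
waitable objects):

     import sys
     from concurrent.futures import Future

     def sync_app_trampoline(coroutine, yielded):
         if isinstance(yielded, Future):
             # Block until the result is available, then resume the
             # coroutine with it.  (A production server would queue this
             # resume rather than re-entering the coroutine recursively.)
             try:
                 value = yielded.result()
             except BaseException:
                 coroutine(None, sys.exc_info())
             else:
                 coroutine(value)
             return "pause"
         elif isinstance(yielded, tuple):  # a (status, headers, body) response
             return "return"
         elif hasattr(yielded, 'send') and hasattr(yielded, 'throw'):
             return "call"
         else:
             raise TypeError("unrecognized value yielded to the trampoline")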



Re: [PEP 444] Future- and Generator-Based Async Idea

Alice Bevan–McGregor
In reply to this post by PJ Eby
On 2011-01-08 09:00:18 -0800, P.J. Eby said:

> (The next interesting challenge would be to integrate this with
> Graham's proposal for adding cleanup handlers...)

class MyApplication(object):
    def __init__(self):
        pass # process startup code

    def __call__(self, environ):
        yield None # must be a generator
        pass # request code

    def __enter__(self):
        pass # request startup code

    def __exit__(self, exc_type, exc_val, exc_tb):
        pass # request shutdown code -- regardless of exceptions

We could mandate context managers!  :D  (Which means you can still wrap
a simple function in @contextmanager.)
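
e.g. (just a sketch; how the context manager would be attached to the
application object is left open here):

    from contextlib import contextmanager

    @contextmanager
    def request_lifecycle():
        # request startup code
        try:
            yield
        finally:
            pass # request shutdown code -- regardless of exceptions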

        - Alice.



Re: [PEP 444] Future- and Generator-Based Async Idea

Graham Dumpleton-2
On 9 January 2011 12:16, Alice Bevan–McGregor <[hidden email]> wrote:

> On 2011-01-08 09:00:18 -0800, P.J. Eby said:
>
>> (The next interesting challenge would be to integrate this with
>> Graham's proposal for adding cleanup handlers...)
>
> class MyApplication(object):
>   def __init__(self):
>       pass # process startup code
>
>   def __call__(self, environ):
>       yield None # must be a generator
>       pass # request code
>
>   def __enter__(self):
>       pass # request startup code
>
>   def __exit__(self, exc_type, exc_val, exc_tb):
>       pass # request shutdown code -- regardless of exceptions
>
> We could mandate context managers!  :D  (Which means you can still wrap a
> simple function in @contextmanager.)

Context managers don't solve the problem I am trying to address.  The
'with' statement doesn't apply context managers to WSGI application
objects in a way that is desirable, and using a decorator to achieve
the same means having to replace close(), which is what I am trying to
avoid because of the extra complexity that causes for WSGI middleware
just to make sure wsgi.file_wrapper works.  We want a world where it
should never be necessary for WSGI middleware, or proxy decorators, to
have to fudge up a generator and override the close() chain to add
cleanups.

Graham

>        - Alice.
>
>
> _______________________________________________
> Web-SIG mailing list
> [hidden email]
> Web SIG: http://www.python.org/sigs/web-sig
> Unsubscribe:
> http://mail.python.org/mailman/options/web-sig/graham.dumpleton%40gmail.com
>

Re: [PEP 444] Future- and Generator-Based Async Idea

Alice Bevan–McGregor
In reply to this post by Alice Bevan–McGregor
Here's what I've mutated Alex Grönholm's minimal middleware example into
(see the gist's change history for the evolution of this):

        https://gist.github.com/771398

A complete functional (as in function, not working ;) async-capable
middleware layer (that does nothing) is 12 lines.  That, I think, is a
reasonable amount of boilerplate.  Also, no decorators needed.  It's
quite readable, even the way I've compressed it.
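
Reconstructed from the patterns earlier in this thread, it's roughly the
following (the actual gist may differ in the details):

    def noop_middleware(app):
        def wrapper(environ):
            status, headers, body = yield app(environ)
            yield status, headers, noop_filter(body)
        return wrapper

    def noop_filter(body_iter):
        while True:
            chunk = yield body_iter
            if chunk is None:
                break
            yield chunk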

The class-based version is basically identical, but with added comments
explaining the assumptions this example makes and demonstrating where
the actual middleware code can be implemented for simple middleware.

        - Alice.

