PEP 3112 – Bytes literals in Python 3000
Author:
Jason Orendorff <jason.orendorff at gmail.com>
Status:
Final
Type:
Standards Track
Requires:
358
Created:
23-Feb-2007
Python-Version:
3.0
Post-History:
23-Feb-2007
Table of Contents
Abstract
Motivation
Grammar Changes
Semantics
Rationale
Reference Implementation
References
Copyright
Abstract
This PEP proposes a literal syntax for the bytes objects
introduced in PEP 358. The purpose is to provide a convenient way to
spell ASCII strings and arbitrary binary data.
Motivation
Existing spellings of an ASCII string in Python 3000 include:
bytes('Hello world', 'ascii')
'Hello world'.encode('ascii')
The proposed syntax is:
b'Hello world'
Existing spellings of an 8-bit binary sequence in Python 3000 include:
bytes([0x7f, 0x45, 0x4c, 0x46, 0x01, 0x01, 0x01, 0x00])
bytes('\x7fELF\x01\x01\x01\0', 'latin-1')
'7f454c4601010100'.decode('hex')
The proposed syntax is:
b'\x7f\x45\x4c\x46\x01\x01\x01\x00'
b'\x7fELF\x01\x01\x01\0'
In both cases, the advantages of the new syntax are brevity, some
small efficiency gain, and the detection of encoding errors at compile
time rather than at runtime. The brevity benefit is especially felt
when using the string-like methods of bytes objects:
lines = bdata.split(bytes('\n', 'ascii')) # existing syntax
lines = bdata.split(b'\n') # proposed syntax
And when converting code from Python 2.x to Python 3000:
sok.send('EXIT\r\n') # Python 2.x
sok.send('EXIT\r\n'.encode('ascii')) # Python 3000 existing
sok.send(b'EXIT\r\n') # proposed
Grammar Changes
The proposed syntax is an extension of the existing string
syntax [1].
The new syntax for strings, including the new bytes literal, is:
stringliteral: [stringprefix] (shortstring | longstring)
stringprefix: "b" | "r" | "br" | "B" | "R" | "BR" | "Br" | "bR"
shortstring: "'" shortstringitem* "'" | '"' shortstringitem* '"'
longstring: "'''" longstringitem* "'''" | '"""' longstringitem* '"""'
shortstringitem: shortstringchar | escapeseq
longstringitem: longstringchar | escapeseq
shortstringchar:
<any source character except "\" or newline or the quote>
longstringchar: <any source character except "\">
escapeseq: "\" NL
| "\\" | "\'" | '\"'
| "\a" | "\b" | "\f" | "\n" | "\r" | "\t" | "\v"
| "\ooo" | "\xhh"
| "\uxxxx" | "\Uxxxxxxxx" | "\N{name}"
The following additional restrictions apply only to bytes literals
(stringliteral tokens with b or B in the
stringprefix):
Each shortstringchar or longstringchar must be a character
between 1 and 127 inclusive, regardless of any encoding
declaration [2] in the source file.
The Unicode-specific escape sequences \uxxxx,
\Uxxxxxxxx, and \N{name} are unrecognized in
Python 2.x and forbidden in Python 3000.
Adjacent bytes literals are subject to the same concatenation rules as
adjacent string literals [3]. A bytes literal adjacent to a
string literal is an error.
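As a quick illustrative sketch of these rules, using the syntax as it eventually shipped in CPython 3 (where mixing the two literal kinds is rejected at compile time):

header = b'\x7fELF' b'\x01\x01\x01\x00'   # adjacent bytes literals concatenate
assert header == b'\x7fELF\x01\x01\x01\x00'

# Mixing a bytes literal with a string literal is a compile-time error:
#     b'abc' 'def'   ->  SyntaxError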
Semantics
Each evaluation of a bytes literal produces a new bytes object.
The bytes in the new object are the bytes represented by the
shortstringitem or longstringitem parts of the literal, in the
same order.
Rationale
The proposed syntax provides a cleaner migration path from Python 2.x
to Python 3000 for most code involving 8-bit strings. Preserving the
old 8-bit meaning of a string literal is usually as simple as adding a
b prefix. The one exception is Python 2.x strings containing
bytes >127, which must be rewritten using escape sequences.
Transcoding a source file from one encoding to another, and fixing up
the encoding declaration, should preserve the meaning of the program.
Python 2.x non-Unicode strings violate this principle; Python 3000
bytes literals shouldn’t.
A string literal with a b in the prefix is always a syntax error
in Python 2.5, so this syntax can be introduced in Python 2.6, along
with the bytes type.
A bytes literal produces a new object each time it is evaluated, like
list displays and unlike string literals. This is necessary because
bytes literals, like lists and unlike strings, are
mutable [4].
Reference Implementation
Thomas Wouters has checked an implementation into the Py3K branch,
r53872.
References
[1]
http://docs.python.org/reference/lexical_analysis.html#string-literals
[2]
http://docs.python.org/reference/lexical_analysis.html#encoding-declarations
[3]
http://docs.python.org/reference/lexical_analysis.html#string-literal-concatenation
[4]
https://mail.python.org/pipermail/python-3000/2007-February/005779.html
Copyright
This document has been placed in the public domain.
PEP 3113 – Removal of Tuple Parameter Unpacking
Author:
Brett Cannon <brett at python.org>
Status:
Final
Type:
Standards Track
Created:
02-Mar-2007
Python-Version:
3.0
Post-History:
Table of Contents
Abstract
Why They Should Go
Introspection Issues
No Loss of Abilities If Removed
Exception To The Rule
Uninformative Error Messages
Little Usage
Why They Should (Supposedly) Stay
Practical Use
Self-Documentation For Parameters
Transition Plan
References
Copyright
Abstract
Tuple parameter unpacking is the use of a tuple as a parameter in a
function signature so as to have a sequence argument automatically
unpacked. An example is:
def fxn(a, (b, c), d):
    pass
The use of (b, c) in the signature requires that the second
argument to the function be a sequence of length two (e.g.,
[42, -13]). When such a sequence is passed it is unpacked and
has its values assigned to the parameters, just as if the statement
b, c = [42, -13] had been executed in the parameter.
Unfortunately this feature of Python’s rich function signature
abilities, while handy in some situations, causes more issues than
it is worth. Thus this PEP proposes the removal of tuple parameter
unpacking from the language in Python 3.0.
Why They Should Go
Introspection Issues
Python has very powerful introspection capabilities. These extend to
function signatures. There are no hidden details as to what a
function’s call signature is. In general it is fairly easy to figure
out various details about a function’s signature by viewing the
function object and various attributes on it (including the function’s
func_code attribute).
But there is great difficulty when it comes to tuple parameters. The
existence of a tuple parameter is denoted by its name being made of a
. and a number in the co_varnames attribute of the function’s
code object. This allows the tuple argument to be bound to a name
that only the bytecode is aware of and cannot be typed in Python
source. But this does not specify the format of the tuple: its
length, whether there are nested tuples, etc.
In order to get all of the details about the tuple from the function
one must analyse the bytecode of the function. This is because the
first bytecode in the function literally translates into the tuple
argument being unpacked. Assuming the tuple parameter is
named .1 and is expected to unpack to variables spam and
monty (meaning it is the tuple (spam, monty)), the first
bytecode in the function will be for the statement
spam, monty = .1. This means that to know all of the details of
the tuple parameter one must look at the initial bytecode of the
function to detect tuple unpacking for parameters formatted as
\.\d+ and deduce any and all information about the expected
argument. Bytecode analysis is how the inspect.getargspec
function is able to provide information on tuple parameters. This is
not easy to do and is burdensome on introspection tools as they must
know how Python bytecode works (an otherwise unneeded burden as all
other types of parameters do not require knowledge of Python
bytecode).
The difficulty of analysing bytecode notwithstanding, there is
another issue with the dependency on using Python bytecode.
IronPython [3] does not use Python’s bytecode. Because it
is based on the .NET framework, it instead stores MSIL [4] in the
func_code.co_code attribute of the function. This fact prevents
the inspect.getargspec function from working when run under
IronPython. It is unknown whether other Python implementations are
affected, but it is reasonable to assume they are if the implementation
is not just a re-implementation of the Python virtual machine.
No Loss of Abilities If Removed
As mentioned in Introspection Issues, to handle tuple parameters
the function’s bytecode starts with the bytecode required to unpack
the argument into the proper parameter names. This means that there
is no special support required to implement tuple parameters and thus
there is no loss of abilities if they were to be removed, only a
possible convenience (which is addressed in
Why They Should (Supposedly) Stay).
The example function at the beginning of this PEP could easily be
rewritten as:
def fxn(a, b_c, d):
    b, c = b_c
    pass
and in no way lose functionality.
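As a small, hedged sketch of the introspection point: the rewritten form is fully visible to ordinary introspection tools, with no bytecode analysis required (the function and its body here are hypothetical):

import inspect

def fxn(a, b_c, d):
    b, c = b_c        # unpack the former tuple parameter explicitly
    return a, b, c, d

print(inspect.signature(fxn))   # prints: (a, b_c, d)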
Exception To The Rule
When looking at the various types of parameters that a Python function
can have, one will notice that tuple parameters tend to be an
exception rather than the rule.
Consider PEP 3102 (keyword-only arguments) and PEP 3107 (function
annotations). Both PEPs have been accepted and
introduce new functionality within a function’s signature. And yet
for both PEPs the new feature cannot be applied to tuple parameters as
a whole. PEP 3102 has no support for tuple parameters at all (which
makes sense as there is no way to reference a tuple parameter by
name). PEP 3107 allows annotations for each item within the tuple
(e.g., (x:int, y:int)), but not the whole tuple (e.g.,
(x, y):int).
The existence of tuple parameters also places sequence objects
separately from mapping objects in a function signature. There is no
way to pass in a mapping object (e.g., a dict) as a parameter and have
it unpack in the same fashion as a sequence does into a tuple
parameter.
Uninformative Error Messages
Consider the following function:
def fxn((a, b), (c, d)):
    pass
If called as fxn(1, (2, 3)) one is given the error message
TypeError: unpack non-sequence. This error message in no way
tells you which tuple was not unpacked properly. There is also no
indication that this was a result that occurred because of the
arguments. Other error messages regarding arguments to functions
explicitly state their relation to the signature:
TypeError: fxn() takes exactly 2 arguments (0 given), etc.
Little Usage
While an informal poll of the handful of Python programmers I know
personally and from the PyCon 2007 sprint indicates that a huge majority
of people do not know of this feature and the rest just do not use it,
some hard numbers are needed to back up the claim that the feature is
not heavily used.
Iterating over every line in the Lib/ directory of Python’s code
repository with the regular expression ^\s*def\s*\w+\s*\( to
detect function and method definitions yielded 22,252 matches in
the trunk.
Tacking on .*,\s*\( to find def statements that contained a
tuple parameter, only 41 matches were found. This means that only
about 0.18% of def statements seem to use a tuple parameter.
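A rough sketch of the kind of scan described, for anyone who wants to repeat the count against their own checkout (the Lib/ path, file handling, and exact match semantics here are assumptions, so the numbers will not match the PEP's trunk snapshot exactly):

import re
from pathlib import Path

def_re = re.compile(r'^\s*def\s*\w+\s*\(')            # any def statement
tuple_re = re.compile(r'^\s*def\s*\w+\s*\(.*,\s*\(')  # def with a tuple parameter

total = with_tuple = 0
for path in Path('Lib').rglob('*.py'):                # point this at a Python 2.x Lib/ tree
    for line in path.read_text(errors='replace').splitlines():
        if def_re.match(line):
            total += 1
            if tuple_re.match(line):
                with_tuple += 1

print(total, with_tuple)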
Why They Should (Supposedly) Stay
Practical Use
In certain instances tuple parameters can be useful. A common example
is code that expects a two-item tuple that represents a Cartesian
point. While it is true that it is nice to have the unpacking of the
x and y coordinates done for you, the argument is that this small amount
of practical usefulness is heavily outweighed by other issues pertaining
to tuple parameters. And as shown in
No Loss Of Abilities If Removed, their use is purely practical and
in no way provides a unique ability that cannot be handled in other
ways very easily.
Self-Documentation For Parameters
It has been argued that tuple parameters provide a way of
self-documentation for parameters that are expected to be of a certain
sequence format. Using our Cartesian point example from
Practical Use, seeing (x, y) as a parameter in a function makes
it obvious that a tuple of length two is expected as an argument for
that parameter.
But Python provides several other ways to document what parameters are
for. Documentation strings are meant to provide enough information
needed to explain what arguments are expected. Tuple parameters might
tell you the expected length of a sequence argument, but they do not tell
you what that data will be used for. One must also read the docstring
to know what other arguments are expected if not all parameters are
tuple parameters.
Function annotations (which do not work with tuple parameters) can
also supply documentation. Because annotations can be of any form,
what was once a tuple parameter can be a single argument parameter
with an annotation of tuple, tuple(2), Cartesian point,
(x, y), etc. Annotations provide great flexibility for
documenting what an argument is expected to be for a parameter,
including being a sequence of a certain length.
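A hedged sketch of the annotation alternative (the annotation text here is just an illustration; PEP 3107 attaches no semantics to it):

def distance(point: 'tuple of (x, y)', scale: float = 1.0) -> float:
    x, y = point
    return ((x * x + y * y) ** 0.5) * scale

print(distance((3, 4)))   # 5.0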
Transition Plan
To transition Python 2.x code to 3.x where tuple parameters are
removed, two steps are suggested. First, the proper warning is to be
emitted when Python’s compiler comes across a tuple parameter in
Python 2.6. This will be treated like any other syntactic change that
is to occur in Python 3.0 compared to Python 2.6.
Second, the 2to3 refactoring tool [1] will gain a fixer
[2] for translating tuple parameters to being a single parameter
that is unpacked as the first statement in the function. The name of
the new parameter will be changed. The new parameter will then be
unpacked into the names originally used in the tuple parameter. This
means that the following function:
def fxn((a, (b, c))):
    pass
will be translated into:
def fxn(a_b_c):
    (a, (b, c)) = a_b_c
    pass
As tuple parameters are used by lambdas because of the single
expression limitation, they must also be supported. This is done by
having the expected sequence argument bound to a single parameter and
then indexing on that parameter:
lambda (x, y): x + y
will be translated into:
lambda x_y: x_y[0] + x_y[1]
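As a small runnable check of the translated form (the Python 2.x original is a syntax error in Python 3, so only the translation is shown here):

add = lambda x_y: x_y[0] + x_y[1]
print(add((2, 3)))   # 5, the same result the Python 2.x lambda (x, y): x + y would give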
References
[1]
2to3 refactoring tool
(http://svn.python.org/view/sandbox/trunk/2to3/)
[2]
2to3 fixer
(http://svn.python.org/view/sandbox/trunk/2to3/fixes/fix_tuple_params.py)
[3]
IronPython
(http://www.codeplex.com/Wiki/View.aspx?ProjectName=IronPython)
[4]
Microsoft Intermediate Language
(http://msdn.microsoft.com/library/en-us/cpguide/html/cpconmicrosoftintermediatelanguagemsil.asp?frame=true)
Copyright
This document has been placed in the public domain.
PEP 3114 – Renaming iterator.next() to iterator.__next__()
Author:
Ka-Ping Yee <ping at zesty.ca>
Status:
Final
Type:
Standards Track
Created:
04-Mar-2007
Python-Version:
3.0
Post-History:
Table of Contents
Abstract
Names With Double Underscores
Double-Underscore Methods and Built-In Functions
Previous Proposals
Objections
Transition Plan
Approval
Implementation
References
Copyright
Abstract
The iterator protocol in Python 2.x consists of two methods:
__iter__() called on an iterable object to yield an iterator, and
next() called on an iterator object to yield the next item in the
sequence. Using a for loop to iterate over an iterable object
implicitly calls both of these methods. This PEP proposes that the
next method be renamed to __next__, consistent with all the
other protocols in Python in which a method is implicitly called as
part of a language-level protocol, and that a built-in function named
next be introduced to invoke the __next__ method, consistent with
the manner in which other protocols are explicitly invoked.
Names With Double Underscores
In Python, double underscores before and after a name are used to
distinguish names that belong to the language itself. Attributes and
methods that are implicitly used or created by the interpreter employ
this naming convention; some examples are:
__file__ - an attribute automatically created by the interpreter
__dict__ - an attribute with special meaning to the interpreter
__init__ - a method implicitly called by the interpreter
Note that this convention applies to methods such as __init__ that
are explicitly defined by the programmer, as well as attributes such as
__file__ that can only be accessed by naming them explicitly, so it
includes names that are used or created by the interpreter.
(Not all things that are called “protocols” are made of methods with
double-underscore names. For example, the __contains__ method has
double underscores because the language construct x in y implicitly
calls __contains__. But even though the read method is part of
the file protocol, it does not have double underscores because there is
no language construct that implicitly invokes x.read().)
The use of double underscores creates a separate namespace for names
that are part of the Python language definition, so that programmers
are free to create variables, attributes, and methods that start with
letters, without fear of silently colliding with names that have a
language-defined purpose. (Colliding with reserved keywords is still
a concern, but at least this will immediately yield a syntax error.)
The naming of the next method on iterators is an exception to
this convention. Code that nowhere contains an explicit call to a
next method can nonetheless be silently affected by the presence
of such a method. Therefore, this PEP proposes that iterators should
have a __next__ method instead of a next method (with no
change in semantics).
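A minimal sketch of an iterator written against the proposed spelling (this example class is hypothetical, but runs as-is on Python 3):

class Countdown:
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return self
    def __next__(self):
        # the renamed protocol method; raises StopIteration when exhausted
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n + 1

print(list(Countdown(3)))   # [3, 2, 1]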
Double-Underscore Methods and Built-In Functions
The Python language defines several protocols that are implemented or
customized by defining methods with double-underscore names. In each
case, the protocol is provided by an internal method implemented as a
C function in the interpreter. For objects defined in Python, this
C function supports customization by implicitly invoking a Python method
with a double-underscore name (it often does a little bit of additional
work beyond just calling the Python method).
Sometimes the protocol is invoked by a syntactic construct:
x[y] –> internal tp_getitem –> x.__getitem__(y)
x + y –> internal nb_add –> x.__add__(y)
-x –> internal nb_negative –> x.__neg__()
Sometimes there is no syntactic construct, but it is still useful to be
able to explicitly invoke the protocol. For such cases Python offers a
built-in function of the same name but without the double underscores.
len(x) –> internal sq_length –> x.__len__()
hash(x) –> internal tp_hash –> x.__hash__()
iter(x) –> internal tp_iter –> x.__iter__()
Following this pattern, the natural way to handle next is to add a
next built-in function that behaves in exactly the same fashion.
next(x) –> internal tp_iternext –> x.__next__()
Further, it is proposed that the next built-in function accept a
sentinel value as an optional second argument, following the style of
the getattr and iter built-in functions. When called with two
arguments, next catches the StopIteration exception and returns
the sentinel value instead of propagating the exception. This creates
a nice duality between iter and next:
iter(function, sentinel) <–> next(iterator, sentinel)
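A rough pure-Python model of the proposed built-in, assuming the two-argument sentinel behaviour described above (the real built-in is implemented in C; the name next_ here is only to avoid shadowing it):

_MISSING = object()

def next_(iterator, default=_MISSING):
    try:
        return type(iterator).__next__(iterator)   # invoke the protocol method
    except StopIteration:
        if default is _MISSING:
            raise
        return default

it = iter([1])
print(next_(it))           # 1
print(next_(it, 'done'))   # 'done' -- StopIteration was caught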
Previous Proposals
This proposal is not a new idea. The idea proposed here was supported
by the BDFL on python-dev [1] and is even mentioned in the original
iterator PEP, PEP 234:
(In retrospect, it might have been better to go for __next__()
and have a new built-in, next(it), which calls it.__next__().
But alas, it's too late; this has been deployed in Python 2.2
since December 2001.)
Objections
There have been a few objections to the addition of more built-ins.
In particular, Martin von Loewis writes [2]:
I dislike the introduction of more builtins unless they have a true
generality (i.e. are likely to be needed in many programs). For this
one, I think the normal usage of __next__ will be with a for loop, so
I don't think one would often need an explicit next() invocation.
It is also not true that most protocols are explicitly invoked through
builtin functions. Instead, most protocols are can be explicitly invoked
through methods in the operator module. So following tradition, it
should be operator.next.
...
As an alternative, I propose that object grows a .next() method,
which calls __next__ by default.
Transition Plan
Two additional transformations will be added to the 2to3 translation
tool [3]:
Method definitions named next will be renamed to __next__.
Explicit calls to the next method will be replaced with calls
to the built-in next function. For example, x.next() will
become next(x).
Collin Winter looked into the possibility of automatically deciding
whether to perform the second transformation depending on the presence
of a module-level binding to next [4] and found that it would be
“ugly and slow”. Instead, the translation tool will emit warnings
upon detecting such a binding. Collin has proposed warnings for the
following conditions [5]:
Module-level assignments to next.
Module-level definitions of a function named next.
Module-level imports of the name next.
Assignments to __builtin__.next.
Approval
This PEP was accepted by Guido on March 6, 2007 [6].
Implementation
A patch with the necessary changes (except the 2to3 tool) was written
by Georg Brandl and committed as revision 54910.
References
[1]
Single- vs. Multi-pass iterability (Guido van Rossum)
https://mail.python.org/pipermail/python-dev/2002-July/026814.html
[2]
PEP: rename it.next() to it.__next__()… (Martin von Loewis)
https://mail.python.org/pipermail/python-3000/2007-March/005965.html
[3]
2to3 refactoring tool
https://github.com/python/cpython/tree/ef04c44e29a8276a484f58d03a75a2dec516302d/Lib/lib2to3
[4]
PEP: rename it.next() to it.__next__()… (Collin Winter)
https://mail.python.org/pipermail/python-3000/2007-March/006020.html
[5]
PEP 3113 transition plan
https://mail.python.org/pipermail/python-3000/2007-March/006044.html
[6]
PEP: rename it.next() to it.__next__()… (Guido van Rossum)
https://mail.python.org/pipermail/python-3000/2007-March/006027.html
Copyright
This document has been placed in the public domain.
PEP 3115 – Metaclasses in Python 3000
Author:
Talin <viridia at gmail.com>
Status:
Final
Type:
Standards Track
Created:
07-Mar-2007
Python-Version:
3.0
Post-History:
11-Mar-2007, 14-Mar-2007
Table of Contents
Abstract
Rationale
Specification
Invoking the Metaclass
Example
Sample Implementation
Alternate Proposals
Backwards Compatibility
References
Copyright
Abstract
This PEP proposes changing the syntax for declaring metaclasses,
and alters the semantics for how classes with metaclasses are
constructed.
Rationale
There are two rationales for this PEP, both of which are somewhat
subtle.
The primary reason for changing the way metaclasses work is that
there are a number of interesting use cases that require the
metaclass to get involved earlier in the class construction process
than is currently possible. Currently, the metaclass mechanism is
essentially a post-processing step. With the advent of class
decorators, many of these post-processing chores can be taken over
by the decorator mechanism.
In particular, there is an important body of use cases where it
would be useful to preserve the order in which class members are
declared. Ordinary Python objects store their members in a
dictionary, in which ordering is unimportant, and members are
accessed strictly by name. However, Python is often used to
interface with external systems in which the members are organized
according to an implicit ordering. Examples include declaration of C
structs; COM objects; automatic translation of Python classes into
IDL or database schemas, such as used in an ORM; and so on.
In such cases, it would be useful for a Python programmer to specify
such ordering directly using the declaration order of class members.
Currently, such orderings must be specified explicitly, using some
other mechanism (see the ctypes module for an example.)
Unfortunately, the current method for declaring a metaclass does
not allow for this, since the ordering information has already been
lost by the time the metaclass comes into play. By allowing the
metaclass to get involved in the class construction process earlier,
the new system allows the ordering or other early artifacts of
construction to be preserved and examined.
The proposed metaclass mechanism also supports a number of other
interesting use cases beyond preserving the ordering of declarations.
One use case is to insert symbols into the namespace of the class
body which are only valid during class construction. An example of
this might be “field constructors”, small functions that are used in
the creation of class members. Another interesting possibility is
supporting forward references, i.e. references to Python
symbols that are declared further down in the class body.
The other, weaker, rationale is purely cosmetic: The current method
for specifying a metaclass is by assignment to the special variable
__metaclass__, which is considered by some to be aesthetically less
than ideal. Others disagree strongly with that opinion. This PEP
will not address this issue, other than to note it, since aesthetic
debates cannot be resolved via logical proofs.
Specification
In the new model, the syntax for specifying a metaclass is via a
keyword argument in the list of base classes:
class Foo(base1, base2, metaclass=mymeta):
    ...
Additional keywords will also be allowed here, and will be passed to
the metaclass, as in the following example:
class Foo(base1, base2, metaclass=mymeta, private=True):
    ...
Note that this PEP makes no attempt to define what these other
keywords might be - that is up to metaclass implementors to
determine.
More generally, the parameter list passed to a class definition will
now support all of the features of a function call, meaning that you
can now use *args and **kwargs-style arguments in the class base
list:
class Foo(*bases, **kwds):
    ...
Invoking the Metaclass
In the current metaclass system, the metaclass object can be any
callable type. This does not change, however in order to fully
exploit all of the new features the metaclass will need to have an
extra attribute which is used during class pre-construction.
This attribute is named __prepare__, which is invoked as a function
before the evaluation of the class body. The __prepare__ function
takes two positional arguments, and an arbitrary number of keyword
arguments. The two positional arguments are:
name
the name of the class being created.
bases
the list of base classes.
The interpreter always tests for the existence of __prepare__ before
calling it; if it is not present, then a regular dictionary is used,
as illustrated in the following Python snippet.
def prepare_class(name, *bases, metaclass=None, **kwargs):
    if metaclass is None:
        metaclass = compute_default_metaclass(bases)
    prepare = getattr(metaclass, '__prepare__', None)
    if prepare is not None:
        return prepare(name, bases, **kwargs)
    else:
        return dict()
The example above illustrates how the arguments to ‘class’ are
interpreted. The class name is the first argument, followed by
an arbitrary length list of base classes. After the base classes,
there may be one or more keyword arguments, one of which can be
metaclass. Note that the metaclass argument is not included
in kwargs, since it is filtered out by the normal parameter
assignment algorithm. (Note also that metaclass is a
keyword-only argument as per PEP 3102.)
Even though __prepare__ is not required, the default metaclass
(‘type’) implements it, for the convenience of subclasses calling
it via super().
__prepare__ returns a dictionary-like object which is used to store
the class member definitions during evaluation of the class body.
In other words, the class body is evaluated as a function block
(just like it is now), except that the local variables dictionary
is replaced by the dictionary returned from __prepare__. This
dictionary object can be a regular dictionary or a custom mapping
type.
This dictionary-like object is not required to support the full
dictionary interface. A dictionary which supports a limited set of
dictionary operations will restrict what kinds of actions can occur
during evaluation of the class body. A minimal implementation might
only support adding and retrieving values from the dictionary - most
class bodies will do no more than that during evaluation. For some
classes, it may be desirable to support deletion as well. Many
metaclasses will need to make a copy of this dictionary afterwards,
so iteration or other means for reading out the dictionary contents
may also be useful.
The __prepare__ method will most often be implemented as a class
method rather than an instance method because it is called before
the metaclass instance (i.e. the class itself) is created.
Once the class body has finished evaluating, the metaclass will be
called (as a callable) with the class dictionary, which is no
different from the current metaclass mechanism.
Typically, a metaclass will create a custom dictionary - either a
subclass of dict, or a wrapper around it - that will contain
additional properties that are set either before or during the
evaluation of the class body. Then in the second phase, the
metaclass can use these additional properties to further customize
the class.
An example would be a metaclass that uses information about the
ordering of member declarations to create a C struct. The metaclass
would provide a custom dictionary that simply keeps a record of the
order of insertions. This does not need to be a full ‘ordered dict’
implementation, but rather just a Python list of (key,value) pairs
that is appended to for each insertion.
Note that in such a case, the metaclass would be required to deal
with the possibility of duplicate keys, but in most cases that is
trivial. The metaclass can use the first declaration, the last,
combine them in some fashion, or simply throw an exception. It’s up
to the metaclass to decide how it wants to handle that case.
Example
Here’s a simple example of a metaclass which creates a list of
the names of all class members, in the order that they were
declared:
# The custom dictionary
class member_table(dict):
    def __init__(self):
        self.member_names = []

    def __setitem__(self, key, value):
        # if the key is not already defined, add to the
        # list of keys.
        if key not in self:
            self.member_names.append(key)
        # Call superclass
        dict.__setitem__(self, key, value)

# The metaclass
class OrderedClass(type):
    # The prepare function
    @classmethod
    def __prepare__(metacls, name, bases):  # No keywords in this case
        return member_table()

    # The metaclass invocation
    def __new__(cls, name, bases, classdict):
        # Note that we replace the classdict with a regular
        # dict before passing it to the superclass, so that we
        # don't continue to record member names after the class
        # has been created.
        result = type.__new__(cls, name, bases, dict(classdict))
        result.member_names = classdict.member_names
        return result

class MyClass(metaclass=OrderedClass):
    # method1 goes in array element 0
    def method1(self):
        pass

    # method2 goes in array element 1
    def method2(self):
        pass
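A quick usage sketch. Note that in CPython 3 as it shipped, the class body namespace also receives implicit entries such as __module__ and __qualname__ before the programmer's own definitions, so those names show up in member_names as well:

print(MyClass.member_names)
# e.g. ['__module__', '__qualname__', 'method1', 'method2']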
Sample Implementation
Guido van Rossum has created a patch which implements the new
functionality: https://bugs.python.org/issue1681101
Alternate Proposals
Josiah Carlson proposed using the name ‘type’ instead of
‘metaclass’, on the theory that what is really being specified is
the type of the type. While this is technically correct, it is also
confusing from the point of view of a programmer creating a new
class. From the application programmer’s point of view, the ‘type’
that they are interested in is the class that they are writing; the
type of that type is the metaclass.
There were some objections in the discussion to the ‘two-phase’
creation process, where the metaclass is invoked twice, once to
create the class dictionary and once to ‘finish’ the class. Some
people felt that these two phases should be completely separate, in
that there ought to be separate syntax for specifying the custom
dict as for specifying the metaclass. However, in most cases, the
two will be intimately tied together, and the metaclass will most
likely have an intimate knowledge of the internal details of the
class dict. Requiring the programmer to ensure that the correct dict
type and the correct metaclass type are used together creates an
additional and unneeded burden on the programmer.
Another good suggestion was to simply use an ordered dict for all
classes, and skip the whole ‘custom dict’ mechanism. This was based
on the observation that most use cases for a custom dict were for
the purposes of preserving order information. However, this idea has
several drawbacks, first because it means that an ordered dict
implementation would have to be added to the set of built-in types
in Python, and second because it would impose a slight speed (and
complexity) penalty on all class declarations. Later, several people
came up with ideas for use cases for custom dictionaries other
than preserving field orderings, so this idea was dropped.
Backwards Compatibility
It would be possible to leave the existing __metaclass__ syntax in
place. Alternatively, it would not be too difficult to modify the
syntax rules of the Py3K translation tool to convert from the old to
the new syntax.
References
[1] [Python-3000] Metaclasses in Py3K (original proposal)
https://mail.python.org/pipermail/python-3000/2006-December/005030.html
[2] [Python-3000] Metaclasses in Py3K (Guido’s suggested syntax)
https://mail.python.org/pipermail/python-3000/2006-December/005033.html
[3] [Python-3000] Metaclasses in Py3K (Objections to two-phase init)
https://mail.python.org/pipermail/python-3000/2006-December/005108.html
[4] [Python-3000] Metaclasses in Py3K (Always use an ordered dict)
https://mail.python.org/pipermail/python-3000/2006-December/005118.html
Copyright
This document has been placed in the public domain.
PEP 3117 – Postfix type declarations
Author:
Georg Brandl <georg at python.org>
Status:
Rejected
Type:
Standards Track
Created:
01-Apr-2007
Python-Version:
3.0
Post-History:
Table of Contents
Abstract
Rationale
Specification
Unicode replacement units
The typedef statement
Example
Compatibility issues
Rejection
References
Acknowledgements
Copyright
Abstract
This PEP proposes the addition of a postfix type declaration syntax to
Python. It also specifies a new typedef statement which is used to create
new mappings between types and declarators.
Its acceptance will greatly enhance the Python user experience as well as
eliminate one of the warts that deter users of other programming languages from
switching to Python.
Rationale
Python has long suffered from the lack of explicit type declarations. Being one
of the few aspects in which the language deviates from its Zen, this wart has
sparked many a discussion between Python heretics and members of the PSU (for
a few examples, see [EX1], [EX2] or [EX3]), and it has also made
large-scale enterprise success unlikely.
However, if one wants to put an end to this misery, a decent Pythonic syntax
must be found. In almost all languages that have them, type declarations lack
this quality: they are verbose, often needing multiple words for a single
type, or they are hard to comprehend (e.g., a certain language uses completely
unrelated [1] adjectives like dim for type declaration).
Therefore, this PEP combines the move to type declarations with another bold
move that will once again prove that Python is not only future-proof but
future-embracing: the introduction of Unicode characters as an integral
constituent of source code.
Unicode makes it possible to express much more with far fewer characters, which
is in accordance with the Zen (“Readability counts.”). Additionally, it
eliminates the need for a separate type declaration statement, and last but not
least, it makes Python measure up to Perl 6, which already uses Unicode for its
operators. [2]
Specification
When the type declaration mode is in operation, the grammar is changed so that
each NAME must consist of two parts: a name and a type declarator, which is
exactly one Unicode character.
The declarator uniquely specifies the type of the name, and if it occurs on the
left hand side of an expression, this type is enforced: an InquisitionError
exception is raised if the returned type doesn’t match the declared type. [3]
Also, function call result types have to be specified. If the result of the call
does not have the declared type, an InquisitionError is raised. Caution: the
declarator for the result should not be confused with the declarator for the
function object (see the example below).
Type declarators after names that are only read, not assigned to, are not strictly
necessary but enforced anyway (see the Python Zen: “Explicit is better than
implicit.”).
The mapping between types and declarators is not static. It can be completely
customized by the programmer, but for convenience there are some predefined
mappings for some built-in types:
Type
Declarator
object
� (REPLACEMENT CHARACTER)
int
ℕ (DOUBLE-STRUCK CAPITAL N)
float
℮ (ESTIMATED SYMBOL)
bool
✓ (CHECK MARK)
complex
ℂ (DOUBLE-STRUCK CAPITAL C)
str
✎ (LOWER RIGHT PENCIL)
unicode
✒ (BLACK NIB)
tuple
⒯ (PARENTHESIZED LATIN SMALL LETTER T)
list
♨ (HOT SPRINGS)
dict
⧟ (DOUBLE-ENDED MULTIMAP)
set
∅ (EMPTY SET) (Note: this is also for full sets)
frozenset
☃ (SNOWMAN)
datetime
⌚ (WATCH)
function
ƛ (LATIN SMALL LETTER LAMBDA WITH STROKE)
generator
⚛ (ATOM SYMBOL)
Exception
⌁ (ELECTRIC ARROW)
The declarator for the None type is a zero-width space.
These characters should be obvious and easy to remember and type for every
programmer.
Unicode replacement units
Since even in our modern, globalized world there are still some old-fashioned
rebels who can’t or don’t want to use Unicode in their source code, and since
Python is a forgiving language, a fallback is provided for those:
Instead of the single Unicode character, they can type name${UNICODE NAME OF
THE DECLARATOR}$. For example, these two function definitions are equivalent:
def fooƛ(xℂ):
    return None
and
def foo${LATIN SMALL LETTER LAMBDA WITH STROKE}$(x${DOUBLE-STRUCK CAPITAL C}$):
    return None${ZERO WIDTH NO-BREAK SPACE}$
This is still easy to read and makes the full power of type-annotated Python
available to ASCII believers.
The typedef statement
The mapping between types and declarators can be extended with this new statement.
The syntax is as follows:
typedef_stmt ::= "typedef" expr DECLARATOR
where expr resolves to a type object. For convenience, the typedef statement
can also be mixed with the class statement for new classes, like so:
typedef class Foo☺(object�):
    pass
Example
This is the standard os.path.normpath function, converted to type declaration
syntax:
def normpathƛ(path✎)✎:
    """Normalize path, eliminating double slashes, etc."""
    if path✎ == '':
        return '.'
    initial_slashes✓ = path✎.startswithƛ('/')✓
    # POSIX allows one or two initial slashes, but treats three or more
    # as single slash.
    if (initial_slashes✓ and
        path✎.startswithƛ('//')✓ and not path✎.startswithƛ('///')✓)✓:
        initial_slashesℕ = 2
    comps♨ = path✎.splitƛ('/')♨
    new_comps♨ = []♨
    for comp✎ in comps♨:
        if comp✎ in ('', '.')⒯:
            continue
        if (comp✎ != '..' or (not initial_slashesℕ and not new_comps♨)✓ or
            (new_comps♨ and new_comps♨[-1]✎ == '..')✓)✓:
            new_comps♨.appendƛ(comp✎)
        elif new_comps♨:
            new_comps♨.popƛ()✎
    comps♨ = new_comps♨
    path✎ = '/'.join(comps♨)✎
    if initial_slashesℕ:
        path✎ = '/'*initial_slashesℕ + path✎
    return path✎ or '.'
As you can clearly see, the type declarations add expressiveness, while at the
same time they make the code look much more professional.
Compatibility issues
To enable type declaration mode, one has to write:
from __future__ import type_declarations
which enables Unicode parsing of the source [4], makes typedef a keyword
and enforces correct types for all assignments and function calls.
Rejection
After careful consideration, much soul-searching, gnashing of teeth and rending
of garments, it has been decided to reject this PEP.
References
[EX1]
https://mail.python.org/pipermail/python-list/2003-June/210588.html
[EX2]
https://mail.python.org/pipermail/python-list/2000-May/034685.html
[EX3]
http://groups.google.com/group/comp.lang.python/browse_frm/thread/6ae8c6add913635a/de40d4ffe9bd4304?lnk=gst&q=type+declarations&rnum=6
[1]
Though, if you know the language in question, it may not be that unrelated.
[2]
Well, it would, if there was a Perl 6.
[3]
Since the name TypeError is already in use, this name has been chosen
for obvious reasons.
[4]
The encoding in which the code is written is read from a standard coding
cookie. There will also be an autodetection mechanism, invoked by from
__future__ import encoding_hell.
Acknowledgements
Many thanks go to Armin Ronacher, Alexander Schremmer and Marek Kubica who helped
find the most suitable and mnemonic declarator for built-in types.
Thanks also to the Unicode Consortium for including all those useful characters
in the Unicode standard.
Copyright
This document has been placed in the public domain.
PEP 3118 – Revising the buffer protocol
Author:
Travis Oliphant <oliphant at ee.byu.edu>, Carl Banks <pythondev at aerojockey.com>
Status:
Final
Type:
Standards Track
Created:
28-Aug-2006
Python-Version:
3.0
Post-History:
Table of Contents
Abstract
Rationale
Proposal Overview
Specification
Access flags
The Py_buffer struct
Releasing the buffer
New C-API calls are proposed
Additions to the struct string-syntax
Examples of Data-Format Descriptions
Code to be affected
Issues and Details
Code
Examples
Ex. 1
Ex. 2
Ex. 3
Ex. 4
Copyright
Abstract
This PEP proposes re-designing the buffer interface (PyBufferProcs
function pointers) to improve the way Python allows memory sharing in
Python 3.0.
In particular, it is proposed that the character buffer portion
of the API be eliminated and the multiple-segment portion be
re-designed in conjunction with allowing for strided memory
to be shared. In addition, the new buffer interface will
allow the sharing of any multi-dimensional nature of the
memory and what data-format the memory contains.
This interface will allow any extension module to either
create objects that share memory or create algorithms that
use and manipulate raw memory from arbitrary objects that
export the interface.
Rationale
The Python 2.X buffer protocol allows different Python types to
exchange a pointer to a sequence of internal buffers. This
functionality is extremely useful for sharing large segments of
memory between different high-level objects, but it is too limited and
has issues:
There is the little-used “sequence-of-segments” option
(bf_getsegcount) that is not well motivated.
There is the apparently redundant character-buffer option
(bf_getcharbuffer).
There is no way for a consumer to tell the buffer-API-exporting
object it is “finished” with its view of the memory and
therefore no way for the exporting object to be sure that it is
safe to reallocate the pointer to the memory that it owns (for
example, the array object reallocating its memory after sharing
it with the buffer object which held the original pointer led
to the infamous buffer-object problem).
Memory is just a pointer with a length. There is no way to
describe what is “in” the memory (float, int, C-structure, etc.)
There is no shape information provided for the memory. But,
several array-like Python types could make use of a standard
way to describe the shape-interpretation of the memory
(wxPython, GTK, pyQT, CVXOPT, PyVox, Audio and Video
Libraries, ctypes, NumPy, data-base interfaces, etc.)
There is no way to share discontiguous memory (except through
the sequence of segments notion).
There are two widely used libraries that use the concept of
discontiguous memory: PIL and NumPy. Their view of discontiguous
arrays is different, though. The proposed buffer interface allows
sharing of either memory model. Exporters will typically use only one
approach and consumers may choose to support discontiguous
arrays of each type however they choose.
NumPy uses the notion of constant striding in each dimension as its
basic concept of an array. With this concept, a simple sub-region
of a larger array can be described without copying the data.
Thus, stride information is the additional information that must be
shared.
The PIL uses a more opaque memory representation. Sometimes an
image is contained in a contiguous segment of memory, but sometimes
it is contained in an array of pointers to the contiguous segments
(usually lines) of the image. The PIL is where the idea of multiple
buffer segments in the original buffer interface came from.
NumPy’s strided memory model is used more often in computational
libraries and because it is so simple it makes sense to support
memory sharing using this model. The PIL memory model is sometimes
used in C-code where a 2-d array can then be accessed using double
pointer indirection: e.g. image[i][j].
The buffer interface should allow the object to export either of these
memory models. Consumers are free to either require contiguous memory
or write code to handle one or both of these memory models.
Proposal Overview
Eliminate the char-buffer and multiple-segment sections of the
buffer-protocol.
Unify the read/write versions of getting the buffer.
Add a new function to the interface that should be called when
the consumer object is “done” with the memory area.
Add a new variable to allow the interface to describe what is in
memory (unifying what is currently done now in struct and
array)
Add a new variable to allow the protocol to share shape information
Add a new variable for sharing stride information
Add a new mechanism for sharing arrays that must
be accessed using pointer indirection.
Fix all objects in the core and the standard library to conform
to the new interface
Extend the struct module to handle more format specifiers
Extend the buffer object into a new memory object which places
a Python veneer around the buffer interface.
Add a few functions to make it easy to copy contiguous data
in and out of object supporting the buffer interface.
Specification
While the new specification allows for complicated memory sharing,
simple contiguous buffers of bytes can still be obtained from an
object. In fact, the new protocol allows a standard mechanism for
doing this even if the original object is not represented as a
contiguous chunk of memory.
The easiest way to obtain a simple contiguous chunk of memory is
to use the provided C-API to obtain a chunk of memory.
Change the PyBufferProcs structure to
typedef struct {
    getbufferproc bf_getbuffer;
    releasebufferproc bf_releasebuffer;
} PyBufferProcs;
Both of these routines are optional for a type object
typedef int (*getbufferproc)(PyObject *obj, PyBuffer *view, int flags)
This function returns 0 on success and -1 on failure (and raises an
error). The first variable is the “exporting” object. The second
argument is the address to a bufferinfo structure. Both arguments must
never be NULL.
The third argument indicates what kind of buffer the consumer is
prepared to deal with and therefore what kind of buffer the exporter
is allowed to return. The new buffer interface allows for much more
complicated memory sharing possibilities. Some consumers may not be
able to handle all the complexity but may want to see if the
exporter will let them take a simpler view to its memory.
In addition, some exporters may not be able to share memory in every
possible way and may need to raise errors to signal to some consumers
that something is just not possible. These errors should be
PyErr_BufferError unless there is another error that is actually
causing the problem. The exporter can use flags information to
simplify how much of the PyBuffer structure is filled in with
non-default values and/or raise an error if the object can’t support a
simpler view of its memory.
The exporter should always fill in all elements of the buffer
structure (with defaults or NULLs if nothing else is requested). The
PyBuffer_FillInfo function can be used for simple cases.
Access flags
Some flags are useful for requesting a specific kind of memory
segment, while others indicate to the exporter what kind of
information the consumer can deal with. If certain information is not
asked for by the consumer, but the exporter cannot share its memory
without that information, then a PyErr_BufferError should be raised.
PyBUF_SIMPLE
This is the default flag state (0). The returned buffer may or may
not have writable memory. The format will be assumed to be
unsigned bytes. This is a “stand-alone” flag constant. It never
needs to be |’d to the others. The exporter will raise an error if
it cannot provide such a contiguous buffer of bytes.
PyBUF_WRITABLE
The returned buffer must be writable. If it is not writable,
then raise an error.
PyBUF_FORMAT
The returned buffer must have true format information if this flag
is provided. This would be used when the consumer is going to be
checking for what ‘kind’ of data is actually stored. An exporter
should always be able to provide this information if requested. If
format is not explicitly requested then the format must be returned
as NULL (which means “B”, or unsigned bytes)
PyBUF_ND
The returned buffer must provide shape information. The memory will
be assumed C-style contiguous (last dimension varies the fastest).
The exporter may raise an error if it cannot provide this kind of
contiguous buffer. If this is not given then shape will be NULL.
PyBUF_STRIDES (implies PyBUF_ND)
The returned buffer must provide strides information (i.e. the
strides cannot be NULL). This would be used when the consumer can
handle strided, discontiguous arrays. Handling strides
automatically assumes you can handle shape. The exporter may raise
an error if it cannot provide a strided-only representation of the
data (i.e. without the suboffsets).
PyBUF_C_CONTIGUOUS
PyBUF_F_CONTIGUOUS
PyBUF_ANY_CONTIGUOUS
These flags indicate that the returned buffer must be respectively,
C-contiguous (last dimension varies the fastest), Fortran
contiguous (first dimension varies the fastest) or either one.
All of these flags imply PyBUF_STRIDES and guarantee that the
strides buffer info structure will be filled in correctly.
PyBUF_INDIRECT (implies PyBUF_STRIDES)
The returned buffer must have suboffsets information (which can be
NULL if no suboffsets are needed). This would be used when the
consumer can handle indirect array referencing implied by these
suboffsets.
Specialized combinations of flags for specific kinds of memory sharing.
Multi-dimensional (but contiguous)
PyBUF_CONTIG (PyBUF_ND | PyBUF_WRITABLE)
PyBUF_CONTIG_RO (PyBUF_ND)
Multi-dimensional using strides but aligned
PyBUF_STRIDED (PyBUF_STRIDES | PyBUF_WRITABLE)
PyBUF_STRIDED_RO (PyBUF_STRIDES)
Multi-dimensional using strides and not necessarily aligned
PyBUF_RECORDS (PyBUF_STRIDES | PyBUF_WRITABLE | PyBUF_FORMAT)
PyBUF_RECORDS_RO (PyBUF_STRIDES | PyBUF_FORMAT)
Multi-dimensional using sub-offsets
PyBUF_FULL (PyBUF_INDIRECT | PyBUF_WRITABLE | PyBUF_FORMAT)
PyBUF_FULL_RO (PyBUF_INDIRECT | PyBUF_FORMAT)
Thus, the consumer simply wanting a contiguous chunk of bytes from
the object would use PyBUF_SIMPLE, while a consumer that understands
how to make use of the most complicated cases could use PyBUF_FULL.
The format information is only guaranteed to be non-NULL if
PyBUF_FORMAT is in the flag argument, otherwise it is expected the
consumer will assume unsigned bytes.
There is a C-API that simple exporting objects can use to fill-in the
buffer info structure correctly according to the provided flags if a
contiguous chunk of “unsigned bytes” is all that can be exported.
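To give a feel for how these concepts surface at the Python level, here is a small sketch using the memoryview object that eventually grew out of this proposal in CPython 3 (the final API names differ from the draft names used in this PEP, and the printed values are typical rather than guaranteed):

import array

a = array.array('i', range(6))
m = memoryview(a)               # array exports the buffer interface
print(m.format, m.itemsize)     # 'i' and the platform's int size (typically 4)
print(m.shape, m.strides)       # (6,) and (itemsize,)

b = m.cast('B')                 # a flat "unsigned bytes" view, akin to PyBUF_SIMPLE
print(len(b), b.format)         # 6 * itemsize, 'B'
b.release()
m.release()                     # mirrors the bf_releasebuffer call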
The Py_buffer struct
The bufferinfo structure is:
struct bufferinfo {
    void *buf;
    Py_ssize_t len;
    int readonly;
    const char *format;
    int ndim;
    Py_ssize_t *shape;
    Py_ssize_t *strides;
    Py_ssize_t *suboffsets;
    Py_ssize_t itemsize;
    void *internal;
} Py_buffer;
Before calling the bf_getbuffer function, the bufferinfo structure can
be filled with whatever, but the buf field must be NULL when
requesting a new buffer. Upon return from bf_getbuffer, the
bufferinfo structure is filled in with relevant information about the
buffer. This same bufferinfo structure must be passed to
bf_releasebuffer (if available) when the consumer is done with the
memory. The caller is responsible for keeping a reference to obj until
releasebuffer is called (i.e. the call to bf_getbuffer does not alter
the reference count of obj).
The members of the bufferinfo structure are:
bufa pointer to the start of the memory for the object
lenthe total bytes of memory the object uses. This should be the
same as the product of the shape array multiplied by the number of
bytes per item of memory.
readonlyan integer variable to hold whether or not the memory is readonly.
1 means the memory is readonly, zero means the memory is writable.
formata NULL-terminated format-string (following the struct-style syntax
including extensions) indicating what is in each element of
memory. The number of elements is len / itemsize, where itemsize
is the number of bytes implied by the format. This can be NULL which
implies standard unsigned bytes (“B”).
ndima variable storing the number of dimensions the memory represents.
Must be >=0. A value of 0 means that shape and strides and suboffsets
must be NULL (i.e. the memory represents a scalar).
shapean array of Py_ssize_t of length ndims indicating the
shape of the memory as an N-D array. Note that ((*shape)[0] *
... * (*shape)[ndims-1])*itemsize = len. If ndims is 0 (indicating
a scalar), then this must be NULL.
stridesaddress of a Py_ssize_t* variable that will be filled with a
pointer to an array of Py_ssize_t of length ndims (or NULL
if ndims is 0). indicating the number of bytes to skip to get to
the next element in each dimension. If this is not requested by
the caller (PyBUF_STRIDES is not set), then this should be set
to NULL which indicates a C-style contiguous array or a
PyExc_BufferError raised if this is not possible.
suboffsetsaddress of a Py_ssize_t * variable that will be filled with a
pointer to an array of Py_ssize_t of length *ndims. If
these suboffset numbers are >=0, then the value stored along the
indicated dimension is a pointer and the suboffset value dictates
how many bytes to add to the pointer after de-referencing. A
suboffset value that it negative indicates that no de-referencing
should occur (striding in a contiguous memory block). If all
suboffsets are negative (i.e. no de-referencing is needed, then
this must be NULL (the default value). If this is not requested
by the caller (PyBUF_INDIRECT is not set), then this should be
set to NULL or an PyExc_BufferError raised if this is not possible.For clarity, here is a function that returns a pointer to the
element in an N-D array pointed to by an N-dimensional index when
there are both non-NULL strides and suboffsets:
void *get_item_pointer(int ndim, void *buf, Py_ssize_t *strides,
Py_ssize_t *suboffsets, Py_ssize_t *indices) {
char *pointer = (char*)buf;
int i;
for (i = 0; i < ndim; i++) {
pointer += strides[i] * indices[i];
if (suboffsets[i] >=0 ) {
pointer = *((char**)pointer) + suboffsets[i];
}
}
return (void*)pointer;
}
Notice the suboffset is added “after” the dereferencing occurs.
Thus slicing in the ith dimension would add to the suboffsets in
the (i-1)st dimension. Slicing in the first dimension would change
the location of the starting pointer directly (i.e. buf would
be modified).
itemsize: a storage for the itemsize (in bytes) of each element of
the shared memory. It is technically unnecessary as it can be
obtained using PyBuffer_SizeFromFormat, however an exporter may know
this information without parsing the format string and it is
necessary to know the itemsize for proper interpretation of striding.
Therefore, storing it is more convenient and faster.
internal: for use internally by the exporting object. For example,
this might be re-cast as an integer by the exporter and used to
store flags about whether or not the shape, strides, and suboffsets
arrays must be freed when the buffer is released. The consumer
should never alter this value.
The exporter is responsible for making sure that any memory pointed to
by buf, format, shape, strides, and suboffsets is valid until
releasebuffer is called. If the exporter wants to be able to change
an object’s shape, strides, and/or suboffsets before releasebuffer is
called then it should allocate those arrays when getbuffer is called
(pointing to them in the buffer-info structure provided) and free them
when releasebuffer is called.
Releasing the buffer
The same bufferinfo struct should be used in the release-buffer
interface call. The caller is responsible for the memory of the
Py_buffer structure itself.
typedef void (*releasebufferproc)(PyObject *obj, Py_buffer *view)
Callers of getbufferproc must make sure that this function is called
when memory previously acquired from the object is no longer needed.
The exporter of the interface must make sure that any memory pointed
to in the bufferinfo structure remains valid until releasebuffer is
called.
If the bf_releasebuffer function is not provided (i.e. it is NULL),
then it does not ever need to be called.
Exporters will need to define a bf_releasebuffer function if they can
re-allocate their memory, strides, shape, suboffsets, or format
variables which they might share through the struct bufferinfo.
Several mechanisms could be used to keep track of how many getbuffer
calls have been made and shared. Either a single variable could be
used to keep track of how many “views” have been exported, or a
linked-list of bufferinfo structures filled in could be maintained in
each object.
All that is specifically required by the exporter, however, is to
ensure that any memory shared through the bufferinfo structure remains
valid until releasebuffer is called on the bufferinfo structure
exporting that memory.
New C-API calls are proposed
int PyObject_CheckBuffer(PyObject *obj)
Return 1 if the getbuffer function is available otherwise 0.
int PyObject_GetBuffer(PyObject *obj, Py_buffer *view,
int flags)
This is a C-API version of the getbuffer function call. It checks to
make sure object has the required function pointer and issues the
call. Returns -1 and raises an error on failure and returns 0 on
success.
void PyBuffer_Release(PyObject *obj, Py_buffer *view)
This is a C-API version of the releasebuffer function call. It checks
to make sure the object has the required function pointer and issues
the call. This function always succeeds even if there is no releasebuffer
function for the object.
PyObject *PyObject_GetMemoryView(PyObject *obj)
Return a memory-view object from an object that defines the buffer interface.
A memory-view object is an extended buffer object that could replace
the buffer object (but doesn’t have to as that could be kept as a
simple 1-d memory-view object). Its C-structure is
typedef struct {
    PyObject_HEAD
    PyObject *base;
    Py_buffer view;
} PyMemoryViewObject;
This is functionally similar to the current buffer object except a
reference to base is kept and the memory view is not re-grabbed.
Thus, this memory view object holds on to the memory of base until it
is deleted.
This memory-view object will support multi-dimensional slicing and be
the first object provided with Python to do so. Slices of the
memory-view object are other memory-view objects with the same base
but with a different view of the base object.
When an “element” from the memory-view is returned it is always a
bytes object whose format should be interpreted by the format
attribute of the memoryview object. The struct module can be used to
“decode” the bytes in Python if desired. Or the contents can be
passed to a NumPy array or other object consuming the buffer protocol.
The Python name will be
__builtin__.memoryview
Methods:
__getitem__ (will support multi-dimensional slicing)
__setitem__ (will support multi-dimensional slicing)
tobytes (obtain a new bytes-object of a copy of the memory).
tolist (obtain a “nested” list of the memory. Everything
is interpreted into standard Python objects
as the struct module unpack would do – in fact
it uses struct.unpack to accomplish it).
Attributes (taken from the memory of the base object):
format
itemsize
shape
strides
suboffsets
readonly
ndim
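For a rough feel of the intended behavior, here is a short sketch using
the memoryview type as it eventually shipped in Python 3; the exact
method and attribute set proposed above may differ in detail from what
finally landed (for instance, a buffer of plain bytes reports format
'B', itemsize 1 and ndim 1):

import struct

buf = bytearray(struct.pack('4i', 1, 2, 3, 4))   # 16 bytes of packed ints
m = memoryview(buf)

print(m.format, m.itemsize, m.ndim, m.readonly)  # B 1 1 False
print(m.tobytes() == bytes(buf))                 # True: tobytes() copies the memory
print(m.tolist()[:4])                            # first four raw byte values

# A slice is another memory view onto the same base object, not a copy:
half = m[:8]
half[:] = struct.pack('2i', 9, 9)
print(struct.unpack('4i', buf))                  # (9, 9, 3, 4): the base changed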
Py_ssize_t PyBuffer_SizeFromFormat(const char *)
Return the implied itemsize of the data-format area from a struct-style
description.
PyObject * PyMemoryView_GetContiguous(PyObject *obj, int buffertype,
char fortran)
Return a memoryview object to a contiguous chunk of memory represented
by obj. If a copy must be made (because the memory pointed to by obj
is not contiguous), then a new bytes object will be created and become
the base object for the returned memory view object.
The buffertype argument can be PyBUF_READ, PyBUF_WRITE,
PyBUF_UPDATEIFCOPY to determine whether the returned buffer should be
readable, writable, or set to update the original buffer if a copy
must be made. If buffertype is PyBUF_WRITE and the buffer is not
contiguous an error will be raised. In this circumstance, the user
can use PyBUF_UPDATEIFCOPY to ensure that a writable temporary
contiguous buffer is returned. The contents of this contiguous buffer
will be copied back into the original object after the memoryview
object is deleted as long as the original object is writable. If this
is not allowed by the original object, then a BufferError is raised.
If the object is multi-dimensional, then if fortran is ‘F’, the first
dimension of the underlying array will vary the fastest in the buffer.
If fortran is ‘C’, then the last dimension will vary the fastest
(C-style contiguous). If fortran is ‘A’, then it does not matter and
you will get whatever the object decides is more efficient. If a copy
is made, then the memory must be freed by calling PyMem_Free.
You receive a new reference to the memoryview object.
int PyObject_CopyToObject(PyObject *obj, void *buf, Py_ssize_t len,
char fortran)
Copy len bytes of data pointed to by the contiguous chunk of
memory pointed to by buf into the buffer exported by obj. Return
0 on success and return -1 and raise an error on failure. If the
object does not have a writable buffer, then an error is raised. If
fortran is ‘F’, then if the object is multi-dimensional, then the data
will be copied into the array in Fortran-style (first dimension varies
the fastest). If fortran is ‘C’, then the data will be copied into
the array in C-style (last dimension varies the fastest). If fortran
is ‘A’, then it does not matter and the copy will be made in whatever
way is more efficient.
int PyObject_CopyData(PyObject *dest, PyObject *src)
These last three C-API calls allow a standard way of getting data in and
out of Python objects into contiguous memory areas no matter how it is
actually stored. These calls use the extended buffer interface to perform
their work.
int PyBuffer_IsContiguous(Py_buffer *view, char fortran)
Return 1 if the memory defined by the view object is C-style (fortran
= ‘C’) or Fortran-style (fortran = ‘F’) contiguous or either one
(fortran = ‘A’). Return 0 otherwise.
void PyBuffer_FillContiguousStrides(int ndim, Py_ssize_t *shape,
Py_ssize_t *strides, Py_ssize_t itemsize,
char fortran)
Fill the strides array with byte-strides of a contiguous (C-style if
fortran is ‘C’ or Fortran-style if fortran is ‘F’) array of the given
shape with the given number of bytes per element.
int PyBuffer_FillInfo(Py_buffer *view, void *buf,
Py_ssize_t len, int readonly, int infoflags)
Fills in a buffer-info structure correctly for an exporter that can
only share a contiguous chunk of memory of “unsigned bytes” of the
given length. Returns 0 on success and -1 (with raising an error) on
error.
PyExc_BufferError
A new error object for returning buffer errors which arise because an
exporter cannot provide the kind of buffer that a consumer expects.
This will also be raised when a consumer requests a buffer from an
object that does not provide the protocol.
Additions to the struct string-syntax
The struct string-syntax is missing some characters to fully
implement data-format descriptions already available elsewhere (in
ctypes and NumPy for example). The Python 2.5 specification is
at http://docs.python.org/library/struct.html.
Here are the proposed additions:
Character          Description
‘t’                bit (number before states how many bits)
‘?’                platform _Bool type
‘g’                long double
‘c’                ucs-1 (latin-1) encoding
‘u’                ucs-2
‘w’                ucs-4
‘O’                pointer to Python Object
‘Z’                complex (whatever the next specifier is)
‘&’                specific pointer (prefix before another character)
‘T{}’              structure (detailed layout inside {})
‘(k1,k2,…,kn)’     multi-dimensional array of whatever follows
‘:name:’           optional name of the preceding element
‘X{}’              pointer to a function (optional function signature
                   inside {} with any return value preceded by -> and
                   placed at the end)
The struct module will be changed to understand these as well and
return appropriate Python objects on unpacking. Unpacking a
long-double will return a decimal object or a ctypes long-double.
Unpacking ‘u’ or ‘w’ will return Python unicode. Unpacking a
multi-dimensional array will return a list (of lists if >1d).
Unpacking a pointer will return a ctypes pointer object. Unpacking a
function pointer will return a ctypes call-object (perhaps). Unpacking
a bit will return a Python Bool. White-space in the struct-string
syntax will be ignored if it isn’t already. Unpacking a named-object
will return some kind of named-tuple-like object that acts like a
tuple but whose entries can also be accessed by name. Unpacking a
nested structure will return a nested tuple.
Endian-specification (‘!’, ‘@’, ‘=’, ‘>’, ‘<’, ‘^’) is also allowed
inside the string so that it can change if needed. The
previously-specified endian string is in force until changed. The
default endian is ‘@’ which means native data-types and alignment. If
unaligned native data-types are requested, then the endian
specification is ‘^’.
According to the struct-module, a number can precede a character
code to specify how many of that type there are. The
(k1,k2,...,kn) extension also allows specifying if the data is
supposed to be viewed as a (C-style contiguous, last-dimension
varies the fastest) multi-dimensional array of a particular format.
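The count prefix already exists in the struct module that ships today;
the multi-dimensional form is purely a proposal. A hedged sketch of the
existing behavior:

import struct

# A leading count repeats the following code: '4d' describes four C
# doubles, exactly like 'dddd'.
assert struct.calcsize('4d') == struct.calcsize('dddd')

packed = struct.pack('4d', 1.0, 2.0, 3.0, 4.0)
print(struct.unpack('4d', packed))   # (1.0, 2.0, 3.0, 4.0)

# The proposed '(16,4)d' form is not understood by the shipped struct
# module; it would describe the same flat memory as '64d', but tagged as
# a 16x4 C-contiguous array for consumers of the new buffer interface.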
Functions should be added to ctypes to create a ctypes object from
a struct description, and long-double and ucs-2 should be added to
ctypes.
Examples of Data-Format Descriptions
Here are some examples of C-structures and how they would be
represented using the struct-style syntax.
<named> is the constructor for a named-tuple (not-specified yet).
float:
    'd' <-> Python float
complex double:
    'Zd' <-> Python complex
RGB Pixel data:
    'BBB' <-> (int, int, int)
    'B:r: B:g: B:b:' <-> <named>((int, int, int), ('r', 'g', 'b'))
Mixed endian (weird but possible):
    '>i:big: <i:little:' <-> <named>((int, int), ('big', 'little'))
Nested structure:
    struct {
        int ival;
        struct {
            unsigned short sval;
            unsigned char bval;
            unsigned char cval;
        } sub;
    }
    """i:ival:
       T{
          H:sval:
          B:bval:
          B:cval:
          }:sub:
    """
Nested array:
    struct {
        int ival;
        double data[16*4];
    }
    """i:ival:
       (16,4)d:data:
    """
Note that in the last example, the C-structure compared against is
intentionally a 1-d array and not a 2-d array data[16][4]. The reason
for this is to avoid confusion between static multi-dimensional
arrays in C (which are laid out contiguously) and dynamic
multi-dimensional arrays which use the same syntax to access elements,
data[0][1], but whose memory is not necessarily contiguous. The
struct-syntax always uses contiguous memory and the
multi-dimensional character is information about the memory to be
communicated by the exporter.
In other words, the struct-syntax description does not have to match
the C-syntax exactly as long as it describes the same memory layout.
The fact that a C-compiler would think of the memory as a 1-d array of
doubles is irrelevant to the fact that the exporter wanted to
communicate to the consumer that this field of the memory should be
thought of as a 2-d array where a new dimension is considered after
every 4 elements.
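To make the last example concrete, here is a hedged Python sketch: the
shipped struct module understands neither the '(16,4)' nor the ':name:'
notation, so a consumer today would unpack the flat memory with an
equivalent flat format and re-nest it by hand (the 'i64d' format assumes
native alignment, matching the C struct above):

import struct

# Pack something shaped like:  struct { int ival; double data[16*4]; }
raw = struct.pack('i64d', 7, *[float(n) for n in range(64)])

fields = struct.unpack('i64d', raw)
ival, flat = fields[0], fields[1:]

# Re-nest the 64 flat doubles as the 16x4 array the exporter intended.
data = [list(flat[row * 4:(row + 1) * 4]) for row in range(16)]

print(ival)        # 7
print(data[0])     # [0.0, 1.0, 2.0, 3.0]
print(data[15])    # [60.0, 61.0, 62.0, 63.0]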
Code to be affected
All objects and modules in Python that export or consume the old
buffer interface will be modified. Here is a partial list.
buffer object
bytes object
string object
unicode object
array module
struct module
mmap module
ctypes module
Anything else using the buffer API.
Issues and Details
It is intended that this PEP will be back-ported to Python 2.6 by
adding the C-API and the two functions to the existing buffer
protocol.
Previous versions of this PEP proposed a read/write locking scheme,
but it was later perceived as a) too complicated for common simple use
cases that do not require any locking and b) too simple for use cases
that required concurrent read/write access to a buffer with changing,
short-living locks. It is therefore left to users to implement their
own specific locking scheme around buffer objects if they require
consistent views across concurrent read/write access. A future PEP
may be proposed which includes a separate locking API after some
experience with these user-schemes is obtained.
The sharing of strided memory and suboffsets is new and can be seen as
a modification of the multiple-segment interface. It is motivated by
NumPy and the PIL. NumPy objects should be able to share their
strided memory with code that understands how to manage strided memory
because strided memory is very common when interfacing with compute
libraries.
Also, with this approach it should be possible to write generic code
that works with both kinds of memory without copying.
Memory management of the format string, the shape array, the strides
array, and the suboffsets array in the bufferinfo structure is always
the responsibility of the exporting object. The consumer should not
set these pointers to any other memory or try to free them.
Several ideas were discussed and rejected:
Having a “releaser” object whose release-buffer was called. This
was deemed unacceptable because it caused the protocol to be
asymmetric (you called release on something different than you
“got” the buffer from). It also complicated the protocol without
providing a real benefit.

Passing all the struct variables separately into the function.
This had the advantage that it allowed one to set NULL to
variables that were not of interest, but it also made the function
call more difficult. The flags variable allows the same
ability of consumers to be “simple” in how they call the protocol.
Code
The authors of the PEP promise to contribute and maintain the code for
this proposal but will welcome any help.
Examples
Ex. 1
This example shows how an image object that uses contiguous lines might expose its buffer:
struct rgba {
    unsigned char r, g, b, a;
};

struct ImageObject {
    PyObject_HEAD;
    ...
    struct rgba** lines;
    Py_ssize_t height;
    Py_ssize_t width;
    Py_ssize_t shape_array[2];
    Py_ssize_t stride_array[2];
    Py_ssize_t view_count;
};
“lines” points to a malloced 1-D array of (struct rgba*). Each pointer
in THAT block points to a separately malloced array of (struct rgba).
In order to access, say, the red value of the pixel at x=30, y=50, you’d use “lines[50][30].r”.
So what does ImageObject’s getbuffer do? Leaving error checking out:
int Image_getbuffer(PyObject *self, Py_buffer *view, int flags) {
    static Py_ssize_t suboffsets[2] = {0, -1};
    view->buf = self->lines;
    view->len = self->height * self->width * sizeof(struct rgba);
    view->readonly = 0;
    view->ndim = 2;
    self->shape_array[0] = self->height;
    self->shape_array[1] = self->width;
    view->shape = self->shape_array;
    self->stride_array[0] = sizeof(struct rgba*);
    self->stride_array[1] = sizeof(struct rgba);
    view->strides = self->stride_array;
    view->suboffsets = suboffsets;
    self->view_count++;
    return 0;
}

int Image_releasebuffer(PyObject *self, Py_buffer *view) {
    self->view_count--;
    return 0;
}
Ex. 2
This example shows how an object that wants to expose a contiguous
chunk of memory (which will never be re-allocated while the object is
alive) would do that.
int myobject_getbuffer(PyObject *self, Py_buffer *view, int flags) {
    void *buf;
    Py_ssize_t len;
    int readonly = 0;

    buf = /* Point to buffer */
    len = /* Set to size of buffer */
    readonly = /* Set to 1 if readonly */

    return PyBuffer_FillInfo(view, buf, len, readonly, flags);
}

/* No releasebuffer is necessary because the memory will never
   be re-allocated
*/
Ex. 3
A consumer that wants to only get a simple contiguous chunk of bytes
from a Python object, obj, would do the following:
Py_buffer view;
if (PyObject_GetBuffer(obj, &view, PyBUF_SIMPLE) < 0) {
    /* error return */
}

/* Now, view.buf is the pointer to memory
   view.len is the length
   view.readonly is whether or not the memory is read-only.
*/

/* After using the information and you don't need it anymore */
PyBuffer_Release(obj, &view);
Ex. 4
A consumer that wants to be able to use any object’s memory but is
writing an algorithm that only handles contiguous memory could do the following:
void *buf;
Py_ssize_t len;
char *format;
int copy;

copy = PyObject_GetContiguous(obj, &buf, &len, &format, 0, 'A');
if (copy < 0) {
    /* error return */
}

/* process memory pointed to by buffer if format is correct */

/* Optional:
   if, after processing, we want to copy data from buffer back
   into the object we could do
*/
if (PyObject_CopyToObject(obj, buf, len, 'A') < 0) {
    /* error return */
}

/* Make sure that if a copy was made, the memory is freed */
if (copy == 1) PyMem_Free(buf);
Copyright
This PEP is placed in the public domain.
PEP 3119 – Introducing Abstract Base Classes
Author:
Guido van Rossum <guido at python.org>, Talin <viridia at gmail.com>
Status:
Final
Type:
Standards Track
Created:
18-Apr-2007
Python-Version:
3.0
Post-History:
26-Apr-2007, 11-May-2007
Table of Contents
Abstract
Acknowledgements
Rationale
Specification
Overloading isinstance() and issubclass()
The abc Module: an ABC Support Framework
ABCs for Containers and Iterators
One Trick Ponies
Sets
Mappings
Sequences
Strings
ABCs vs. Alternatives
ABCs vs. Duck Typing
ABCs vs. Generic Functions
ABCs vs. Interfaces
References
Copyright
Abstract
This is a proposal to add Abstract Base Class (ABC) support to Python
3000. It proposes:
A way to overload isinstance() and issubclass().
A new module abc which serves as an “ABC support framework”. It
defines a metaclass for use with ABCs and a decorator that can be
used to define abstract methods.
Specific ABCs for containers and iterators, to be added to the
collections module.
Much of the thinking that went into the proposal is not about the
specific mechanism of ABCs, as contrasted with Interfaces or Generic
Functions (GFs), but about clarifying philosophical issues like “what
makes a set”, “what makes a mapping” and “what makes a sequence”.
There’s also a companion PEP 3141, which defines ABCs for numeric
types.
Acknowledgements
Talin wrote the Rationale below [1] as well as most of the section on
ABCs vs. Interfaces. For that alone he deserves co-authorship. The
rest of the PEP uses “I” referring to the first author.
Rationale
In the domain of object-oriented programming, the usage patterns for
interacting with an object can be divided into two basic categories,
which are ‘invocation’ and ‘inspection’.
Invocation means interacting with an object by invoking its methods.
Usually this is combined with polymorphism, so that invoking a given
method may run different code depending on the type of an object.
Inspection means the ability for external code (outside of the
object’s methods) to examine the type or properties of that object,
and make decisions on how to treat that object based on that
information.
Both usage patterns serve the same general end, which is to be able to
support the processing of diverse and potentially novel objects in a
uniform way, but at the same time allowing processing decisions to be
customized for each different type of object.
In classical OOP theory, invocation is the preferred usage pattern,
and inspection is actively discouraged, being considered a relic of an
earlier, procedural programming style. However, in practice this view
is simply too dogmatic and inflexible, and leads to a kind of design
rigidity that is very much at odds with the dynamic nature of a
language like Python.
In particular, there is often a need to process objects in a way that
wasn’t anticipated by the creator of the object class. It is not
always the best solution to build in to every object methods that
satisfy the needs of every possible user of that object. Moreover,
there are many powerful dispatch philosophies that are in direct
contrast to the classic OOP requirement of behavior being strictly
encapsulated within an object, examples being rule or pattern-match
driven logic.
On the other hand, one of the criticisms of inspection by classic
OOP theorists is the lack of formalisms and the ad hoc nature of what
is being inspected. In a language such as Python, in which almost any
aspect of an object can be reflected and directly accessed by external
code, there are many different ways to test whether an object conforms
to a particular protocol or not. For example, if asking ‘is this
object a mutable sequence container?’, one can look for a base class
of ‘list’, or one can look for a method named ‘__getitem__’. But note
that although these tests may seem obvious, neither of them is
correct, as one generates false negatives, and the other false
positives.
The generally agreed-upon remedy is to standardize the tests, and
group them into a formal arrangement. This is most easily done by
associating with each class a set of standard testable properties,
either via the inheritance mechanism or some other means. Each test
carries with it a set of promises: it contains a promise about the
general behavior of the class, and a promise as to what other class
methods will be available.
This PEP proposes a particular strategy for organizing these tests
known as Abstract Base Classes, or ABC. ABCs are simply Python
classes that are added into an object’s inheritance tree to signal
certain features of that object to an external inspector. Tests are
done using isinstance(), and the presence of a particular ABC
means that the test has passed.
In addition, the ABCs define a minimal set of methods that establish
the characteristic behavior of the type. Code that discriminates
objects based on their ABC type can trust that those methods will
always be present. Each of these methods is accompanied by a
generalized abstract semantic definition that is described in the
documentation for the ABC. These standard semantic definitions are
not enforced, but are strongly recommended.
Like all other things in Python, these promises are in the nature of a
friendly agreement, which in this case means that while the
language does enforce some of the promises made in the ABC, it is up
to the implementer of the concrete class to ensure that the remaining
ones are kept.
Specification
The specification follows the categories listed in the abstract:
A way to overload isinstance() and issubclass().
A new module abc which serves as an “ABC support framework”. It
defines a metaclass for use with ABCs and a decorator that can be
used to define abstract methods.
Specific ABCs for containers and iterators, to be added to the
collections module.
Overloading isinstance() and issubclass()
During the development of this PEP and of its companion, PEP 3141, we
repeatedly faced the choice between standardizing more, fine-grained
ABCs or fewer, coarse-grained ones. For example, at one stage, PEP
3141 introduced the following stack of base classes used for complex
numbers: MonoidUnderPlus, AdditiveGroup, Ring, Field, Complex (each
derived from the previous). And the discussion mentioned several
other algebraic categorizations that were left out: Algebraic,
Transcendental, IntegralDomain, and PrincipalIdealDomain. In
earlier versions of the current PEP, we considered the use cases for
separate classes like Set, ComposableSet, MutableSet, HashableSet,
MutableComposableSet, HashableComposableSet.
The dilemma here is that we’d rather have fewer ABCs, but then what
should a user do who needs a less refined ABC? Consider e.g. the
plight of a mathematician who wants to define their own kind of
Transcendental numbers, but also wants float and int to be considered
Transcendental. PEP 3141 originally proposed to patch float.__bases__
for that purpose, but there are some good reasons to keep the built-in
types immutable (for one, they are shared between all Python
interpreters running in the same address space, as is used by
mod_python [16]).
Another example would be someone who wants to define a generic
function (PEP 3124) for any sequence that has an append() method.
The Sequence ABC (see below) doesn’t promise the append()
method, while MutableSequence requires not only append() but
also various other mutating methods.
To solve these and similar dilemmas, the next section will propose a
metaclass for use with ABCs that will allow us to add an ABC as a
“virtual base class” (not the same concept as in C++) to any class,
including to another ABC. This allows the standard library to define
ABCs Sequence and MutableSequence and register these as
virtual base classes for built-in types like basestring, tuple
and list, so that for example the following conditions are all
true:
isinstance([], Sequence)
issubclass(list, Sequence)
issubclass(list, MutableSequence)
isinstance((), Sequence)
not issubclass(tuple, MutableSequence)
isinstance("", Sequence)
issubclass(bytearray, MutableSequence)
The primary mechanism proposed here is to allow overloading the
built-in functions isinstance() and issubclass(). The
overloading works as follows: The call isinstance(x, C) first
checks whether C.__instancecheck__ exists, and if so, calls
C.__instancecheck__(x) instead of its normal implementation.
Similarly, the call issubclass(D, C) first checks whether
C.__subclasscheck__ exists, and if so, calls
C.__subclasscheck__(D) instead of its normal implementation.
Note that the magic names are not __isinstance__ and
__issubclass__; this is because the reversal of the arguments
could cause confusion, especially for the issubclass() overloader.
A prototype implementation of this is given in [12].
Here is an example with (naively simple) implementations of
__instancecheck__ and __subclasscheck__:
class ABCMeta(type):

    def __instancecheck__(cls, inst):
        """Implement isinstance(inst, cls)."""
        return any(cls.__subclasscheck__(c)
                   for c in {type(inst), inst.__class__})

    def __subclasscheck__(cls, sub):
        """Implement issubclass(sub, cls)."""
        candidates = cls.__dict__.get("__subclass__", set()) | {cls}
        return any(c in candidates for c in sub.mro())

class Sequence(metaclass=ABCMeta):
    __subclass__ = {list, tuple}

assert issubclass(list, Sequence)
assert issubclass(tuple, Sequence)

class AppendableSequence(Sequence):
    __subclass__ = {list}

assert issubclass(list, AppendableSequence)
assert isinstance([], AppendableSequence)

assert not issubclass(tuple, AppendableSequence)
assert not isinstance((), AppendableSequence)
The next section proposes a full-fledged implementation.
The abc Module: an ABC Support Framework
The new standard library module abc, written in pure Python,
serves as an ABC support framework. It defines a metaclass
ABCMeta and decorators @abstractmethod and
@abstractproperty. A sample implementation is given by [13].
The ABCMeta class overrides __instancecheck__ and
__subclasscheck__ and defines a register method. The
register method takes one argument, which must be a class; after
the call B.register(C), the call issubclass(C, B) will return
True, by virtue of B.__subclasscheck__(C) returning True.
Also, isinstance(x, B) is equivalent to issubclass(x.__class__,
B) or issubclass(type(x), B). (It is possible type(x) and
x.__class__ are not the same object, e.g. when x is a proxy
object.)
These methods are intended to be called on classes whose metaclass
is (derived from) ABCMeta; for example:
from abc import ABCMeta

class MyABC(metaclass=ABCMeta):
    pass

MyABC.register(tuple)

assert issubclass(tuple, MyABC)
assert isinstance((), MyABC)
The last two asserts are equivalent to the following two:
assert MyABC.__subclasscheck__(tuple)
assert MyABC.__instancecheck__(())
Of course, you can also directly subclass MyABC:
class MyClass(MyABC):
    pass

assert issubclass(MyClass, MyABC)
assert isinstance(MyClass(), MyABC)
Also, of course, a tuple is not a MyClass:
assert not issubclass(tuple, MyClass)
assert not isinstance((), MyClass)
You can register another class as a subclass of MyClass:
MyClass.register(list)
assert issubclass(list, MyClass)
assert issubclass(list, MyABC)
You can also register another ABC:
class AnotherClass(metaclass=ABCMeta):
    pass

AnotherClass.register(basestring)

MyClass.register(AnotherClass)
assert issubclass(str, MyABC)
That last assert requires tracing the following superclass-subclass
relationships:
MyABC -> MyClass (using regular subclassing)
MyClass -> AnotherClass (using registration)
AnotherClass -> basestring (using registration)
basestring -> str (using regular subclassing)
The abc module also defines a new decorator, @abstractmethod,
to be used to declare abstract methods. A class containing at least
one method declared with this decorator that hasn’t been overridden
yet cannot be instantiated. Such methods may be called from the
overriding method in the subclass (using super or direct
invocation). For example:
from abc import ABCMeta, abstractmethod

class A(metaclass=ABCMeta):
    @abstractmethod
    def foo(self): pass

A()  # raises TypeError

class B(A):
    pass

B()  # raises TypeError

class C(A):
    def foo(self): print(42)

C()  # works
Note: The @abstractmethod decorator should only be used
inside a class body, and only for classes whose metaclass is (derived
from) ABCMeta. Dynamically adding abstract methods to a class, or
attempting to modify the abstraction status of a method or class once
it is created, are not supported. The @abstractmethod only
affects subclasses derived using regular inheritance; “virtual
subclasses” registered with the register() method are not affected.
Implementation: The @abstractmethod decorator sets the
function attribute __isabstractmethod__ to the value True.
The ABCMeta.__new__ method computes the type attribute
__abstractmethods__ as the set of all method names that have an
__isabstractmethod__ attribute whose value is true. It does this
by combining the __abstractmethods__ attributes of the base
classes, adding the names of all methods in the new class dict that
have a true __isabstractmethod__ attribute, and removing the names
of all methods in the new class dict that don’t have a true
__isabstractmethod__ attribute. If the resulting
__abstractmethods__ set is non-empty, the class is considered
abstract, and attempts to instantiate it will raise TypeError.
(If this were implemented in CPython, an internal flag
Py_TPFLAGS_ABSTRACT could be used to speed up this check [6].)
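The following is a rough sketch of that bookkeeping, written against a
plain type-derived metaclass rather than the real abc.ABCMeta (whose
code differs in detail); it only illustrates how the set of abstract
names can be combined from the bases and the new class dict:

class SketchMeta(type):
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        # Start from the abstract names declared by the bases ...
        abstracts = set()
        for base in bases:
            abstracts |= set(getattr(base, '__abstractmethods__', ()))
        # ... drop names the new class dict overrides with concrete methods ...
        abstracts -= {attr for attr, value in namespace.items()
                      if not getattr(value, '__isabstractmethod__', False)}
        # ... and add names the new class dict declares as abstract.
        abstracts |= {attr for attr, value in namespace.items()
                      if getattr(value, '__isabstractmethod__', False)}
        cls.__abstractmethods__ = frozenset(abstracts)
        return cls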
Discussion: Unlike Java’s abstract methods or C++’s pure abstract
methods, abstract methods as defined here may have an implementation.
This implementation can be called via the super mechanism from the
class that overrides it. This could be useful as an end-point for a
super-call in a framework using cooperative multiple-inheritance [7],
[8].
A second decorator, @abstractproperty, is defined in order to
define abstract data attributes. Its implementation is a subclass of
the built-in property class that adds an __isabstractmethod__
attribute:
class abstractproperty(property):
    __isabstractmethod__ = True
It can be used in two ways:
class C(metaclass=ABCMeta):

    # A read-only property:

    @abstractproperty
    def readonly(self):
        return self.__x

    # A read-write property (cannot use decorator syntax):

    def getx(self):
        return self.__x
    def setx(self, value):
        self.__x = value
    x = abstractproperty(getx, setx)
Similar to abstract methods, a subclass inheriting an abstract
property (declared using either the decorator syntax or the longer
form) cannot be instantiated unless it overrides that abstract
property with a concrete property.
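For concreteness, here is a hedged sketch of such a subclass: once both
abstract attributes of C above are overridden with ordinary properties,
instances can be created (the storage attribute name used here is purely
illustrative):

class D(C):
    # Concrete read-only property replacing the abstract one.
    @property
    def readonly(self):
        return self._x

    # Concrete read-write property replacing the abstract one.
    def getx(self):
        return self._x
    def setx(self, value):
        self._x = value
    x = property(getx, setx)

d = D()          # now allowed: no abstract attributes remain
d.x = 42
assert d.x == 42 and d.readonly == 42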
ABCs for Containers and Iterators
The collections module will define ABCs necessary and sufficient
to work with sets, mappings, sequences, and some helper types such as
iterators and dictionary views. All ABCs have the above-mentioned
ABCMeta as their metaclass.
The ABCs provide implementations of their abstract methods that are
technically valid but fairly useless; e.g. __hash__ returns 0, and
__iter__ returns an empty iterator. In general, the abstract
methods represent the behavior of an empty container of the indicated
type.
Some ABCs also provide concrete (i.e. non-abstract) methods; for
example, the Iterator class has an __iter__ method returning
itself, fulfilling an important invariant of iterators (which in
Python 2 has to be implemented anew by each iterator class). These
ABCs can be considered “mix-in” classes.
No ABCs defined in the PEP override __init__, __new__,
__str__ or __repr__. Defining a standard constructor
signature would unnecessarily constrain custom container types, for
example Patricia trees or gdbm files. Defining a specific string
representation for a collection is similarly left up to individual
implementations.
Note: There are no ABCs for ordering operations (__lt__,
__le__, __ge__, __gt__). Defining these in a base class
(abstract or not) runs into problems with the accepted type for the
second operand. For example, if class Ordering defined
__lt__, one would assume that for any Ordering instances x
and y, x < y would be defined (even if it just defines a
partial ordering). But this cannot be the case: If both list and
str derived from Ordering, this would imply that [1, 2] <
(1, 2) should be defined (and presumably return False), while in
fact (in Python 3000!) such “mixed-mode comparison” operations are
explicitly forbidden and raise TypeError. See PEP 3100 and [14]
for more information. (This is a special case of a more general issue
with operations that take another argument of the same type).
One Trick Ponies
These abstract classes represent single methods like __iter__ or
__len__.
Hashable: The base class for classes defining __hash__. The
__hash__ method should return an integer. The abstract
__hash__ method always returns 0, which is a valid (albeit
inefficient) implementation. Invariant: If classes C1 and
C2 both derive from Hashable, the condition o1 == o2
must imply hash(o1) == hash(o2) for all instances o1 of
C1 and all instances o2 of C2. In other words, two
objects should never compare equal if they have different hash
values.
Another constraint is that hashable objects, once created, should
never change their value (as compared by ==) or their hash
value. If a class cannot guarantee this, it should not derive
from Hashable; if it cannot guarantee this for certain
instances, __hash__ for those instances should raise a
TypeError exception.
Note: being an instance of this class does not imply that an
object is immutable; e.g. a tuple containing a list as a member is
not immutable; its __hash__ method raises TypeError.
(This is because it recursively tries to compute the hash of each
member; if a member is unhashable it raises TypeError.)
Iterable: The base class for classes defining __iter__. The
__iter__ method should always return an instance of
Iterator (see below). The abstract __iter__ method
returns an empty iterator.
Iterator: The base class for classes defining __next__. This derives
from Iterable. The abstract __next__ method raises
StopIteration. The concrete __iter__ method returns
self. Note the distinction between Iterable and
Iterator: an Iterable can be iterated over, i.e. supports
the __iter__ method; an Iterator is what the built-in
function iter() returns, i.e. supports the __next__
method.
Sized: The base class for classes defining __len__. The __len__
method should return an Integer (see “Numbers” below) >= 0.
The abstract __len__ method returns 0. Invariant: If a
class C derives from Sized as well as from Iterable,
the invariant sum(1 for x in c) == len(c) should hold for any
instance c of C.
Container: The base class for classes defining __contains__. The
__contains__ method should return a bool. The abstract
__contains__ method returns False. Invariant: If a
class C derives from Container as well as from
Iterable, then (x in c for x in c) should be a generator
yielding only True values for any instance c of C.
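A few hedged examples of these classes as they eventually landed in
collections.abc, where the built-in types already satisfy the checks and
the Sized/Iterable invariant above can be verified directly:

from collections.abc import Hashable, Iterable, Iterator, Sized, Container

assert isinstance((1, 2, 3), Hashable)
assert isinstance([1, 2, 3], Iterable) and not isinstance([1, 2, 3], Iterator)
assert isinstance(iter([1, 2, 3]), Iterator)

c = {'a': 1, 'b': 2}
assert isinstance(c, Sized) and isinstance(c, Container)
assert sum(1 for x in c) == len(c)      # the Sized/Iterable invariant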
Open issues: Conceivably, instead of using the ABCMeta metaclass,
these classes could override __instancecheck__ and
__subclasscheck__ to check for the presence of the applicable
special method; for example:
class Sized(metaclass=ABCMeta):

    @abstractmethod
    def __len__(self):
        return 0

    @classmethod
    def __instancecheck__(cls, x):
        return hasattr(x, "__len__")

    @classmethod
    def __subclasscheck__(cls, C):
        return hasattr(C, "__bases__") and hasattr(C, "__len__")
This has the advantage of not requiring explicit registration.
However, the semantics are hard to get exactly right given the confusing
semantics of instance attributes vs. class attributes, and that a
class is an instance of its metaclass; the check for __bases__ is
only an approximation of the desired semantics. Strawman: Let’s
do it, but let’s arrange it in such a way that the registration API
also works.
Sets
These abstract classes represent read-only sets and mutable sets. The
most fundamental set operation is the membership test, written as x
in s and implemented by s.__contains__(x). This operation is
already defined by the Container class defined above. Therefore,
we define a set as a sized, iterable container for which certain
invariants from mathematical set theory hold.
The built-in type set derives from MutableSet. The built-in
type frozenset derives from Set and Hashable.
Set: This is a sized, iterable container, i.e., a subclass of
Sized, Iterable and Container. Not every subclass of
those three classes is a set though! Sets have the additional
invariant that each element occurs only once (as can be determined
by iteration), and in addition sets define concrete operators that
implement the inequality operations as subset/superset tests.
In general, the invariants for finite sets in mathematics
hold. [11]
Sets with different implementations can be compared safely,
(usually) efficiently and correctly using the mathematical
definitions of the subset/superset operations for finite sets.
The ordering operations have concrete implementations; subclasses
may override these for speed but should maintain the semantics.
Because Set derives from Sized, __eq__ may take a
shortcut and return False immediately if two sets of unequal
length are compared. Similarly, __le__ may return False
immediately if the first set has more members than the second set.
Note that set inclusion implements only a partial ordering;
e.g. {1, 2} and {1, 3} are not ordered (all three of
<, == and > return False for these arguments).
Sets cannot be ordered relative to mappings or sequences, but they
can be compared to those for equality (and then they always
compare unequal).
This class also defines concrete operators to compute union,
intersection, symmetric and asymmetric difference, respectively
__or__, __and__, __xor__ and __sub__. These
operators should return instances of Set. The default
implementations call the overridable class method
_from_iterable() with an iterable argument. This factory
method’s default implementation returns a frozenset instance;
it may be overridden to return another appropriate Set
subclass.
Finally, this class defines a concrete method _hash which
computes the hash value from the elements. Hashable subclasses of
Set can implement __hash__ by calling _hash or they
can reimplement the same algorithm more efficiently; but the
algorithm implemented should be the same. Currently the algorithm
is fully specified only by the source code [15].
Note: the issubset and issuperset methods found on the
set type in Python 2 are not supported, as these are mostly just
aliases for __le__ and __ge__.
MutableSet: This is a subclass of Set implementing additional
operations to add and remove elements. The supported methods have the
semantics known from the set type in Python 2 (except for
discard, which is modeled after Java):
.add(x): Abstract method returning a bool that adds the element
x if it isn’t already in the set. It should return
True if x was added, False if it was already
there. The abstract implementation raises
NotImplementedError.
.discard(x): Abstract method returning a bool that removes the element
x if present. It should return True if the element
was present and False if it wasn’t. The abstract
implementation raises NotImplementedError.
.pop(): Concrete method that removes and returns an arbitrary item.
If the set is empty, it raises KeyError. The default
implementation removes the first item returned by the set’s
iterator.
.toggle(x): Concrete method returning a bool that adds x to the set if
it wasn’t there, but removes it if it was there. It should
return True if x was added, False if it was
removed.
.clear(): Concrete method that empties the set. The default
implementation repeatedly calls self.pop() until
KeyError is caught. (Note: this is likely much slower
than simply creating a new set, even if an implementation
overrides it with a faster approach; but in some cases object
identity is important.)
This also supports the in-place mutating operations |=,
&=, ^=, -=. These are concrete methods whose right
operand can be an arbitrary Iterable, except for &=, whose
right operand must be a Container. This ABC does not provide
the named methods present on the built-in concrete set type
that perform (almost) the same operations.
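As a hedged sketch of how little a concrete mutable set must supply,
here is a minimal implementation written against collections.abc.MutableSet
as it later shipped; note that in the shipped version add() and discard()
return None and there is no toggle(), so the sketch uses only the parts
that did land:

from collections.abc import MutableSet

class ListBackedSet(MutableSet):
    """A set stored in a list; the mixin methods come from the ABC."""
    def __init__(self, iterable=()):
        self._items = []
        for x in iterable:
            self.add(x)
    def __contains__(self, x):
        return x in self._items
    def __iter__(self):
        return iter(self._items)
    def __len__(self):
        return len(self._items)
    def add(self, x):
        if x not in self._items:
            self._items.append(x)
    def discard(self, x):
        if x in self._items:
            self._items.remove(x)

s = ListBackedSet([1, 2, 2, 3])
s |= [3, 4]                     # in-place union: a concrete mixin operator
assert sorted(s) == [1, 2, 3, 4]
s.discard(2)
assert 2 not in s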
Mappings
These abstract classes represent read-only mappings and mutable
mappings. The Mapping class represents the most common read-only
mapping API.
The built-in type dict derives from MutableMapping.
Mapping: A subclass of Container, Iterable and Sized. The keys
of a mapping naturally form a set. The (key, value) pairs (which
must be tuples) are also referred to as items. The items also
form a set. Methods:
.__getitem__(key): Abstract method that returns the value corresponding to
key, or raises KeyError. The implementation always
raises KeyError.
.get(key, default=None): Concrete method returning self[key] if this does not raise
KeyError, and the default value if it does.
.__contains__(key): Concrete method returning True if self[key] does not
raise KeyError, and False if it does.
.__len__(): Abstract method returning the number of distinct keys (i.e.,
the length of the key set).
.__iter__(): Abstract method returning each key in the key set exactly once.
.keys(): Concrete method returning the key set as a Set. The
default concrete implementation returns a “view” on the key
set (meaning if the underlying mapping is modified, the view’s
value changes correspondingly); subclasses are not required to
return a view but they should return a Set.
.items(): Concrete method returning the items as a Set. The default
concrete implementation returns a “view” on the item set;
subclasses are not required to return a view but they should
return a Set.
.values(): Concrete method returning the values as a sized, iterable
container (not a set!). The default concrete implementation
returns a “view” on the values of the mapping; subclasses are
not required to return a view but they should return a sized,
iterable container.
The following invariants should hold for any mapping m:
len(m.values()) == len(m.keys()) == len(m.items()) == len(m)
[value for value in m.values()] == [m[key] for key in m.keys()]
[item for item in m.items()] == [(key, m[key]) for key in m.keys()]
i.e. iterating over the items, keys and values should return
results in the same order.
MutableMapping: A subclass of Mapping that also implements some standard
mutating methods. Abstract methods include __setitem__,
__delitem__. Concrete methods include pop, popitem,
clear, update. Note: setdefault is not included.
Open issues: Write out the specs for the methods.
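A hedged sketch of a minimal read-only mapping, using
collections.abc.Mapping as it later shipped: only the three abstract
methods are written out, and get(), __contains__(), keys(), items() and
values() arrive as the concrete mixin methods described above:

from collections.abc import Mapping

class PairMapping(Mapping):
    """A read-only mapping backed by a list of (key, value) pairs."""
    def __init__(self, pairs):
        self._pairs = list(pairs)
    def __getitem__(self, key):
        for k, v in self._pairs:
            if k == key:
                return v
        raise KeyError(key)
    def __iter__(self):
        return (k for k, v in self._pairs)
    def __len__(self):
        return len(self._pairs)

m = PairMapping([('spam', 1), ('eggs', 2)])
assert m.get('spam') == 1 and m.get('ham', 0) == 0
assert 'eggs' in m and set(m.keys()) == {'spam', 'eggs'}
assert sorted(m.items()) == [('eggs', 2), ('spam', 1)]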
Sequences
These abstract classes represent read-only sequences and mutable
sequences.
The built-in list and bytes types derive from
MutableSequence. The built-in tuple and str types derive
from Sequence and Hashable.
Sequence: A subclass of Iterable, Sized, Container. It
defines a new abstract method __getitem__ that has a somewhat
complicated signature: when called with an integer, it returns an
element of the sequence or raises IndexError; when called with
a slice object, it returns another Sequence. The concrete
__iter__ method iterates over the elements using
__getitem__ with integer arguments 0, 1, and so on, until
IndexError is raised. The length should be equal to the
number of values returned by the iterator.
Open issues: Other candidate methods, which can all have
default concrete implementations that only depend on __len__
and __getitem__ with an integer argument: __reversed__,
index, count, __add__, __mul__.
MutableSequence: A subclass of Sequence adding some standard
mutating methods. Abstract mutating methods: __setitem__ (for
integer indices as well as slices), __delitem__ (ditto),
insert. Concrete mutating methods: append, reverse,
extend, pop, remove. Concrete mutating operators: += and
*= (these mutate the object in place). Note: this does not define
sort() – that is only required to exist on genuine list
instances.
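A hedged sketch of the Sequence mixins, using collections.abc.Sequence
as it later shipped: only __getitem__ and __len__ are supplied, and
iteration, containment and index() come from the concrete mixin methods:

from collections.abc import Sequence

class Squares(Sequence):
    """The first n square numbers, computed on demand."""
    def __init__(self, n):
        self._n = n
    def __len__(self):
        return self._n
    def __getitem__(self, i):
        if isinstance(i, slice):
            return [self[j] for j in range(*i.indices(self._n))]
        if not 0 <= i < self._n:
            raise IndexError(i)
        return i * i

sq = Squares(5)
assert list(sq) == [0, 1, 4, 9, 16]    # __iter__ mixin drives __getitem__
assert 9 in sq and sq.index(16) == 4   # __contains__ / index mixins
assert sq[1:4] == [1, 4, 9]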
Strings
Python 3000 will likely have at least two built-in string types: byte
strings (bytes), deriving from MutableSequence, and (Unicode)
character strings (str), deriving from Sequence and
Hashable.
Open issues: define the base interfaces for these so alternative
implementations and subclasses know what they are in for. This may be
the subject of a new PEP or PEPs (PEP 358 should be co-opted for the
bytes type).
ABCs vs. Alternatives
In this section I will attempt to compare and contrast ABCs to other
approaches that have been proposed.
ABCs vs. Duck Typing
Does the introduction of ABCs mean the end of Duck Typing? I don’t
think so. Python will not require that a class derives from
BasicMapping or Sequence when it defines a __getitem__
method, nor will the x[y] syntax require that x is an instance
of either ABC. You will still be able to assign any “file-like”
object to sys.stdout, as long as it has a write method.
Of course, there will be some carrots to encourage users to derive
from the appropriate base classes; these vary from default
implementations for certain functionality to an improved ability to
distinguish between mappings and sequences. But there are no sticks.
If hasattr(x, "__len__") works for you, great! ABCs are intended to
solve problems that don’t have a good solution at all in Python 2,
such as distinguishing between mappings and sequences.
ABCs vs. Generic Functions
ABCs are compatible with Generic Functions (GFs). For example, my own
Generic Functions implementation [4] uses the classes (types) of the
arguments as the dispatch key, allowing derived classes to override
base classes. Since (from Python’s perspective) ABCs are quite
ordinary classes, using an ABC in the default implementation for a GF
can be quite appropriate. For example, if I have an overloaded
prettyprint function, it would make total sense to define
pretty-printing of sets like this:
@prettyprint.register(Set)
def pp_set(s):
    return "{" + ... + "}"  # Details left as an exercise
and implementations for specific subclasses of Set could be added
easily.
I believe ABCs also won’t present any problems for RuleDispatch,
Phillip Eby’s GF implementation in PEAK [5].
Of course, GF proponents might claim that GFs (and concrete, or
implementation, classes) are all you need. But even they will not
deny the usefulness of inheritance; and one can easily consider the
ABCs proposed in this PEP as optional implementation base classes;
there is no requirement that all user-defined mappings derive from
BasicMapping.
ABCs vs. Interfaces
ABCs are not intrinsically incompatible with Interfaces, but there is
considerable overlap. For now, I’ll leave it to proponents of
Interfaces to explain why Interfaces are better. I expect that much
of the work that went into e.g. defining the various shades of
“mapping-ness” and the nomenclature could easily be adapted for a
proposal to use Interfaces instead of ABCs.
“Interfaces” in this context refers to a set of proposals for
additional metadata elements attached to a class which are not part of
the regular class hierarchy, but do allow for certain types of
inheritance testing.
Such metadata would be designed, at least in some proposals, so as to
be easily mutable by an application, allowing application writers to
override the normal classification of an object.
The drawback to this idea of attaching mutable metadata to a class is
that classes are shared state, and mutating them may lead to conflicts
of intent. Additionally, the need to override the classification of
an object can be done more cleanly using generic functions: In the
simplest case, one can define a “category membership” generic function
that simply returns False in the base implementation, and then provide
overrides that return True for any classes of interest.
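As a hedged sketch, such a “category membership” function can be written
with functools.singledispatch, a generic-function mechanism that entered
the standard library later (any GF implementation would serve equally
well):

from functools import singledispatch

@singledispatch
def is_mapping_like(obj):
    return False              # base implementation: not in the category

@is_mapping_like.register(dict)
def _(obj):
    return True               # opt dict (and subclasses) into the category

assert is_mapping_like({}) is True
assert is_mapping_like([]) is False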
References
[1]
An Introduction to ABC’s, by Talin
(https://mail.python.org/pipermail/python-3000/2007-April/006614.html)
[2] Incomplete implementation prototype, by GvR
(https://web.archive.org/web/20170223133820/http://svn.python.org/view/sandbox/trunk/abc/)
[3] Possible Python 3K Class Tree?, wiki page created by Bill Janssen
(https://wiki.python.org/moin/AbstractBaseClasses)
[4]
Generic Functions implementation, by GvR
(https://web.archive.org/web/20170223135019/http://svn.python.org/view/sandbox/trunk/overload/)
[5]
Charming Python: Scaling a new PEAK, by David Mertz
(https://web.archive.org/web/20070515125102/http://www-128.ibm.com/developerworks/library/l-cppeak2/)
[6]
Implementation of @abstractmethod
(https://github.com/python/cpython/issues/44895)
[7]
Unifying types and classes in Python 2.2, by GvR
(https://www.python.org/download/releases/2.2.3/descrintro/)
[8]
Putting Metaclasses to Work: A New Dimension in Object-Oriented
Programming, by Ira R. Forman and Scott H. Danforth
(https://archive.org/details/PuttingMetaclassesToWork)
[9] Partial order, in Wikipedia
(https://en.wikipedia.org/wiki/Partial_order)
[10] Total order, in Wikipedia
(https://en.wikipedia.org/wiki/Total_order)
[11]
Finite set, in Wikipedia
(https://en.wikipedia.org/wiki/Finite_set)
[12]
Make isinstance/issubclass overloadable
(https://bugs.python.org/issue1708353)
[13]
ABCMeta sample implementation
(https://web.archive.org/web/20170224195724/http://svn.python.org/view/sandbox/trunk/abc/xyz.py)
[14]
python-dev email (“Comparing heterogeneous types”)
(https://mail.python.org/pipermail/python-dev/2004-June/045111.html)
[15]
Function frozenset_hash() in Object/setobject.c
(https://web.archive.org/web/20170224204758/http://svn.python.org/view/python/trunk/Objects/setobject.c)
[16]
Multiple interpreters in mod_python
(https://web.archive.org/web/20070515132123/http://www.modpython.org/live/current/doc-html/pyapi-interps.html)
Copyright
This document has been placed in the public domain.
PEP 3121 – Extension Module Initialization and Finalization
Author:
Martin von Löwis <martin at v.loewis.de>
Status:
Final
Type:
Standards Track
Created:
27-Apr-2007
Python-Version:
3.0
Post-History:
Table of Contents
Abstract
Problems
Module Finalization
Entry point name conflicts
Entry point signature
Multiple Interpreters
Specification
Example
Discussion
References
Copyright
Important
This PEP is a historical document. The up-to-date, canonical documentation can now be found at PyInit_modulename() and
PyModuleDef.
See PEP 1 for how to propose changes.
Abstract
Extension module initialization currently has a few deficiencies.
There is no cleanup for modules, the entry point name might give
naming conflicts, the entry functions don’t follow the usual calling
convention, and multiple interpreters are not supported well. This PEP
addresses these issues.
Problems
Module Finalization
Currently, extension modules are initialized usually once and then
“live” forever. The only exception is when Py_Finalize() is called:
then the initialization routine is invoked a second time. This is bad
from a resource management point of view: memory and other resources
might get allocated each time initialization is called, but there is
no way to reclaim them. As a result, there is currently no way to
completely release all resources Python has allocated.
Entry point name conflicts
The entry point is currently called init<module>. This might conflict
with other symbols also called init<something>. In particular,
initsocket is known to have conflicted in the past (this specific
problem got resolved as a side effect of renaming the module to
_socket).
Entry point signature
The entry point is currently a procedure (returning void). This
deviates from the usual calling conventions; callers can find out
whether there was an error during initialization only by checking
PyErr_Occurred. The entry point should return a PyObject*, which will
be the module created, or NULL in case of an exception.
Multiple Interpreters
Currently, extension modules share their state across all
interpreters. This allows for undesirable information leakage across
interpreters: one script could permanently corrupt objects in an
extension module, possibly breaking all scripts in other interpreters.
Specification
The module initialization routines change their signature
to:
PyObject *PyInit_<modulename>()
The initialization routine will be invoked once per
interpreter, when the module is imported. It should
return a new module object each time.
In order to store per-module state in C variables,
each module object will contain a block of memory
that is interpreted only by the module. The amount
of memory used for the module is specified at
the point of creation of the module.
In addition to the initialization function, a module
may implement a number of additional callback
functions, which are invoked when the module’s
tp_traverse, tp_clear, and tp_free functions are
invoked, and when the module is reloaded.
The entire module definition is combined in a struct
PyModuleDef:
struct PyModuleDef{
    PyModuleDef_Base m_base;  /* To be filled out by the interpreter */
    Py_ssize_t m_size;        /* Size of per-module data */
    PyMethodDef *m_methods;
    inquiry m_reload;
    traverseproc m_traverse;
    inquiry m_clear;
    freefunc m_free;
};
Creation of a module is changed to expect an optional
PyModuleDef*. The module state will be
null-initialized.
Each module method will be passed the module object
as the first parameter. To access the module data,
a function:
void* PyModule_GetState(PyObject*);
will be provided. In addition, to lookup a module
more efficiently than going through sys.modules,
a function:
PyObject* PyState_FindModule(struct PyModuleDef*);
will be provided. This lookup function will use an
index located in the m_base field, to find the
module by index, not by name.
As all Python objects should be controlled through
the Python memory management, usage of “static”
type objects is discouraged, unless the type object
itself has no memory-managed state. To simplify
definition of heap types, a new method:
PyTypeObject* PyType_Copy(PyTypeObject*);
is added.
Example
xxmodule.c would be changed to remove the initxx
function, and add the following code instead:
struct xxstate{
PyObject *ErrorObject;
PyObject *Xxo_Type;
};
#define xxstate(o) ((struct xxstate*)PyModule_GetState(o))
static int xx_traverse(PyObject *m, visitproc v,
void *arg)
{
Py_VISIT(xxstate(m)->ErrorObject);
Py_VISIT(xxstate(m)->Xxo_Type);
return 0;
}
static int xx_clear(PyObject *m)
{
Py_CLEAR(xxstate(m)->ErrorObject);
Py_CLEAR(xxstate(m)->Xxo_Type);
return 0;
}
static struct PyModuleDef xxmodule = {
{}, /* m_base */
sizeof(struct xxstate),
&xx_methods,
0, /* m_reload */
xx_traverse,
xx_clear,
0, /* m_free - not needed, since all is done in m_clear */
};
PyObject*
PyInit_xx()
{
PyObject *res = PyModule_New("xx", &xxmodule);
if (!res) return NULL;
xxstate(res)->ErrorObject = PyErr_NewException("xx.error", NULL, NULL);
if (!xxstate(res)->ErrorObject) {
Py_DECREF(res);
return NULL;
}
xxstate(res)->Xxo_Type = PyType_Copy(&Xxo_Type);
if (!xxstate(res)->Xxo_Type) {
Py_DECREF(res);
return NULL;
}
return res;
}
Discussion
Tim Peters reports in [1] that PythonLabs considered such a feature
at one point, and lists the following additional hooks which aren’t
currently supported in this PEP:
when the module object is deleted from sys.modules
when Py_Finalize is called
when Python exits
when the Python DLL is unloaded (Windows only)
References
[1]
Tim Peters, reporting earlier conversation about such a feature
https://mail.python.org/pipermail/python-3000/2006-April/000726.html
Copyright
This document has been placed in the public domain.
| Final | PEP 3121 – Extension Module Initialization and Finalization | Standards Track | Extension module initialization currently has a few deficiencies.
There is no cleanup for modules, the entry point name might give
naming conflicts, the entry functions don’t follow the usual calling
convention, and multiple interpreters are not supported well. This PEP
addresses these issues. |
PEP 3122 – Delineation of the main module
Author:
Brett Cannon
Status:
Rejected
Type:
Standards Track
Created:
27-Apr-2007
Post-History:
Table of Contents
Abstract
The Problem
The Solution
Implementation
Transition Plan
Rejected Ideas
__main__ built-in
__main__ module attribute
Use __file__ instead of __name__
Special string subclass for __name__ that overrides __eq__
References
Copyright
Attention
This PEP has been rejected. Guido views running scripts within a
package as an anti-pattern [3].
Abstract
Because of how name resolution works for relative imports in a world
where PEP 328 is implemented, the ability to execute modules within a
package ceases being possible. This failing stems from the fact that
the module being executed as the “main” module replaces its
__name__ attribute with "__main__" instead of leaving it as
the absolute name of the module. This breaks import’s ability
to resolve relative imports from the main module into absolute names.
In order to resolve this issue, this PEP proposes to change how the
main module is delineated. By leaving the __name__ attribute in
a module alone and setting sys.main to the name of the main
module this will allow at least some instances of executing a module
within a package that uses relative imports.
This PEP does not address the idea of introducing a module-level
function that is automatically executed like PEP 299 proposes.
The Problem
With the introduction of PEP 328, relative imports became dependent on
the __name__ attribute of the module performing the import. This
is because the use of dots in a relative import are used to strip away
parts of the calling module’s name to calculate where in the package
hierarchy an import should fall (prior to PEP 328 relative
imports could fail and would fall back on absolute imports which had a
chance of succeeding).
For instance, consider the import from .. import spam made from the
bacon.ham.beans module (bacon.ham.beans is not a package
itself, i.e., does not define __path__). Name resolution of the
relative import takes the caller’s name (bacon.ham.beans), splits
on dots, and then slices off the last n parts based on the level
(which is 2). In this example both ham and beans are dropped
and spam is joined with what is left (bacon). This leads to
the proper import of the module bacon.spam.
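As a rough sketch only (the helper name resolve_relative is invented
for illustration and is not part of the import machinery), the
resolution step described above amounts to:
def resolve_relative(caller_name, target, level):
    # caller_name is e.g. "bacon.ham.beans"; level is the number of dots
    parts = caller_name.split(".")
    base = ".".join(parts[:-level])   # drop the last `level` components
    return base + "." + target if target else base
resolve_relative("bacon.ham.beans", "spam", 2)   # -> "bacon.spam"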
This reliance on the __name__ attribute of a module when handling
relative imports becomes an issue when executing a script within a
package. Because the executing script has its name set to
'__main__', import cannot resolve any relative imports, leading to
an ImportError.
For example, assume we have a package named bacon with an
__init__.py file containing:
from . import spam
Also create a module named spam within the bacon package (it
can be an empty file). Now if you try to execute the bacon
package (either through python bacon/__init__.py or
python -m bacon) you will get an ImportError about trying to
do a relative import from within a non-package. Obviously the import
is valid, but because of the setting of __name__ to '__main__'
import thinks that bacon/__init__.py is not in a package since no
dots exist in __name__. To see how the algorithm works in more
detail, see importlib.Import._resolve_name() in the sandbox
[2].
Currently a work-around is to remove all relative imports in the
module being executed and make them absolute. This is unfortunate,
though, as one should not be required to use a specific type of
resource in order to make a module in a package executable.
The Solution
The solution to the problem is to not change the value of __name__
in modules. But there still needs to be a way to let executing code
know it is being executed as a script. This is handled with a new
attribute in the sys module named main.
When a module is being executed as a script, sys.main will be set
to the name of the module. This changes the current idiom of:
if __name__ == '__main__':
...
to:
import sys
if __name__ == sys.main:
...
The newly proposed solution does introduce an added line of
boilerplate which is a module import. But as the solution does not
introduce a new built-in or module attribute (as discussed in
Rejected Ideas) it has been deemed worth the extra line.
Another issue with the proposed solution (which applies to all of the
rejected ideas as well) is that it does not directly solve the problem
of discovering the name of a file. Consider python bacon/spam.py.
By the file name alone it is not obvious whether bacon is a
package. In order to properly find this out, both the current
directory must be on sys.path and bacon/__init__.py must exist.
But this is the simple example. Consider python ../spam.py. From
the file name alone it is not at all clear if spam.py is in a
package or not. One possible solution is to find out what the
absolute name of .. is, check if a file named __init__.py exists,
and then look if the directory is on sys.path. If it is not, then
continue to walk up the directory until no more __init__.py files
are found or the directory is found on sys.path.
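A sketch of that walk, assuming only os and sys (the name
find_package_anchor is invented for this illustration, not a proposed
API):
import os
import sys

def find_package_anchor(script_path):
    directory = os.path.dirname(os.path.abspath(script_path))
    # Keep walking up while the directory still looks like part of a package.
    while os.path.exists(os.path.join(directory, "__init__.py")):
        parent = os.path.dirname(directory)
        if parent in sys.path:
            return parent        # the package is anchored on sys.path here
        if parent == directory:  # reached the filesystem root
            break
        directory = parent
    return None                  # no anchor found; not run from within a package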
This could potentially be an expensive process. If the package depth
happens to be deep then it could require a large amount of disk access
to discover where the package is anchored on sys.path, if at all.
The stat calls alone can be expensive if the file system the executed
script is on is something like NFS.
Because of these issues, only when the -m command-line argument
(introduced by PEP 338) is used will __name__ be set. Otherwise
the fallback semantics of setting __name__ to "__main__" will
occur. sys.main will still be set to the proper value,
regardless of what __name__ is set to.
Implementation
When the -m option is used, sys.main will be set to the
argument passed in. sys.argv will be adjusted as it is currently.
Then the equivalent of __import__(self.main) will occur. This
differs from current semantics as the runpy module fetches the
code object for the file specified by the module name in order to
explicitly set __name__ and other attributes. This is no longer
needed as import can perform its normal operation in this situation.
If a file name is specified, then sys.main will be set to
"__main__". The specified file will then be read and have a code
object created and then be executed with __name__ set to
"__main__". This mirrors current semantics.
Transition Plan
In order for Python 2.6 to be able to support both the current
semantics and the proposed semantics, sys.main will always be set
to "__main__". Otherwise no change will occur for Python 2.6.
This unfortunately means that no benefit from this change will occur
in Python 2.6, but it maximizes compatibility for code that is to
work as much as possible with 2.6 and 3.0.
To help transition to the new idiom, 2to3 [1] will gain a rule to
transform the current if __name__ == '__main__': ... idiom to the
new one. This will not help with code that checks __name__
outside of the idiom, though.
Rejected Ideas
__main__ built-in
A counter-proposal to introduce a built-in named __main__.
The value of the built-in would be the name of the module being
executed (just like the proposed sys.main). This would lead to a
new idiom of:
if __name__ == __main__:
...
A drawback is that the syntactic difference is subtle; the dropping
of quotes around “__main__”. Some believe that for existing Python
programmers bugs will be introduced where the quotation marks will be
put on by accident. But one could argue that the bug would be
discovered quickly through testing as it is a very shallow bug.
While the name of built-in could obviously be different (e.g.,
main) the other drawback is that it introduces a new built-in.
With a simple solution such as sys.main being possible without
adding another built-in to Python, this proposal was rejected.
__main__ module attribute
Another proposal was to add a __main__ attribute to every module.
For the one that was executing as the main module, the attribute would
have a true value while all other modules had a false value. This has
the nice consequence of simplifying the main module idiom to:
if __main__:
...
The drawback was the introduction of a new module attribute. It also
required more integration with the import machinery than the proposed
solution.
Use __file__ instead of __name__
Any of the proposals could be changed to use the __file__
attribute on modules instead of __name__, including the current
semantics. The problem with this is that with the proposed solutions
there is the issue of modules having no __file__ attribute defined
or having the same value as other modules.
The problem that comes up with the current semantics is you still have
to try to resolve the file path to a module name for the import to
work.
Special string subclass for __name__ that overrides __eq__
One proposal was to define a subclass of str that overrode the
__eq__ method so that it would compare equal to "__main__" as
well as the actual name of the module. In all other respects the
subclass would be the same as str.
This was rejected as it seemed like too much of a hack.
References
[1]
2to3 tool
(http://svn.python.org/view/sandbox/trunk/2to3/) [ViewVC]
[2]
importlib
(http://svn.python.org/view/sandbox/trunk/import_in_py/importlib.py?view=markup)
[ViewVC]
[3]
Python-Dev email: “PEP to change how the main module is delineated”
(https://mail.python.org/pipermail/python-3000/2007-April/006793.html)
Copyright
This document has been placed in the public domain.
| Rejected | PEP 3122 – Delineation of the main module | Standards Track | Because of how name resolution works for relative imports in a world
where PEP 328 is implemented, the ability to execute modules within a
package ceases being possible. This failing stems from the fact that
the module being executed as the “main” module replaces its
__name__ attribute with "__main__" instead of leaving it as
the absolute name of the module. This breaks import’s ability
to resolve relative imports from the main module into absolute names. |
PEP 3123 – Making PyObject_HEAD conform to standard C
Author:
Martin von Löwis <martin at v.loewis.de>
Status:
Final
Type:
Standards Track
Created:
27-Apr-2007
Python-Version:
3.0
Post-History:
Table of Contents
Abstract
Rationale
Specification
Compatibility with Python 2.6
Copyright
Abstract
Python currently relies on undefined C behavior, with its
usage of PyObject_HEAD. This PEP proposes to change that
into standard C.
Rationale
Standard C defines that an object must be accessed only through a
pointer of its type, and that all other accesses are undefined
behavior, with a few exceptions. In particular, the following
code has undefined behavior:
struct FooObject{
PyObject_HEAD
int data;
};
PyObject *foo(struct FooObject*f){
return (PyObject*)f;
}
int bar(){
struct FooObject *f = malloc(sizeof(struct FooObject));
PyObject *o = foo(f);
f->ob_refcnt = 0;
o->ob_refcnt = 1;
return f->ob_refcnt;
}
The problem here is that the storage is accessed both as
if it were struct PyObject, and as struct FooObject.
Historically, compilers did not have any problems with this
code. However, modern compilers use that clause as an
optimization opportunity, finding that f->ob_refcnt and
o->ob_refcnt cannot possibly refer to the same memory, and
that therefore the function should return 0, without having
to fetch the value of ob_refcnt at all in the return
statement. For GCC, Python now uses -fno-strict-aliasing
to work around that problem; with other compilers, it
may just see undefined behavior. Even with GCC, using
-fno-strict-aliasing may pessimize the generated code
unnecessarily.
Specification
Standard C has one specific exception to its aliasing rules precisely
designed to support the case of Python: a value of a struct type may
also be accessed through a pointer to the first field. E.g. if a
struct starts with an int, the struct * may also be cast to
an int *, allowing int values to be written into the first field.
For Python, PyObject_HEAD and PyObject_VAR_HEAD will be changed
to not list all fields anymore, but list a single field of type
PyObject/PyVarObject:
typedef struct _object {
_PyObject_HEAD_EXTRA
Py_ssize_t ob_refcnt;
struct _typeobject *ob_type;
} PyObject;
typedef struct {
PyObject ob_base;
Py_ssize_t ob_size;
} PyVarObject;
#define PyObject_HEAD PyObject ob_base;
#define PyObject_VAR_HEAD PyVarObject ob_base;
Types defined as a fixed-size structure will then include PyObject
as their first field, and PyVarObject for variable-sized objects. E.g.:
typedef struct {
PyObject ob_base;
PyObject *start, *stop, *step;
} PySliceObject;
typedef struct {
PyVarObject ob_base;
PyObject **ob_item;
Py_ssize_t allocated;
} PyListObject;
The above definitions of PyObject_HEAD are normative, so extension
authors MAY either use the macro, or put the ob_base field explicitly
into their structs.
As a convention, the base field SHOULD be called ob_base. However, all
accesses to ob_refcnt and ob_type MUST cast the object pointer to
PyObject* (unless the pointer is already known to have that type), and
SHOULD use the respective accessor macros. To simplify access to
ob_type, ob_refcnt, and ob_size, macros:
#define Py_TYPE(o) (((PyObject*)(o))->ob_type)
#define Py_REFCNT(o) (((PyObject*)(o))->ob_refcnt)
#define Py_SIZE(o) (((PyVarObject*)(o))->ob_size)
are added. E.g. the code blocks
#define PyList_CheckExact(op) ((op)->ob_type == &PyList_Type)
return func->ob_type->tp_name;
needs to be changed to:
#define PyList_CheckExact(op) (Py_TYPE(op) == &PyList_Type)
return Py_TYPE(func)->tp_name;
For initialization of type objects, the current sequence
PyObject_HEAD_INIT(NULL)
0, /* ob_size */
becomes incorrect, and must be replaced with
PyVarObject_HEAD_INIT(NULL, 0)
Compatibility with Python 2.6
To support modules that compile with both Python 2.6 and Python 3.0,
the Py_* macros are added to Python 2.6. The macros Py_INCREF
and Py_DECREF will be changed to cast their argument to PyObject *,
so that module authors can also explicitly declare the ob_base
field in modules designed for Python 2.6.
Copyright
This document has been placed in the public domain.
| Final | PEP 3123 – Making PyObject_HEAD conform to standard C | Standards Track | Python currently relies on undefined C behavior, with its
usage of PyObject_HEAD. This PEP proposes to change that
into standard C. |
PEP 3125 – Remove Backslash Continuation
Author:
Jim J. Jewett <JimJJewett at gmail.com>
Status:
Rejected
Type:
Standards Track
Created:
29-Apr-2007
Post-History:
29-Apr-2007, 30-Apr-2007, 04-May-2007
Table of Contents
Rejection Notice
Abstract
Motivation
Existing Line Continuation Methods
Parenthetical Expression - ([{}])
Triple-Quoted Strings
Terminal \ in the general case
Terminal \ within a string
Alternate Proposals
Open Issues
References
Copyright
Rejection Notice
This PEP is rejected. There wasn’t enough support in favor, the
feature to be removed isn’t all that harmful, and there are some use
cases that would become harder.
Abstract
Python initially inherited its parsing from C. While this has been
generally useful, there are some remnants which have been less useful
for Python, and should be eliminated.
This PEP proposes elimination of terminal \ as a marker for line
continuation.
Motivation
One goal for Python 3000 should be to simplify the language by
removing unnecessary or duplicated features. There are currently
several ways to indicate that a logical line is continued on the
following physical line.
The other continuation methods are easily explained as a logical
consequence of the semantics they provide; \ is simply an escape
character that needs to be memorized.
Existing Line Continuation Methods
Parenthetical Expression - ([{}])
Open a parenthetical expression. It doesn’t matter whether people
view the “line” as continuing; they do immediately recognize that the
expression needs to be closed before the statement can end.
Examples using each of (), [], and {}:
def fn(long_argname1,
long_argname2):
settings = {"background": "random noise",
"volume": "barely audible"}
restrictions = ["Warrantee void if used",
"Notice must be received by yesterday",
"Not responsible for sales pitch"]
Note that it is always possible to parenthesize an expression, but it
can seem odd to parenthesize an expression that needs parentheses only
for the line break:
assert val>4, (
"val is too small")
Triple-Quoted Strings
Open a triple-quoted string; again, people recognize that the string
needs to finish before the next statement starts.
banner_message = """
Satisfaction Guaranteed,
or DOUBLE YOUR MONEY BACK!!!
some minor restrictions apply"""
Terminal \ in the general case
A terminal \ indicates that the logical line is continued on the
following physical line (after whitespace). There are no particular
semantics associated with this. This form is never required, although
it may look better (particularly for people with a C language
background) in some cases:
>>> assert val>4, \
"val is too small"
Also note that the \ must be the final character in the line. If
your editor navigation can add whitespace to the end of a line, that
invisible change will alter the semantics of the program.
Fortunately, the typical result is only a syntax error, rather than a
runtime bug:
>>> assert val>4, \
"val is too small"
SyntaxError: unexpected character after line continuation character
This PEP proposes to eliminate this redundant and potentially
confusing alternative.
Terminal \ within a string
A terminal \ within a single-quoted string, at the end of the
line. This is arguably a special case of the terminal \, but it
is a special case that may be worth keeping.
>>> "abd\
def"
'abd def'
Pro: Many of the objections to removing \ termination were
really just objections to removing it within literal strings;
several people clarified that they want to keep this literal-string
usage, but don’t mind losing the general case.
Pro: The use of \ for an escape character within strings is well
known.
Contra: But note that this particular usage is odd, because the
escaped character (the newline) is invisible, and the special
treatment is to delete the character. That said, the \ of
\(newline) is still an escape which changes the meaning of the
following character.
Alternate Proposals
Several people have suggested alternative ways of marking the line
end. Most of these were rejected for not actually simplifying things.
The one exception was to let any unfinished expression signify a line
continuation, possibly in conjunction with increased indentation.
This is attractive because it is a generalization of the rule for
parentheses.
The initial objections to this were:
The amount of whitespace may be contentious; expression continuation
should not be confused with opening a new suite.
The “expression continuation” markers are not as clearly marked in
Python as the grouping punctuation “(), [], {}” marks are:
# Plus needs another operand, so the line continues
"abc" +
"def"
# String ends an expression, so the line does not
# continue. The next line is a syntax error because
# unary plus does not apply to strings.
"abc"
+ "def"
Guido objected for technical reasons. [1] The most obvious
implementation would require allowing INDENT or DEDENT tokens
anywhere, or at least in a widely expanded (and ill-defined) set of
locations. While this is of concern only for the internal parsing
mechanism (rather than for users), it would be a major new source of
complexity.
Andrew Koenig then pointed out [2] a better implementation
strategy, and said that it had worked quite well in other
languages. [3] The improved suggestion boiled down to:
The whitespace that follows an (operator or) open bracket or
parenthesis can include newline characters.
It would be implemented at a very low lexical level – even before
the decision is made to turn a newline followed by spaces into an
INDENT or DEDENT token.
There is still some concern that it could mask bugs, as in this
example [4]:
# Used to be y+1, the 1 got dropped. Syntax Error (today)
# would become nonsense.
x = y+
f(x)
Requiring that the continuation be indented more than the initial line
would add both safety and complexity.
Open Issues
Should \-continuation be removed even inside strings?
Should the continuation markers be expanded from just ([{}]) to
include lines ending with an operator?
As a safety measure, should the continuation line be required to be
more indented than the initial line?
References
[1]
(email subject) PEP 30XZ: Simplified Parsing, van Rossum
https://mail.python.org/pipermail/python-3000/2007-April/007063.html
[2]
(email subject) PEP 3125 – remove backslash
continuation, Koenig
https://mail.python.org/pipermail/python-3000/2007-May/007237.html
[3]
The Snocone Programming Language, Koenig
http://www.snobol4.com/report.htm
[4]
(email subject) PEP 3125 – remove backslash
continuation, van Rossum
https://mail.python.org/pipermail/python-3000/2007-May/007244.html
Copyright
This document has been placed in the public domain.
| Rejected | PEP 3125 – Remove Backslash Continuation | Standards Track | Python initially inherited its parsing from C. While this has been
generally useful, there are some remnants which have been less useful
for Python, and should be eliminated. |
PEP 3126 – Remove Implicit String Concatenation
Author:
Jim J. Jewett <JimJJewett at gmail.com>,
Raymond Hettinger <python at rcn.com>
Status:
Rejected
Type:
Standards Track
Created:
29-Apr-2007
Post-History:
29-Apr-2007, 30-Apr-2007, 07-May-2007
Table of Contents
Rejection Notice
Abstract
Motivation
History or Future
Problem
Solution
Concerns
Operator Precedence
Long Commands
Regular Expressions
Internationalization
Transition
Open Issues
References
Copyright
Rejection Notice
This PEP is rejected. There wasn’t enough support in favor, the
feature to be removed isn’t all that harmful, and there are some use
cases that would become harder.
Abstract
Python inherited many of its parsing rules from C. While this has
been generally useful, there are some individual rules which are less
useful for python, and should be eliminated.
This PEP proposes to eliminate implicit string concatenation based
only on the adjacency of literals.
Instead of:
"abc" "def" == "abcdef"
authors will need to be explicit, and either add the strings:
"abc" + "def" == "abcdef"
or join them:
"".join(["abc", "def"]) == "abcdef"
Motivation
One goal for Python 3000 should be to simplify the language by
removing unnecessary features. Implicit string concatenation should
be dropped in favor of existing techniques. This will simplify the
grammar and simplify a user’s mental picture of Python. The latter is
important for letting the language “fit in your head”. A large group
of current users do not even know about implicit concatenation. Of
those who do know about it, a large portion never use it or habitually
avoid it. Of those who both know about it and use it, very few could
state with confidence the implicit operator precedence and under what
circumstances it is computed when the definition is compiled versus
when it is run.
History or Future
Many Python parsing rules are intentionally compatible with C. This
is a useful default, but Special Cases need to be justified based on
their utility in Python. We should no longer assume that python
programmers will also be familiar with C, so compatibility between
languages should be treated as a tie-breaker, rather than a
justification.
In C, implicit concatenation is the only way to join strings without
using a (run-time) function call to store into a variable. In Python,
the strings can be joined (and still recognized as immutable) using
more standard Python idioms, such as + or "".join.
Problem
Implicit string concatenation leads to tuples and lists which are
shorter than they appear; this in turn can lead to confusing, or even
silent, errors. For example, given a function which accepts several
parameters, but offers a default value for some of them:
def f(fmt, *args):
print fmt % args
This looks like a valid call, but isn’t:
>>> f("User %s got a message %s",
"Bob"
"Time for dinner")
Traceback (most recent call last):
File "<pyshell#8>", line 2, in <module>
"Bob"
File "<pyshell#3>", line 2, in f
print fmt % args
TypeError: not enough arguments for format string
Calls to this function can silently do the wrong thing:
def g(arg1, arg2=None):
...
# silently transformed into the possibly very different
# g("arg1 on this linearg2 on this line", None)
g("arg1 on this line"
"arg2 on this line")
To quote Jason Orendorff [1]:
Oh. I just realized this happens a lot out here. Where I work,
we use scons, and each SConscript has a long list of filenames:
sourceFiles = [
'foo.c'
'bar.c',
#...many lines omitted...
'q1000x.c']
It’s a common mistake to leave off a comma, and then scons
complains that it can’t find ‘foo.cbar.c’. This is pretty
bewildering behavior even if you are a Python programmer,
and not everyone here is.
Solution
In Python, strings are objects and they support the __add__ operator,
so it is possible to write:
"abc" + "def"
Because these are literals, this addition can still be optimized away
by the compiler; the CPython compiler already does so.
[2]
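This is easy to verify (the snippet below is only an illustration, not
part of the proposal); on recent CPython releases the disassembly shows
a single folded constant:
import dis

dis.dis(compile('x = "abc" + "def"', "<example>", "exec"))
# The listing contains a single LOAD_CONST of 'abcdef': the addition of
# the two literal constants was performed at compile time.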
Other existing alternatives include multiline (triple-quoted) strings,
and the join method:
"""This string
extends across
multiple lines, but you may want to use something like
Textwrap.dedent
to clear out the leading spaces
and/or reformat.
"""
>>> "".join(["empty", "string", "joiner"]) == "emptystringjoiner"
True
>>> " ".join(["space", "string", "joiner"]) == "space string joiner"
True
>>> "\n".join(["multiple", "lines"]) == "multiple\nlines" == (
"""multiple
lines""")
True
Concerns
Operator Precedence
Guido indicated [2] that this change should be
handled by PEP, because there were a few edge cases with other string
operators, such as the %. (Assuming that str % stays – it may be
eliminated in favor of PEP 3101 – Advanced String Formatting.
[3])
The resolution is to use parentheses to enforce precedence – the same
solution that can be used today:
# Clearest, works today, continues to work, optimization is
# already possible.
("abc %s def" + "ghi") % var
# Already works today; precedence makes the optimization more
# difficult to recognize, but does not change the semantics.
"abc" + "def %s ghi" % var
as opposed to:
# Already fails because modulus (%) is higher precedence than
# addition (+)
("abc %s def" + "ghi" % var)
# Works today only because adjacency is higher precedence than
# modulus. This will no longer be available.
"abc %s" "def" % var
# So the 2-to-3 translator can automatically replace it with the
# (already valid):
("abc %s" + "def") % var
Long Commands
… build up (what I consider to be) readable SQL queries [4]:
rows = self.executesql("select cities.city, state, country"
" from cities, venues, events, addresses"
" where cities.city like %s"
" and events.active = 1"
" and venues.address = addresses.id"
" and addresses.city = cities.id"
" and events.venue = venues.id",
(city,))
Alternatives again include triple-quoted strings, +, and .join:
query="""select cities.city, state, country
from cities, venues, events, addresses
where cities.city like %s
and events.active = 1
and venues.address = addresses.id
and addresses.city = cities.id
and events.venue = venues.id"""
query=( "select cities.city, state, country"
+ " from cities, venues, events, addresses"
+ " where cities.city like %s"
+ " and events.active = 1"
+ " and venues.address = addresses.id"
+ " and addresses.city = cities.id"
+ " and events.venue = venues.id"
)
query="\n".join(["select cities.city, state, country",
" from cities, venues, events, addresses",
" where cities.city like %s",
" and events.active = 1",
" and venues.address = addresses.id",
" and addresses.city = cities.id",
" and events.venue = venues.id"])
# And yes, you *could* inline any of the above querystrings
# the same way the original was inlined.
rows = self.executesql(query, (city,))
Regular Expressions
Complex regular expressions are sometimes stated in terms of several
implicitly concatenated strings with each regex component on a
different line and followed by a comment. The plus operator can be
inserted here but it does make the regex harder to read. One
alternative is to use the re.VERBOSE option. Another alternative is
to build-up the regex with a series of += lines:
# Existing idiom which relies on implicit concatenation
r = ('a{20}' # Twenty A's
'b{5}' # Followed by Five B's
)
# Mechanical replacement
r = ('a{20}' +# Twenty A's
'b{5}' # Followed by Five B's
)
# already works today
r = '''a{20} # Twenty A's
b{5} # Followed by Five B's
''' # Compiled with the re.VERBOSE flag
# already works today
r = 'a{20}' # Twenty A's
r += 'b{5}' # Followed by Five B's
Internationalization
Some internationalization tools – notably xgettext – have already
been special-cased for implicit concatenation, but not for Python’s
explicit concatenation. [5]
These tools will fail to extract the (already legal):
_("some string" +
" and more of it")
but often have a special case for:
_("some string"
" and more of it")
It should also be possible to just use an overly long line (xgettext
limits messages to 2048 characters [7], which is less
than Python’s enforced limit) or triple-quoted strings, but these
solutions sacrifice some readability in the code:
# Lines over a certain length are unpleasant.
_("some string and more of it")
# Changing whitespace is not ideal.
_("""Some string
and more of it""")
_("Some string \
and more of it")
I do not see a good short-term resolution for this.
Transition
The proposed new constructs are already legal in current Python, and
can be used immediately.
The 2 to 3 translator can be made to mechanically change:
"str1" "str2"
("line1" #comment
"line2")
into:
("str1" + "str2")
("line1" +#comments
"line2")
If users want to use one of the other idioms, they can; as these
idioms are all already legal in python 2, the edits can be made
to the original source, rather than patching up the translator.
Open Issues
Is there a better way to support external text extraction tools, or at
least xgettext [6] in particular?
References
[1]
Implicit String Concatenation, Orendorff
https://mail.python.org/pipermail/python-ideas/2007-April/000397.html
[2] (1, 2)
Reminder: Py3k PEPs due by April, Hettinger,
van Rossum
https://mail.python.org/pipermail/python-3000/2007-April/006563.html
[3]
ps to question Re: Need help completing ABC pep,
van Rossum
https://mail.python.org/pipermail/python-3000/2007-April/006737.html
[4]
(email Subject) PEP 30XZ: Simplified Parsing, Skip,
https://mail.python.org/pipermail/python-3000/2007-May/007261.html
[5]
(email Subject) PEP 30XZ: Simplified Parsing
https://mail.python.org/pipermail/python-3000/2007-May/007305.html
[6]
GNU gettext manual
http://www.gnu.org/software/gettext/
[7]
Unix man page for xgettext – Notes section
http://www.scit.wlv.ac.uk/cgi-bin/mansec?1+xgettext
Copyright
This document has been placed in the public domain.
| Rejected | PEP 3126 – Remove Implicit String Concatenation | Standards Track | Python inherited many of its parsing rules from C. While this has
been generally useful, there are some individual rules which are less
useful for python, and should be eliminated. |
PEP 3129 – Class Decorators
Author:
Collin Winter <collinwinter at google.com>
Status:
Final
Type:
Standards Track
Created:
01-May-2007
Python-Version:
3.0
Post-History:
07-May-2007
Table of Contents
Abstract
Rationale
Semantics
Implementation
Acceptance
References
Copyright
Abstract
This PEP proposes class decorators, an extension to the function
and method decorators introduced in PEP 318.
Rationale
When function decorators were originally debated for inclusion in
Python 2.4, class decorators were seen as
obscure and unnecessary
thanks to metaclasses. After several years’ experience
with the Python 2.4.x series of releases and an increasing
familiarity with function decorators and their uses, the BDFL and
the community re-evaluated class decorators and recommended their
inclusion in Python 3.0 [1].
The motivating use-case was to make certain constructs more easily
expressed and less reliant on implementation details of the CPython
interpreter. While it is possible to express class decorator-like
functionality using metaclasses, the results are generally
unpleasant and the implementation highly fragile [2]. In
addition, metaclasses are inherited, whereas class decorators are not,
making metaclasses unsuitable for some, single class-specific uses of
class decorators. The fact that large-scale Python projects like Zope
were going through these wild contortions to achieve something like
class decorators won over the BDFL.
Semantics
The semantics and design goals of class decorators are the same as
for function decorators (PEP 318);
the only
difference is that you’re decorating a class instead of a function.
The following two snippets are semantically identical:
class A:
pass
A = foo(bar(A))
@foo
@bar
class A:
pass
For a detailed examination of decorators, please refer to PEP 318.
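As a small, made-up illustration of the registration-style use-case
from the Rationale (the names registry, register and Plugin are
invented for this example):
registry = {}

def register(cls):
    # Record the class under its name and return it unchanged.
    registry[cls.__name__] = cls
    return cls

@register
class Plugin:
    pass

assert registry == {"Plugin": Plugin}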
Implementation
Adapting Python’s grammar to support class decorators requires
modifying two rules and adding a new rule:
funcdef: [decorators] 'def' NAME parameters ['->' test] ':' suite
compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt |
with_stmt | funcdef | classdef
need to be changed to
decorated: decorators (classdef | funcdef)
funcdef: 'def' NAME parameters ['->' test] ':' suite
compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt |
with_stmt | funcdef | classdef | decorated
Adding decorated is necessary to avoid an ambiguity in the
grammar.
The Python AST and bytecode must be modified accordingly.
A reference implementation [3] has been provided by
Jack Diederich.
Acceptance
There was virtually no discussion following the posting of this PEP,
meaning that everyone agreed it should be accepted.
The patch was committed to Subversion as revision 55430.
References
[1]
https://mail.python.org/pipermail/python-dev/2006-March/062942.html
[2]
https://mail.python.org/pipermail/python-dev/2006-March/062888.html
[3]
https://bugs.python.org/issue1671208
Copyright
This document has been placed in the public domain.
| Final | PEP 3129 – Class Decorators | Standards Track | This PEP proposes class decorators, an extension to the function
and method decorators introduced in PEP 318. |
PEP 3130 – Access to Current Module/Class/Function
Author:
Jim J. Jewett <jimjjewett at gmail.com>
Status:
Rejected
Type:
Standards Track
Created:
22-Apr-2007
Python-Version:
3.0
Post-History:
22-Apr-2007
Table of Contents
Rejection Notice
Abstract
Rationale for __module__
Rationale for __class__
Rationale for __function__
Backwards Compatibility
Implementation
Open Issues
References
Copyright
Rejection Notice
This PEP is rejected. It is not clear how it should be
implemented or what the precise semantics should be in edge cases,
and there aren’t enough important use cases given. Response has
been lukewarm at best.
Abstract
It is common to need a reference to the current module, class,
or function, but there is currently no entirely correct way to
do this. This PEP proposes adding the keywords __module__,
__class__, and __function__.
Rationale for __module__
Many modules export various functions, classes, and other objects,
but will perform additional activities (such as running unit
tests) when run as a script. The current idiom is to test whether
the module’s name has been set to a magic value.
if __name__ == "__main__": ...
More complicated introspection requires a module to (attempt to)
import itself. If importing the expected name actually produces
a different module, there is no good workaround.
# __import__ lets you use a variable, but... it gets more
# complicated if the module is in a package.
__import__(__name__)
# So just go to sys modules... and hope that the module wasn't
# hidden/removed (perhaps for security), that __name__ wasn't
# changed, and definitely hope that no other module with the
# same name is now available.
class X(object):
pass
import sys
mod = sys.modules[__name__]
mod = sys.modules[X.__module__]
Proposal: Add a __module__ keyword which refers to the module
currently being defined (executed). (But see open issues.)
# XXX sys.main is still changing as draft progresses. May
# really need sys.modules[sys.main]
if __module__ is sys.main: # assumes PEP (3122), Cannon
...
Rationale for __class__
Class methods are passed the current instance; from this they can
determine self.__class__ (or cls, for class methods).
Unfortunately, this reference is to the object’s actual class,
which may be a subclass of the defining class. The current
workaround is to repeat the name of the class, and assume that the
name will not be rebound.
class C(B):
def meth(self):
super(C, self).meth() # Hope C is never rebound.
class D(C):
def meth(self):
# ?!? issubclass(D,C), so it "works":
super(C, self).meth()
Proposal: Add a __class__ keyword which refers to the class
currently being defined (executed). (But see open issues.)
class C(B):
def meth(self):
super(__class__, self).meth()
Note that super calls may be further simplified by the “New Super”
PEP (Spealman). The __class__ (or __this_class__) attribute came
up in attempts to simplify the explanation and/or implementation
of that PEP, but was separated out as an independent decision.
Note that __class__ (or __this_class__) is not quite the same as
the __thisclass__ property on bound super objects. The existing
super.__thisclass__ property refers to the class from which the
Method Resolution Order search begins. In the above class D, it
would refer to (the current reference of name) C.
Rationale for __function__
Functions (including methods) often want access to themselves,
usually for a private storage location or true recursion. While
there are several workarounds, all have their drawbacks.
def counter(_total=[0]):
# _total shouldn't really appear in the
# signature at all; the list wrapping and
# [0] unwrapping obscure the code
_total[0] += 1
return _total[0]
@annotate(total=0)
def counter():
# Assume name counter is never rebound:
counter.total += 1
return counter.total
# class exists only to provide storage:
class _wrap(object):
__total = 0
def f(self):
self.__total += 1
return self.__total
# set module attribute to a bound method:
accum = _wrap().f
# This function calls "factorial", which should be itself --
# but the same programming styles that use heavy recursion
# often have a greater willingness to rebind function names.
def factorial(n):
return (n * factorial(n-1) if n else 1)
Proposal: Add a __function__ keyword which refers to the function
(or method) currently being defined (executed). (But see open
issues.)
@annotate(total=0)
def counter():
# Always refers to this function obj:
__function__.total += 1
return __function__.total
def factorial(n):
return (n * __function__(n-1) if n else 1)
Backwards Compatibility
While a user could be using these names already, double-underscore
names ( __anything__ ) are explicitly reserved to the interpreter.
It is therefore acceptable to introduce special meaning to these
names within a single feature release.
Implementation
Ideally, these names would be keywords treated specially by the
bytecode compiler.
Guido has suggested [1] using a cell variable filled in by the
metaclass.
Michele Simionato has provided a prototype using bytecode hacks [2].
This does not require any new bytecode operators; it just
modifies which specific sequence of existing operators gets
run.
Open Issues
Are __module__, __class__, and __function__ the right names? In
particular, should the names include the word “this”, either as
__this_module__, __this_class__, and __this_function__, (format
discussed on the python-3000 and python-ideas lists) or as
__thismodule__, __thisclass__, and __thisfunction__ (inspired
by, but conflicting with, current usage of super.__thisclass__).
Are all three keywords needed, or should this enhancement be
limited to a subset of the objects? Should methods be treated
separately from other functions?
References
[1]
Fixing super anyone? Guido van Rossum
https://mail.python.org/pipermail/python-3000/2007-April/006671.html
[2]
Descriptor/Decorator challenge, Michele Simionato
http://groups.google.com/group/comp.lang.python/browse_frm/thread/a6010c7494871bb1/62a2da68961caeb6?lnk=gst&q=simionato+challenge&rnum=1&hl=en#62a2da68961caeb6
Copyright
This document has been placed in the public domain.
| Rejected | PEP 3130 – Access to Current Module/Class/Function | Standards Track | It is common to need a reference to the current module, class,
or function, but there is currently no entirely correct way to
do this. This PEP proposes adding the keywords __module__,
__class__, and __function__. |
PEP 3131 – Supporting Non-ASCII Identifiers
Author:
Martin von Löwis <martin at v.loewis.de>
Status:
Final
Type:
Standards Track
Created:
01-May-2007
Python-Version:
3.0
Post-History:
Table of Contents
Abstract
Rationale
Common Objections
Specification of Language Changes
Policy Specification
Implementation
Open Issues
Discussion
References
Copyright
Abstract
This PEP suggests to support non-ASCII letters (such as accented characters,
Cyrillic, Greek, Kanji, etc.) in Python identifiers.
Rationale
Python code is written by many people in the world who are not
familiar with the English language, or even well-acquainted with the
Latin writing system. Such developers often desire to define classes
and functions with names in their native languages, rather than having
to come up with an (often incorrect) English translation of the
concept they want to name. By using identifiers in their native
language, the clarity and maintainability of the code among
speakers of that language improve.
For some languages, common transliteration systems exist (in particular, for the
Latin-based writing systems). For other languages, users have greater
difficulty using Latin to write their native words.
Common Objections
Some objections are often raised against proposals similar to this one.
People claim that they will not be able to use a library if to do so they have
to use characters they cannot type on their keyboards. However, it is the
choice of the designer of the library to decide on various constraints for using
the library: people may not be able to use the library because they cannot get
physical access to the source code (because it is not published), or because
licensing prohibits usage, or because the documentation is in a language they
cannot understand. A developer wishing to make a library widely available needs
to make a number of explicit choices (such as publication, licensing, language
of documentation, and language of identifiers). It should always be the choice
of the author to make these decisions - not the choice of the language
designers.
In particular, projects wishing to have wide usage might want to
establish a policy that all identifiers, comments, and documentation are written
in English (see the GNU coding style guide for an example of such a policy).
Restricting the language to ASCII-only identifiers does not force comments and
documentation to be in English, or the identifiers actually to be English words, so
an additional policy is necessary anyway.
Specification of Language Changes
The syntax of identifiers in Python will be based on the Unicode standard annex
UAX-31 [1], with elaboration and changes as defined below.
Within the ASCII range (U+0001..U+007F), the valid characters for identifiers
are the same as in Python 2.5. This specification only introduces additional
characters from outside the ASCII range. For other characters, the
classification uses the version of the Unicode Character Database as included in
the unicodedata module.
The identifier syntax is <XID_Start> <XID_Continue>*.
The exact specification of what characters have the XID_Start or
XID_Continue properties can be found in the DerivedCoreProperties
file of the Unicode data in use by Python (4.1 at the time this
PEP was written), see [6]. For reference, the construction rules
for these sets are given below. The XID_* properties are derived
from ID_Start/ID_Continue, which are derived themselves.
ID_Start is defined as all characters having one of the general
categories uppercase letters (Lu), lowercase letters (Ll), titlecase
letters (Lt), modifier letters (Lm), other letters (Lo), letter
numbers (Nl), the underscore, and characters carrying the
Other_ID_Start property. XID_Start then closes this set under
normalization, by removing all characters whose NFKC normalization
is not of the form ID_Start ID_Continue* anymore.
ID_Continue is defined as all characters in ID_Start, plus
nonspacing marks (Mn), spacing combining marks (Mc), decimal number
(Nd), connector punctuations (Pc), and characters carrying the
Other_ID_Continue property. Again, XID_Continue closes this set
under NFKC-normalization; it also adds U+00B7 to support Catalan.
All identifiers are converted into the normal form NFKC while parsing;
comparison of identifiers is based on NFKC.
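For illustration only (not part of the specification), the effect of
NFKC normalization on identifiers can be observed with the unicodedata
module:
import unicodedata

name = "\ufb01le"                            # "file" spelled with the U+FB01 "fi" ligature
print(name.isidentifier())                   # True: U+FB01 has the XID_Start property
print(unicodedata.normalize("NFKC", name))   # file -- the spelling used for comparison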
A non-normative HTML file listing all valid identifier characters for
Unicode 4.1 can be found at
http://www.dcl.hpi.uni-potsdam.de/home/loewis/table-3131.html.
Policy Specification
As an addition to the Python Coding style, the following policy is
prescribed: All identifiers in the Python standard library MUST use
ASCII-only identifiers, and SHOULD use English words wherever feasible
(in many cases, abbreviations and technical terms are used which
aren’t English). In addition, string literals and comments must also
be in ASCII. The only exceptions are (a) test cases testing the
non-ASCII features, and (b) names of authors. Authors whose names are
not based on the Latin alphabet MUST provide a Latin transliteration
of their names.
As an option, this specification can be applied to Python 2.x. In
that case, ASCII-only identifiers would continue to be represented as
byte string objects in namespace dictionaries; identifiers with
non-ASCII characters would be represented as Unicode strings.
Implementation
The following changes will need to be made to the parser:
If a non-ASCII character is found in the UTF-8 representation of
the source code, a forward scan is made to find the first ASCII
non-identifier character (e.g. a space or punctuation character)
The entire UTF-8 string is passed to a function to normalize the
string to NFKC, and then verify that it follows the identifier
syntax. No such callout is made for pure-ASCII identifiers, which
continue to be parsed the way they are today. The Unicode database
must start including the Other_ID_{Start|Continue} property.
If this specification is implemented for 2.x, reflective libraries
(such as pydoc) must be verified to continue to work when Unicode
strings appear in __dict__ slots as keys.
Open Issues
John Nagle suggested consideration of Unicode Technical Standard #39,
[2], which discusses security mechanisms for Unicode identifiers.
It’s not clear how that can precisely apply to this PEP; possible
consequences are
warn about characters listed as “restricted” in xidmodifications.txt
warn about identifiers using mixed scripts
somehow perform Confusable Detection
In the latter two approaches, it’s not clear how precisely the
algorithm should work. For mixed scripts, certain kinds of mixing
should probably be allowed - are these the “Common” and “Inherited”
scripts mentioned in section 5? For Confusable Detection, it seems one
needs two identifiers to compare them for confusion - is it possible
to somehow apply it to a single identifier only, and warn?
In follow-up discussion, it turns out that John Nagle actually
meant to suggest UTR#36, level “Highly Restrictive”, [3].
Several people suggested to allow and ignore formatting control
characters (general category Cf), as is done in Java, JavaScript, and
C#. It’s not clear whether this would improve things (it might
for RTL languages); if there is a need, these can be added
later.
Some people would like to see an option on selecting support
for this PEP at run-time; opinions vary on what precisely
that option should be, and what precisely its default value
should be. Guido van Rossum commented in [5] that a global
flag passed to the interpreter is not acceptable, as it would
apply to all modules.
Discussion
Ka-Ping Yee summarizes discussion and further objection
in [4] as such:
Should identifiers be allowed to contain any Unicode letter?
Drawbacks of allowing non-ASCII identifiers wholesale:
Python will lose the ability to make a reliable round trip to
a human-readable display on screen or on paper.
Python will become vulnerable to a new class of security exploits;
code and submitted patches will be much harder to inspect.
Humans will no longer be able to validate Python syntax.
Unicode is young; its problems are not yet well understood and
solved; tool support is weak.
Languages with non-ASCII identifiers use different character sets
and normalization schemes; PEP 3131’s choices are non-obvious.
The Unicode bidi algorithm yields an extremely confusing display
order for RTL text when digits or operators are nearby.
Should the default behaviour accept only ASCII identifiers, or
should it accept identifiers containing non-ASCII characters?
Arguments for ASCII only by default:
Non-ASCII identifiers by default makes common practice/assumptions
subtly/unknowingly wrong; rarely wrong is worse than obviously wrong.
Better to raise a warning than to fail silently when encountering
a probably unexpected situation.
All of current usage is ASCII-only; the vast majority of future
usage will be ASCII-only.
It is the pockets of Unicode adoption that are parochial, not the
ASCII advocates.
Python should audit for ASCII-only identifiers for the same
reasons that it audits for tab-space consistency
Incremental change is safer.
An ASCII-only default favors open-source development and sharing
of source code.
Existing projects won’t have to waste any brainpower worrying
about the implications of Unicode identifiers.
Should non-ASCII identifiers be optional?
Various voices in support of a flag (although there’s been debate
over which should be the default, no one seems to be saying that
there shouldn’t be an off switch)
Should the identifier character set be configurable?
Various voices proposing and supporting a selectable character set,
so that users can get all the benefits of using their own language
without the drawbacks of confusable/unfamiliar characters
Which identifier characters should be allowed?
What to do about bidi format control characters?
What about other ID_Continue characters? What about characters
that look like punctuation? What about other recommendations
in UTS #39? What about mixed-script identifiers?
Which normalization form should be used, NFC or NFKC?
Should source code be required to be in normalized form?
References
[1]
http://www.unicode.org/reports/tr31/
[2]
http://www.unicode.org/reports/tr39/
[3]
http://www.unicode.org/reports/tr36/
[4]
https://mail.python.org/pipermail/python-3000/2007-June/008161.html
[5]
https://mail.python.org/pipermail/python-3000/2007-May/007925.html
[6]
http://www.unicode.org/Public/4.1.0/ucd/DerivedCoreProperties.txt
Copyright
This document has been placed in the public domain.
| Final | PEP 3131 – Supporting Non-ASCII Identifiers | Standards Track | This PEP suggests to support non-ASCII letters (such as accented characters,
Cyrillic, Greek, Kanji, etc.) in Python identifiers. |
PEP 3132 – Extended Iterable Unpacking
Author:
Georg Brandl <georg at python.org>
Status:
Final
Type:
Standards Track
Created:
30-Apr-2007
Python-Version:
3.0
Post-History:
Table of Contents
Abstract
Rationale
Specification
Implementation
Grammar change
Changes to the Compiler
Changes to the Bytecode Interpreter
Acceptance
References
Copyright
Abstract
This PEP proposes a change to iterable unpacking syntax, allowing one to
specify a “catch-all” name which will be assigned a list of all items
not assigned to a “regular” name.
An example says more than a thousand words:
>>> a, *b, c = range(5)
>>> a
0
>>> c
4
>>> b
[1, 2, 3]
Rationale
Many algorithms require splitting a sequence in a “first, rest” pair.
With the new syntax,
first, rest = seq[0], seq[1:]
is replaced by the cleaner and probably more efficient:
first, *rest = seq
For more complex unpacking patterns, the new syntax looks even
cleaner, and the clumsy index handling is not necessary anymore.
Also, if the right-hand value is not a list, but an iterable, it
has to be converted to a list before being able to do slicing; to
avoid creating this temporary list, one has to resort to
it = iter(seq)
first = it.next()
rest = list(it)
Specification
A tuple (or list) on the left side of a simple assignment (unpacking
is not defined for augmented assignment) may contain at most one
expression prepended with a single asterisk (which is henceforth
called a “starred” expression, while the other expressions in the
list are called “mandatory”). This designates a subexpression that
will be assigned a list of all items from the iterable being unpacked
that are not assigned to any of the mandatory expressions, or an
empty list if there are no such items.
For example, if seq is a sliceable sequence, all the following
assignments are equivalent if seq has at least two elements:
a, b, c = seq[0], list(seq[1:-1]), seq[-1]
a, *b, c = seq
[a, *b, c] = seq
It is an error (as it is currently) if the iterable doesn’t contain
enough items to assign to all the mandatory expressions.
It is also an error to use the starred expression as a lone
assignment target, as in
*a = range(5)
This, however, is valid syntax:
*a, = range(5)
Note that this proposal also applies to tuples in implicit assignment
context, such as in a for statement:
for a, *b in [(1, 2, 3), (4, 5, 6, 7)]:
print(b)
would print out
[2, 3]
[5, 6, 7]
Starred expressions are only allowed as assignment targets, using them
anywhere else (except for star-args in function calls, of course) is an
error.
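A small usage illustration (the data is invented): splitting a command
line into the command and its arguments without explicit slicing:
line = "copy src.txt dst.txt --force"
cmd, *args = line.split()
print(cmd)    # copy
print(args)   # ['src.txt', 'dst.txt', '--force']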
Implementation
Grammar change
This feature requires a new grammar rule:
star_expr: ['*'] expr
In these two rules, expr is changed to star_expr:
comparison: star_expr (comp_op star_expr)*
exprlist: star_expr (',' star_expr)* [',']
Changes to the Compiler
A new ASDL expression type Starred is added which represents a
starred expression. Note that the starred expression element
introduced here is universal and could later be used for other
purposes in non-assignment context, such as the yield *iterable
proposal.
The compiler is changed to recognize all cases where a starred
expression is invalid and flag them with syntax errors.
A new bytecode instruction, UNPACK_EX, is added, whose argument
has the number of mandatory targets before the starred target in the
lower 8 bits and the number of mandatory targets after the starred
target in the upper 8 bits. For unpacking sequences without starred
expressions, the old UNPACK_ITERABLE opcode is kept.
Changes to the Bytecode Interpreter
The function unpack_iterable() in ceval.c is changed to handle
the extended unpacking, via an argcntafter parameter. In the
UNPACK_EX case, the function will do the following:
collect all items for mandatory targets before the starred one
collect all remaining items from the iterable in a list
pop items for mandatory targets after the starred one from the list
push the single items and the resized list on the stack
Shortcuts for unpacking iterables of known types, such as lists or
tuples, can be added.
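The following pure-Python sketch illustrates the intended runtime
behaviour of UNPACK_EX; the function and parameter names mirror the
description above but are purely illustrative:
def unpack_ex(iterable, argcnt, argcntafter):
    it = iter(iterable)
    # collect items for the mandatory targets before the starred one
    before = [next(it) for _ in range(argcnt)]
    # collect all remaining items from the iterable in a list
    rest = list(it)
    if len(rest) < argcntafter:
        raise ValueError("need more values to unpack")
    # pop items for the mandatory targets after the starred one
    after = rest[len(rest) - argcntafter:]
    del rest[len(rest) - argcntafter:]
    return before + [rest] + after

# a, *b, c = range(5) corresponds to unpack_ex(range(5), 1, 1) == [0, [1, 2, 3], 4]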
The current implementation can be found at the SourceForge Patch
tracker [SFPATCH]. It now includes a minimal test case.
Acceptance
After a short discussion on the python-3000 list [1], the PEP was
accepted by Guido in its current form. Possible changes discussed
were:
Only allow a starred expression as the last item in the exprlist.
This would simplify the unpacking code a bit and allow for the
starred expression to be assigned an iterator. This behavior was
rejected because it would be too surprising.
Try to give the starred target the same type as the source
iterable, for example, b in a, *b = 'hello' would be
assigned the string 'ello'. This may seem nice, but is
impossible to get right consistently with all iterables.
Make the starred target a tuple instead of a list. This would be
consistent with a function’s *args, but make further processing
of the result harder.
References
[SFPATCH]
https://bugs.python.org/issue1711529
[1]
https://mail.python.org/pipermail/python-3000/2007-May/007198.html
Copyright
This document has been placed in the public domain.
PEP 3133 – Introducing Roles
Author:
Collin Winter <collinwinter at google.com>
Status:
Rejected
Type:
Standards Track
Requires:
3115, 3129
Created:
01-May-2007
Python-Version:
3.0
Post-History:
13-May-2007
Table of Contents
Rejection Notice
Abstract
Rationale
A Note on Syntax
Performing Your Role
Static Role Assignment
Assigning Roles at Runtime
Asking Questions About Roles
Defining New Roles
Empty Roles
Composing Roles via Inheritance
Requiring Concrete Methods
Mechanism
Relationship to Abstract Base Classes
Open Issues
Allowing Instances to Perform Different Roles Than Their Class
Requiring Attributes
Roles of Roles
class_performs()
Prettier Dynamic Role Assignment
Syntax Support
Implementation
Acknowledgements
References
Copyright
Rejection Notice
This PEP has helped push PEP 3119 towards a saner, more minimalistic
approach. But given the latest version of PEP 3119 I much prefer
that. GvR.
Abstract
Python’s existing object model organizes objects according to their
implementation. It is often desirable – especially in
a duck typing-based language like Python – to organize objects by
the part they play in a larger system (their intent), rather than by
how they fulfill that part (their implementation). This PEP
introduces the concept of roles, a mechanism for organizing
objects according to their intent rather than their implementation.
Rationale
In the beginning were objects. They allowed programmers to marry
function and state, and to increase code reusability through concepts
like polymorphism and inheritance, and lo, it was good. There came
a time, however, when inheritance and polymorphism weren’t enough.
With the invention of both dogs and trees, we were no longer able to
be content with knowing merely, “Does it understand ‘bark’?”
We now needed to know what a given object thought that “bark” meant.
One solution, the one detailed here, is that of roles, a mechanism
orthogonal and complementary to the traditional class/instance system.
Whereas classes concern themselves with state and implementation, the
roles mechanism deals exclusively with the behaviours embodied in a
given class.
This system was originally called “traits” and implemented for Squeak
Smalltalk [4]. It has since been adapted for use in
Perl 6 [3] where it is called “roles”, and it is primarily
from there that the concept is now being interpreted for Python 3.
Python 3 will preserve the name “roles”.
In a nutshell: roles tell you what an object does, classes tell you
how an object does it.
In this PEP, I will outline a system for Python 3 that will make it
possible to easily determine whether a given object’s understanding
of “bark” is tree-like or dog-like. (There might also be more
serious examples.)
A Note on Syntax
The syntax proposals in this PEP are tentative and should be
considered to be strawmen. The necessary bits that this PEP depends
on – namely PEP 3115’s class definition syntax and PEP 3129’s class
decorators – are still being formalized and may change. Function
names will, of course, be subject to lengthy bikeshedding debates.
Performing Your Role
Static Role Assignment
Let’s start out by defining Tree and Dog classes
class Tree(Vegetable):
def bark(self):
return self.is_rough()
class Dog(Animal):
def bark(self):
return self.goes_ruff()
While both implement a bark() method with the same signature,
they do wildly different things. We need some way of differentiating
what we’re expecting. Relying on inheritance and a simple
isinstance() test will limit code reuse and/or force any dog-like
classes to inherit from Dog, whether or not that makes sense.
Let’s see if roles can help.
@perform_role(Doglike)
class Dog(Animal):
...
@perform_role(Treelike)
class Tree(Vegetable):
...
@perform_role(SitThere)
class Rock(Mineral):
...
We use class decorators from PEP 3129 to associate a particular role
or roles with a class. Client code can now verify that an incoming
object performs the Doglike role, allowing it to handle Wolf,
LaughingHyena and Aibo [1] instances, too.
Roles can be composed via normal inheritance:
@perform_role(Guard, MummysLittleDarling)
class GermanShepherd(Dog):
def guard(self, the_precious):
while True:
if intruder_near(the_precious):
self.growl()
def get_petted(self):
self.swallow_pride()
Here, GermanShepherd instances perform three roles: Guard and
MummysLittleDarling are applied directly, whereas Doglike
is inherited from Dog.
Assigning Roles at Runtime
Roles can be assigned at runtime, too, by unpacking the syntactic
sugar provided by decorators.
Say we import a Robot class from another module, and since we
know that Robot already implements our Guard interface,
we’d like it to play nicely with guard-related code, too.
>>> perform(Guard)(Robot)
This takes effect immediately and impacts all instances of Robot.
Asking Questions About Roles
Just because we’ve told our robot army that they’re guards, we’d
like to check in on them occasionally and make sure they’re still at
their task.
>>> performs(our_robot, Guard)
True
What about that one robot over there?
>>> performs(that_robot_over_there, Guard)
True
The performs() function is used to ask if a given object
fulfills a given role. It cannot be used, however, to ask a
class if its instances fulfill a role:
>>> performs(Robot, Guard)
False
This is because the Robot class is not interchangeable
with a Robot instance.
Defining New Roles
Empty Roles
Roles are defined like a normal class, but use the Role
metaclass.
class Doglike(metaclass=Role):
...
Metaclasses are used to indicate that Doglike is a Role in
the same way 5 is an int and tuple is a type.
Composing Roles via Inheritance
Roles may inherit from other roles; this has the effect of composing
them. Here, instances of Dog will perform both the
Doglike and FourLegs roles.
class FourLegs(metaclass=Role):
pass
class Doglike(FourLegs, Carnivor):
pass
@perform_role(Doglike)
class Dog(Mammal):
pass
Requiring Concrete Methods
So far we’ve only defined empty roles – not very useful things.
Let’s now require that all classes that claim to fulfill the
Doglike role define a bark() method:
class Doglike(FourLegs):
def bark(self):
pass
No decorators are required to flag the method as “abstract”, and the
method will never be called, meaning whatever code it contains (if any)
is irrelevant. Roles provide only abstract methods; concrete
default implementations are left to other, better-suited mechanisms
like mixins.
Once you have defined a role, and a class has claimed to perform that
role, it is essential that that claim be verified. Here, the
programmer has misspelled one of the methods required by the role.
@perform_role(FourLegs)
class Horse(Mammal):
def run_like_teh_wind(self):
...
This will cause the role system to raise an exception, complaining
that you’re missing a run_like_the_wind() method. The role
system carries out these checks as soon as a class is flagged as
performing a given role.
Concrete methods are required to match exactly the signature demanded
by the role. Here, we’ve attempted to fulfill our role by defining a
concrete version of bark(), but we’ve missed the mark a bit.
@perform_role(Doglike)
class Coyote(Mammal):
def bark(self, target=moon):
pass
This method’s signature doesn’t match exactly with what the
Doglike role was expecting, so the role system will throw a bit
of a tantrum.
Mechanism
The following are strawman proposals for how roles might be expressed
in Python. The examples here are phrased in a way that the roles
mechanism may be implemented without changing the Python interpreter.
(Examples adapted from an article on Perl 6 roles by Curtis Poe
[2].)
Static class role assignment:
@perform_role(Thieving)
class Elf(Character):
...
perform_role() accepts multiple arguments, such that this is
also legal:
@perform_role(Thieving, Spying, Archer)
class Elf(Character):
...
The Elf class now performs all three of the Thieving, Spying,
and Archer roles.
Querying instances:
if performs(my_elf, Thieving):
...
The second argument to performs() may also be anything with a
__contains__() method, meaning the following is legal:
if performs(my_elf, set([Thieving, Spying, BoyScout])):
...
Like isinstance(), the object needs only to perform a single
role out of the set in order for the expression to be true.
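As a rough sketch of how such a stand-alone implementation could look,
the following minimal code implements the three names used above. The
__roles__ attribute and the other internals are assumptions of this
sketch, not part of the proposal; signature checking and role
inheritance are omitted for brevity:
class Role(type):
    """Metaclass marking a class as a role."""

def perform_role(*roles):
    def decorator(cls):
        # record the roles on the class; a real implementation would also
        # verify that the required methods exist with matching signatures
        cls.__roles__ = frozenset(roles) | getattr(cls, '__roles__', frozenset())
        return cls
    return decorator

def performs(obj, what):
    performed = getattr(type(obj), '__roles__', frozenset())
    if isinstance(what, Role):                   # a single role
        return what in performed
    # otherwise assume a container of roles; one match is enough
    return any(role in what for role in performed)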
Relationship to Abstract Base Classes
Early drafts of this PEP [5] envisioned roles as competing
with the abstract base classes proposed in PEP 3119. After further
discussion and deliberation, a compromise and a delegation of
responsibilities and use-cases have been worked out as follows:
Roles provide a way of indicating an object’s semantics and abstract
capabilities. A role may define abstract methods, but only as a
way of delineating an interface through which a particular set of
semantics are accessed. An Ordering role might require that
some set of ordering operators be defined.
class Ordering(metaclass=Role):
def __ge__(self, other):
pass
def __le__(self, other):
pass
def __ne__(self, other):
pass
# ...and so on
In this way, we’re able to indicate an object’s role or function
within a larger system without constraining or concerning ourselves
with a particular implementation.
Abstract base classes, by contrast, are a way of reusing common,
discrete units of implementation. For example, one might define an
OrderingMixin that implements several ordering operators in
terms of other operators.
class OrderingMixin:
def __ge__(self, other):
return self > other or self == other
def __le__(self, other):
return self < other or self == other
def __ne__(self, other):
return not self == other
# ...and so on
Using this abstract base class - more properly, a concrete
mixin - allows a programmer to define a limited set of operators
and let the mixin in effect “derive” the others.
By combining these two orthogonal systems, we’re able to both
a) provide functionality, and b) alert consumer systems to the
presence and availability of this functionality. For example,
since the OrderingMixin class above satisfies the interface
and semantics expressed in the Ordering role, we say the mixin
performs the role:
@perform_role(Ordering)
class OrderingMixin:
def __ge__(self, other):
return self > other or self == other
def __le__(self, other):
return self < other or self == other
def __ne__(self, other):
return not self == other
# ...and so on
Now, any class that uses the mixin will automatically – that is,
without further programmer effort – be tagged as performing the
Ordering role.
The separation of concerns into two distinct, orthogonal systems
is desirable because it allows us to use each one separately.
Take, for example, a third-party package providing a
RecursiveHash role that indicates a container takes its
contents into account when determining its hash value. Since
Python’s built-in tuple and frozenset classes follow this
semantic, the RecursiveHash role can be applied to them.
>>> perform_role(RecursiveHash)(tuple)
>>> perform_role(RecursiveHash)(frozenset)
Any code that consumes RecursiveHash objects will now be
able to consume tuples and frozensets.
Open Issues
Allowing Instances to Perform Different Roles Than Their Class
Perl 6 allows instances to perform different roles than their class.
These changes are local to the single instance and do not affect
other instances of the class. For example:
my_elf = Elf()
my_elf.goes_on_quest()
my_elf.becomes_evil()
now_performs(my_elf, Thieving) # Only this one elf is a thief
my_elf.steals(["purses", "candy", "kisses"])
In Perl 6, this is done by creating an anonymous class that
inherits from the instance’s original parent and performs the
additional role(s). This is possible in Python 3, though whether it
is desirable is still another matter.
Inclusion of this feature would, of course, make it much easier to
express the works of Charles Dickens in Python:
>>> from literature import role, BildungsRoman
>>> from dickens import Urchin, Gentleman
>>>
>>> with BildungsRoman() as OliverTwist:
... mr_brownlow = Gentleman()
... oliver, artful_dodger = Urchin(), Urchin()
... now_performs(artful_dodger, [role.Thief, role.Scoundrel])
...
... oliver.has_adventures_with(artful_dodger)
... mr_brownlow.adopt_orphan(oliver)
... now_performs(oliver, role.RichWard)
Requiring Attributes
Neal Norwitz has requested the ability to make assertions about
the presence of attributes using the same mechanism used to require
methods. Since roles take effect at class definition-time, and
since the vast majority of attributes are defined at runtime by a
class’s __init__() method, there doesn’t seem to be a good way
to check for attributes at the same time as methods.
It may still be desirable to include non-enforced attributes in the
role definition, if only for documentation purposes.
Roles of Roles
Under the proposed semantics, it is possible for roles to
have roles of their own.
@perform_role(Y)
class X(metaclass=Role):
...
While this is possible, it is meaningless, since roles
are generally not instantiated. There has been some
off-line discussion about giving meaning to this expression, but so
far no good ideas have emerged.
class_performs()
It is currently not possible to ask a class if its instances perform
a given role. It may be desirable to provide an analogue to
performs() such that
>>> isinstance(my_dwarf, Dwarf)
True
>>> performs(my_dwarf, Surly)
True
>>> performs(Dwarf, Surly)
False
>>> class_performs(Dwarf, Surly)
True
Prettier Dynamic Role Assignment
An early draft of this PEP included a separate mechanism for
dynamically assigning a role to a class. This was spelled
>>> now_perform(Dwarf, GoldMiner)
This same functionality already exists by unpacking the syntactic
sugar provided by decorators:
>>> perform_role(GoldMiner)(Dwarf)
At issue is whether dynamic role assignment is sufficiently important
to warrant a dedicated spelling.
Syntax Support
Though the phrasings laid out in this PEP are designed so that the
roles system could be shipped as a stand-alone package, it may be
desirable to add special syntax for defining, assigning and
querying roles. One example might be a role keyword, which would
translate
class MyRole(metaclass=Role):
...
into
role MyRole:
...
Assigning a role could take advantage of the class definition
arguments proposed in PEP 3115:
class MyClass(performs=MyRole):
...
Implementation
A reference implementation is forthcoming.
Acknowledgements
Thanks to Jeffery Yasskin, Talin and Guido van Rossum for several
hours of in-person discussion to iron out the differences, overlap
and finer points of roles and abstract base classes.
References
[1]
http://en.wikipedia.org/wiki/AIBO
[2]
http://www.perlmonks.org/?node_id=384858
[3]
http://dev.perl.org/perl6/doc/design/syn/S12.html
[4]
http://www.iam.unibe.ch/~scg/Archive/Papers/Scha03aTraits.pdf
[5]
https://mail.python.org/pipermail/python-3000/2007-April/007026.html
Copyright
This document has been placed in the public domain.
PEP 3134 – Exception Chaining and Embedded Tracebacks
Author:
Ka-Ping Yee
Status:
Final
Type:
Standards Track
Created:
12-May-2005
Python-Version:
3.0
Post-History:
Table of Contents
Numbering Note
Abstract
Motivation
History
Rationale
Implicit Exception Chaining
Explicit Exception Chaining
Traceback Attribute
Enhanced Reporting
C API
Compatibility
Open Issue: Extra Information
Open Issue: Suppressing Context
Open Issue: Limiting Exception Types
Open Issue: yield
Open Issue: Garbage Collection
Possible Future Compatible Changes
Possible Future Incompatible Changes
Implementation
Acknowledgements
References
Copyright
Numbering Note
This PEP started its life as PEP 344. Since it is now targeted for Python
3000, it has been moved into the 3xxx space.
Abstract
This PEP proposes three standard attributes on exception instances: the
__context__ attribute for implicitly chained exceptions, the __cause__
attribute for explicitly chained exceptions, and the __traceback__
attribute for the traceback. A new raise ... from statement sets the
__cause__ attribute.
Motivation
During the handling of one exception (exception A), it is possible that another
exception (exception B) may occur. In today’s Python (version 2.4), if this
happens, exception B is propagated outward and exception A is lost. In order
to debug the problem, it is useful to know about both exceptions. The
__context__ attribute retains this information automatically.
Sometimes it can be useful for an exception handler to intentionally re-raise
an exception, either to provide extra information or to translate an exception
to another type. The __cause__ attribute provides an explicit way to
record the direct cause of an exception.
In today’s Python implementation, exceptions are composed of three parts: the
type, the value, and the traceback. The sys module exposes the current
exception in three parallel variables, exc_type, exc_value, and
exc_traceback; the sys.exc_info() function returns a tuple of these
three parts; and the raise statement has a three-argument form accepting
these three parts. Manipulating exceptions often requires passing these three
things in parallel, which can be tedious and error-prone. Additionally, the
except statement can only provide access to the value, not the traceback.
Adding the __traceback__ attribute to exception values makes all the
exception information accessible from a single place.
History
Raymond Hettinger [1] raised the issue of masked exceptions on Python-Dev in
January 2003 and proposed a PyErr_FormatAppend() function that C modules
could use to augment the currently active exception with more information.
Brett Cannon [2] brought up chained exceptions again in June 2003, prompting
a long discussion.
Greg Ewing [3] identified the case of an exception occurring in a finally
block during unwinding triggered by an original exception, as distinct from
the case of an exception occurring in an except block that is handling the
original exception.
Greg Ewing [4] and Guido van Rossum [5], and probably others, have
previously mentioned adding a traceback attribute to Exception instances.
This is noted in PEP 3000.
This PEP was motivated by yet another recent Python-Dev reposting of the same
ideas [6] [7].
Rationale
The Python-Dev discussions revealed interest in exception chaining for two
quite different purposes. To handle the unexpected raising of a secondary
exception, the exception must be retained implicitly. To support intentional
translation of an exception, there must be a way to chain exceptions
explicitly. This PEP addresses both.
Several attribute names for chained exceptions have been suggested on
Python-Dev [2], including cause, antecedent, reason, original,
chain, chainedexc, exc_chain, excprev, previous, and
precursor. For an explicitly chained exception, this PEP suggests
__cause__ because of its specific meaning. For an implicitly chained
exception, this PEP proposes the name __context__ because the intended
meaning is more specific than temporal precedence but less specific than
causation: an exception occurs in the context of handling another exception.
This PEP suggests names with leading and trailing double-underscores for these
three attributes because they are set by the Python VM. Only in very special
cases should they be set by normal assignment.
This PEP handles exceptions that occur during except blocks and finally
blocks in the same way. Reading the traceback makes it clear where the
exceptions occurred, so additional mechanisms for distinguishing the two cases
would only add unnecessary complexity.
This PEP proposes that the outermost exception object (the one exposed for
matching by except clauses) be the most recently raised exception for
compatibility with current behaviour.
This PEP proposes that tracebacks display the outermost exception last, because
this would be consistent with the chronological order of tracebacks (from
oldest to most recent frame) and because the actual thrown exception is easier
to find on the last line.
To keep things simpler, the C API calls for setting an exception will not
automatically set the exception’s __context__. Guido van Rossum has
expressed concerns with making such changes [8].
As for other languages, Java and Ruby both discard the original exception when
another exception occurs in a catch/rescue or finally/ensure
clause. Perl 5 lacks built-in structured exception handling. For Perl 6, RFC
number 88 [9] proposes an exception mechanism that implicitly retains chained
exceptions in an array named @@. In that RFC, the most recently raised
exception is exposed for matching, as in this PEP; also, arbitrary expressions
(possibly involving @@) can be evaluated for exception matching.
Exceptions in C# contain a read-only InnerException property that may point
to another exception. Its documentation [10] says that “When an exception X
is thrown as a direct result of a previous exception Y, the InnerException
property of X should contain a reference to Y.” This property is not set by
the VM automatically; rather, all exception constructors take an optional
innerException argument to set it explicitly. The __cause__ attribute
fulfills the same purpose as InnerException, but this PEP proposes a new
form of raise rather than extending the constructors of all exceptions. C#
also provides a GetBaseException method that jumps directly to the end of
the InnerException chain; this PEP proposes no analog.
The reason all three of these attributes are presented together in one proposal
is that the __traceback__ attribute provides convenient access to the
traceback on chained exceptions.
Implicit Exception Chaining
Here is an example to illustrate the __context__ attribute:
def compute(a, b):
try:
a/b
except Exception, exc:
log(exc)
def log(exc):
file = open('logfile.txt') # oops, forgot the 'w'
print >>file, exc
file.close()
Calling compute(0, 0) causes a ZeroDivisionError. The compute()
function catches this exception and calls log(exc), but the log()
function also raises an exception when it tries to write to a file that wasn’t
opened for writing.
In today’s Python, the caller of compute() gets thrown an IOError. The
ZeroDivisionError is lost. With the proposed change, the instance of
IOError has an additional __context__ attribute that retains the
ZeroDivisionError.
The following more elaborate example demonstrates the handling of a mixture of
finally and except clauses:
def main(filename):
file = open(filename) # oops, forgot the 'w'
try:
try:
compute()
except Exception, exc:
log(file, exc)
finally:
file.clos() # oops, misspelled 'close'
def compute():
1/0
def log(file, exc):
try:
print >>file, exc # oops, file is not writable
except:
display(exc)
def display(exc):
print ex # oops, misspelled 'exc'
Calling main() with the name of an existing file will trigger four
exceptions. The ultimate result will be an AttributeError due to the
misspelling of clos, whose __context__ points to a NameError due
to the misspelling of ex, whose __context__ points to an IOError
due to the file being read-only, whose __context__ points to a
ZeroDivisionError, whose __context__ attribute is None.
The proposed semantics are as follows:
Each thread has an exception context initially set to None.
Whenever an exception is raised, if the exception instance does not already
have a __context__ attribute, the interpreter sets it equal to the
thread’s exception context.
Immediately after an exception is raised, the thread’s exception context is
set to the exception.
Whenever the interpreter exits an except block by reaching the end or
executing a return, yield, continue, or break statement, the
thread’s exception context is set to None.
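A short illustration of these semantics, written in Python 3 syntax
(the examples elsewhere in this PEP use Python 2 syntax):
>>> try:
...     1/0
... except ZeroDivisionError:
...     try:
...         {}['missing']
...     except KeyError as exc:
...         saved = exc
...
>>> type(saved.__context__).__name__
'ZeroDivisionError'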
Explicit Exception Chaining
The __cause__ attribute on exception objects is always initialized to
None. It is set by a new form of the raise statement:
raise EXCEPTION from CAUSE
which is equivalent to:
exc = EXCEPTION
exc.__cause__ = CAUSE
raise exc
In the following example, a database provides implementations for a few
different kinds of storage, with file storage as one kind. The database
designer wants errors to propagate as DatabaseError objects so that the
client doesn’t have to be aware of the storage-specific details, but doesn’t
want to lose the underlying error information.
class DatabaseError(Exception):
pass
class FileDatabase(Database):
def __init__(self, filename):
try:
self.file = open(filename)
except IOError, exc:
raise DatabaseError('failed to open') from exc
If the call to open() raises an exception, the problem will be reported as
a DatabaseError, with a __cause__ attribute that reveals the
IOError as the original cause.
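An illustrative interactive session showing the resulting attribute
(Python 3 syntax; the open_db() helper is a reduced stand-in for the
FileDatabase example above):
>>> class DatabaseError(Exception):
...     pass
...
>>> def open_db(filename):
...     try:
...         return open(filename)
...     except IOError as exc:
...         raise DatabaseError('failed to open') from exc
...
>>> try:
...     open_db('/no/such/file')
... except DatabaseError as exc:
...     print(isinstance(exc.__cause__, IOError))
...
True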
Traceback Attribute
The following example illustrates the __traceback__ attribute.
def do_logged(file, work):
try:
work()
except Exception, exc:
write_exception(file, exc)
raise exc
from traceback import format_tb
def write_exception(file, exc):
...
type = exc.__class__
message = str(exc)
lines = format_tb(exc.__traceback__)
file.write(... type ... message ... lines ...)
...
In today’s Python, the do_logged() function would have to extract the
traceback from sys.exc_traceback or sys.exc_info() [2] and pass both
the value and the traceback to write_exception(). With the proposed
change, write_exception() simply gets one argument and obtains the
exception using the __traceback__ attribute.
The proposed semantics are as follows:
Whenever an exception is caught, if the exception instance does not already
have a __traceback__ attribute, the interpreter sets it to the newly
caught traceback.
Enhanced Reporting
The default exception handler will be modified to report chained exceptions.
The chain of exceptions is traversed by following the __cause__ and
__context__ attributes, with __cause__ taking priority. In keeping
with the chronological order of tracebacks, the most recently raised exception
is displayed last; that is, the display begins with the description of the
innermost exception and backs up the chain to the outermost exception. The
tracebacks are formatted as usual, with one of the lines:
The above exception was the direct cause of the following exception:
or
During handling of the above exception, another exception occurred:
between tracebacks, depending whether they are linked by __cause__ or
__context__ respectively. Here is a sketch of the procedure:
def print_chain(exc):
if exc.__cause__:
print_chain(exc.__cause__)
print '\nThe above exception was the direct cause...'
elif exc.__context__:
print_chain(exc.__context__)
print '\nDuring handling of the above exception, ...'
print_exc(exc)
In the traceback module, the format_exception, print_exception,
print_exc, and print_last functions will be updated to accept an
optional chain argument, True by default. When this argument is
True, these functions will format or display the entire chain of exceptions
as just described. When it is False, these functions will format or
display only the outermost exception.
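For example, under the updated traceback module (a usage sketch):
import traceback

def risky():
    {}['missing']                      # raises KeyError

try:
    risky()
except Exception:
    # format and display only the outermost exception,
    # ignoring any __cause__ or __context__ chain
    traceback.print_exc(chain=False)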
The cgitb module should also be updated to display the entire chain of
exceptions.
C API
The PyErr_Set* calls for setting exceptions will not set the
__context__ attribute on exceptions. PyErr_NormalizeException will
always set the traceback attribute to its tb argument and the
__context__ and __cause__ attributes to None.
A new API function, PyErr_SetContext(context), will help C programmers
provide chained exception information. This function will first normalize the
current exception so it is an instance, then set its __context__ attribute.
A similar API function, PyErr_SetCause(cause), will set the __cause__
attribute.
Compatibility
Chained exceptions expose the type of the most recent exception, so they will
still match the same except clauses as they do now.
The proposed changes should not break any code unless it sets or uses
attributes named __context__, __cause__, or __traceback__ on
exception instances. As of 2005-05-12, the Python standard library contains no
mention of such attributes.
Open Issue: Extra Information
Walter Dörwald [11] expressed a desire to attach extra information to an
exception during its upward propagation without changing its type. This could
be a useful feature, but it is not addressed by this PEP. It could conceivably
be addressed by a separate PEP establishing conventions for other informational
attributes on exceptions.
Open Issue: Suppressing Context
As written, this PEP makes it impossible to suppress __context__, since
setting exc.__context__ to None in an except or finally clause
will only result in it being set again when exc is raised.
Open Issue: Limiting Exception Types
To improve encapsulation, library implementors may want to wrap all
implementation-level exceptions with an application-level exception. One could
try to wrap exceptions by writing this:
try:
... implementation may raise an exception ...
except:
import sys
raise ApplicationError from sys.exc_value
or this:
try:
... implementation may raise an exception ...
except Exception, exc:
raise ApplicationError from exc
but both are somewhat flawed. It would be nice to be able to name the current
exception in a catch-all except clause, but that isn’t addressed here.
Such a feature would allow something like this:
try:
... implementation may raise an exception ...
except *, exc:
raise ApplicationError from exc
Open Issue: yield
The exception context is lost when a yield statement is executed; resuming
the frame after the yield does not restore the context. Addressing this
problem is out of the scope of this PEP; it is not a new problem, as
demonstrated by the following example:
>>> def gen():
... try:
... 1/0
... except:
... yield 3
... raise
...
>>> g = gen()
>>> g.next()
3
>>> g.next()
TypeError: exceptions must be classes, instances, or strings
(deprecated), not NoneType
Open Issue: Garbage Collection
The strongest objection to this proposal has been that it creates cycles
between exceptions and stack frames [12]. Collection of cyclic garbage (and
therefore resource release) can be greatly delayed.
>>> try:
>>> 1/0
>>> except Exception, err:
>>> pass
will introduce a cycle from err -> traceback -> stack frame -> err, keeping all
locals in the same scope alive until the next GC happens.
Today, these locals would go out of scope. There is lots of code which assumes
that “local” resources – particularly open files – will be closed quickly.
If closure has to wait for the next GC, a program (which runs fine today) may
run out of file handles.
Making the __traceback__ attribute a weak reference would avoid the
problems with cyclic garbage. Unfortunately, it would make saving the
Exception for later (as unittest does) more awkward, and it would not
allow as much cleanup of the sys module.
A possible alternate solution, suggested by Adam Olsen, would be to instead
turn the reference from the stack frame to the err variable into a weak
reference when the variable goes out of scope [13].
Possible Future Compatible Changes
These changes are consistent with the appearance of exceptions as a single
object rather than a triple at the interpreter level.
If PEP 340 or PEP 343 is accepted, replace the three (type, value,
traceback) arguments to __exit__ with a single exception argument.
Deprecate sys.exc_type, sys.exc_value, sys.exc_traceback, and
sys.exc_info() in favour of a single member, sys.exception.
Deprecate sys.last_type, sys.last_value, and sys.last_traceback
in favour of a single member, sys.last_exception.
Deprecate the three-argument form of the raise statement in favour of the
one-argument form.
Upgrade cgitb.html() to accept a single value as its first argument as an
alternative to a (type, value, traceback) tuple.
Possible Future Incompatible Changes
These changes might be worth considering for Python 3000.
Remove sys.exc_type, sys.exc_value, sys.exc_traceback, and
sys.exc_info().
Remove sys.last_type, sys.last_value, and sys.last_traceback.
Replace the three-argument sys.excepthook with a one-argument API, and
change the cgitb module to match.
Remove the three-argument form of the raise statement.
Upgrade traceback.print_exception to accept an exception argument
instead of the type, value, and traceback arguments.
Implementation
The __traceback__ and __cause__ attributes and the new raise syntax
were implemented in revision 57783 [14].
Acknowledgements
Brett Cannon, Greg Ewing, Guido van Rossum, Jeremy Hylton, Phillip J. Eby,
Raymond Hettinger, Walter Dörwald, and others.
References
[1]
Raymond Hettinger, “Idea for avoiding exception masking”
https://mail.python.org/pipermail/python-dev/2003-January/032492.html
[2] (1, 2, 3)
Brett Cannon explains chained exceptions
https://mail.python.org/pipermail/python-dev/2003-June/036063.html
[3]
Greg Ewing points out masking caused by exceptions during finally
https://mail.python.org/pipermail/python-dev/2003-June/036290.html
[4]
Greg Ewing suggests storing the traceback in the exception object
https://mail.python.org/pipermail/python-dev/2003-June/036092.html
[5]
Guido van Rossum mentions exceptions having a traceback attribute
https://mail.python.org/pipermail/python-dev/2005-April/053060.html
[6]
Ka-Ping Yee, “Tidier Exceptions”
https://mail.python.org/pipermail/python-dev/2005-May/053671.html
[7]
Ka-Ping Yee, “Chained Exceptions”
https://mail.python.org/pipermail/python-dev/2005-May/053672.html
[8]
Guido van Rossum discusses automatic chaining in PyErr_Set*
https://mail.python.org/pipermail/python-dev/2003-June/036180.html
[9]
Tony Olensky, “Omnibus Structured Exception/Error Handling Mechanism”
http://dev.perl.org/perl6/rfc/88.html
[10]
MSDN .NET Framework Library, “Exception.InnerException Property”
http://msdn.microsoft.com/library/en-us/cpref/html/frlrfsystemexceptionclassinnerexceptiontopic.asp
[11]
Walter Dörwald suggests wrapping exceptions to add details
https://mail.python.org/pipermail/python-dev/2003-June/036148.html
[12]
Guido van Rossum restates the objection to cyclic trash
https://mail.python.org/pipermail/python-3000/2007-January/005322.html
[13]
Adam Olsen suggests using a weakref from stack frame to exception
https://mail.python.org/pipermail/python-3000/2007-January/005363.html
[14]
Patch to implement the bulk of the PEP
http://svn.python.org/view/python/branches/py3k/Include/?rev=57783&view=rev
Copyright
This document has been placed in the public domain.
PEP 3135 – New Super
Author:
Calvin Spealman <ironfroggy at gmail.com>,
Tim Delaney <timothy.c.delaney at gmail.com>,
Lie Ryan <lie.1296 at gmail.com>
Status:
Final
Type:
Standards Track
Created:
28-Apr-2007
Python-Version:
3.0
Post-History:
28-Apr-2007,
29-Apr-2007,
29-Apr-2007,
14-May-2007,
12-Mar-2009
Table of Contents
Numbering Note
Abstract
Rationale
Specification
Closed Issues
Determining the class object to use
Should super actually become a keyword?
super used with __call__ attributes
Alternative Proposals
No Changes
Dynamic attribute on super type
self.__super__.foo(*args)
super(self, *args) or __super__(self, *args)
super.foo(self, *args)
super(*p, **kw)
History
References
Copyright
Numbering Note
This PEP started its life as PEP 367. Since it is now targeted
for Python 3000, it has been moved into the 3xxx space.
Abstract
This PEP proposes syntactic sugar for use of the super type to automatically
construct instances of the super type binding to the class that a method was
defined in, and the instance (or class object for classmethods) that the method
is currently acting upon.
The premise of the new super usage suggested is as follows:
super().foo(1, 2)
to replace the old:
super(Foo, self).foo(1, 2)
Rationale
The current usage of super requires an explicit passing of both the class and
instance it must operate from, requiring a breaking of the DRY (Don’t Repeat
Yourself) rule. This hinders any change in class name, and is often considered
a wart by many.
Specification
Within the specification section, some special terminology will be used to
distinguish similar and closely related concepts. “super class” will refer to
the actual builtin class named “super”. A “super instance” is simply an
instance of the super class, which is associated with another class and
possibly with an instance of that class.
The new super semantics are only available in Python 3.0.
Replacing the old usage of super, calls to the next class in the MRO (method
resolution order) can be made without explicitly passing the class object
(although doing so will still be supported). Every function
will have a cell named __class__ that contains the class object that the
function is defined in.
The new syntax:
super()
is equivalent to:
super(__class__, <firstarg>)
where __class__ is the class that the method was defined in, and
<firstarg> is the first parameter of the method (normally self
for instance methods, and cls for class methods). For functions
defined outside a class body, __class__ is not defined, and will
result in a runtime SystemError.
While super is not a reserved word, the parser recognizes the use
of super in a method definition and only passes in the
__class__ cell when this is found. Thus, calling a global alias
of super without arguments will not necessarily work.
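For example (a small sketch of the proposed usage; the class names are
illustrative):
class Base:
    def greet(self):
        return 'Base'

class Derived(Base):
    def greet(self):
        # equivalent to super(Derived, self).greet() under this proposal;
        # the class is taken from the implicit __class__ cell
        return super().greet() + ' + Derived'

assert Derived().greet() == 'Base + Derived'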
Closed Issues
Determining the class object to use
The class object is taken from a cell named __class__.
Should super actually become a keyword?
No. It is not necessary for super to become a keyword.
super used with __call__ attributes
It was considered that instantiating super instances the classic way might
be a problem, because calling such an instance would look up the __call__
attribute and thus try to perform an automatic super lookup to the next
class in the MRO.
However, this was found to be false, because calling an object only looks up
the __call__ method directly on the object’s type. The following example shows
this in action.
class A(object):
def __call__(self):
return '__call__'
def __getattribute__(self, attr):
if attr == '__call__':
return lambda: '__getattribute__'
a = A()
assert a() == '__call__'
assert a.__call__() == '__getattribute__'
In any case, this issue goes away entirely because classic calls to
super(<class>, <instance>) are still supported with the same meaning.
Alternative Proposals
No Changes
Although it's always attractive to just keep things how they are, people have
sought a change in the usage of super calls for some time, and for good
reasons, all mentioned previously.
Decoupling from the class name (which might not even be bound to the
right class anymore!)
Simpler looking, cleaner super calls would be better
Dynamic attribute on super type
The proposal adds a dynamic attribute lookup to the super type, which will
automatically determine the proper class and instance parameters. Each super
attribute lookup identifies these parameters and performs the super lookup on
the instance, as the current super implementation does with the explicit
invocation of a super instance upon a class and instance.
This proposal relies on sys._getframe(), which is not appropriate for anything
except a prototype implementation.
self.__super__.foo(*args)
The __super__ attribute is mentioned in this PEP in several places, and could
be a candidate for the complete solution, actually using it explicitly instead
of any super usage directly. However, double-underscore names are usually an
internal detail, and are generally kept out of everyday code.
super(self, *args) or __super__(self, *args)
This solution only solves the problem of the type indication, does not handle
differently named super methods, and is explicit about the name of the
instance. It is less flexible without being able to enacted on other method
names, in cases where that is needed. One use case this fails is where a
base-class has a factory classmethod and a subclass has two factory
classmethods,both of which needing to properly make super calls to the one
in the base-class.
super.foo(self, *args)
This variation actually eliminates the problems with locating the proper
instance, and if any of the alternatives were pushed into the spotlight, I
would want it to be this one.
super(*p, **kw)
There has been the proposal that directly calling super(*p, **kw) would
be equivalent to calling the method on the super object with the same name
as the method currently being executed i.e. the following two methods would be
equivalent:
def f(self, *p, **kw):
super.f(*p, **kw)
def f(self, *p, **kw):
super(*p, **kw)
There is strong sentiment for and against this, but implementation and style
concerns are obvious. Guido has suggested that this should be excluded from
this PEP on the principle of KISS (Keep It Simple Stupid).
History
29-Apr-2007
Changed title from “Super As A Keyword” to “New Super”
Updated much of the language and added a terminology section
for clarification in confusing places.
Added reference implementation and history sections.
06-May-2007
Updated by Tim Delaney to reflect discussions on the python-3000
and python-dev mailing lists.
12-Mar-2009
Updated to reflect the current state of implementation.
References
[1] Fixing super anyone?
(https://mail.python.org/pipermail/python-3000/2007-April/006667.html)
[2] PEP 3130: Access to Module/Class/Function Currently Being Defined (this)
(https://mail.python.org/pipermail/python-ideas/2007-April/000542.html)
Copyright
This document has been placed in the public domain.
PEP 3136 – Labeled break and continue
Author:
Matt Chisholm <matt-python at theory.org>
Status:
Rejected
Type:
Standards Track
Created:
30-Jun-2007
Python-Version:
3.1
Post-History:
Table of Contents
Rejection Notice
Abstract
Introduction
Motivation
Other languages
What this PEP is not
Specification
Proposal A - Explicit labels
Proposal B - Numeric break & continue
Proposal C - The reduplicative method
Proposal D - Explicit iterators
Proposal E - Explicit iterators and iterator methods
Implementation
Footnotes
Resources
Copyright
Rejection Notice
This PEP is rejected.
See https://mail.python.org/pipermail/python-3000/2007-July/008663.html.
Abstract
This PEP proposes support for labels in Python’s break and
continue statements. It is inspired by labeled break and
continue in other languages, and the author’s own infrequent but
persistent need for such a feature.
Introduction
The break statement allows the programmer to terminate a loop
early, and the continue statement allows the programmer to move to
the next iteration of a loop early. In Python currently, break
and continue can apply only to the innermost enclosing loop.
Adding support for labels to the break and continue statements
is a logical extension to the existing behavior of the break and
continue statements. Labeled break and continue can
improve the readability and flexibility of complex code which uses
nested loops.
For brevity’s sake, the examples and discussion in this PEP usually
refers to the break statement. However, all of the examples and
motivations apply equally to labeled continue.
Motivation
If the programmer wishes to move to the next iteration of an outer
enclosing loop, or terminate multiple loops at once, he or she has a
few less-than-elegant options.
Here’s one common way of imitating labeled break in Python (For
this and future examples, ... denotes an arbitrary number of
intervening lines of code):
for a in a_list:
time_to_break_out_of_a = False
...
for b in b_list:
...
if condition_one(a, b):
break
...
if condition_two(a, b):
time_to_break_out_of_a = True
break
...
if time_to_break_out_of_a:
break
...
This requires five lines and an extra variable,
time_to_break_out_of_a, to keep track of when to break out of the
outer (a) loop. And those five lines are spread across many lines of
code, making the control flow difficult to understand.
This technique is also error-prone. A programmer modifying this code
might inadvertently put new code after the end of the inner (b) loop
but before the test for time_to_break_out_of_a, instead of after
the test. This means that code which should have been skipped by
breaking out of the outer loop gets executed incorrectly.
This could also be written with an exception. The programmer would
declare a special exception, wrap the inner loop in a try, and catch
the exception and break when it is seen:
class BreakOutOfALoop(Exception): pass
for a in a_list:
...
try:
for b in b_list:
...
if condition_one(a, b):
break
...
if condition_two(a, b):
raise BreakOutOfALoop
...
except BreakOutOfALoop:
break
...
Again, though, this requires five lines and a new, single-purpose
exception class (instead of a new variable), and spreads basic control
flow out over many lines. And it breaks out of the inner loop with
break and out of the other loop with an exception, which is
inelegant. [1]
This next strategy might be the most elegant solution, assuming
condition_two() is inexpensive to compute:
for a in a_list:
...
for b in b_list:
...
if condition_one(a, b):
break
...
if condition_two(a, b):
break
...
if condition_two(a, b):
break
...
Breaking twice is still inelegant. This implementation also relies on
the fact that the inner (b) loop bleeds b into the outer for loop,
which (although explicitly supported) is both surprising to novices,
and in my opinion counter-intuitive and poor practice.
The programmer must also still remember to put in both breaks on
condition two and not insert code before the second break. A single
conceptual action, breaking out of both loops on condition_two(),
requires four lines of code at two indentation levels, possibly
separated by many intervening lines at the end of the inner (b) loop.
Other languages
Now, put aside whatever dislike you may have for other programming
languages, and consider the syntax of labeled break and
continue. In Perl:
ALOOP: foreach $a (@a_array){
...
BLOOP: foreach $b (@b_array){
...
if (condition_one($a,$b)){
last BLOOP; # same as plain old last;
}
...
if (condition_two($a,$b)){
last ALOOP;
}
...
}
...
}
(Notes: Perl uses last instead of break. The BLOOP labels
could be omitted; last and continue apply to the innermost
loop by default.)
PHP uses a number denoting the number of loops to break out of, rather
than a label:
foreach ($a_array as $a){
....
foreach ($b_array as $b){
....
if (condition_one($a, $b)){
break 1; # same as plain old break
}
....
if (condition_two($a, $b)){
break 2;
}
....
}
...
}
C/C++, Java, and Ruby all have similar constructions.
The control flow regarding when to break out of the outer (a) loop is
fully encapsulated in the break statement which gets executed when
the break condition is satisfied. The depth of the break statement
does not matter. Control flow is not spread out. No extra variables,
exceptions, or re-checking or storing of control conditions is
required. There is no danger that code will get inadvertently
inserted after the end of the inner (b) loop and before the break
condition is re-checked inside the outer (a) loop. These are the
benefits that labeled break and continue would bring to
Python.
What this PEP is not
This PEP is not a proposal to add GOTO to Python. GOTO allows a
programmer to jump to an arbitrary block or line of code, and
generally makes control flow more difficult to follow. Although
break and continue (with or without support for labels) can be
considered a type of GOTO, it is much more restricted. Another Python
construct, yield, could also be considered a form of GOTO – an
even less restrictive one. The goal of this PEP is to propose an
extension to the existing control flow tools break and
continue, to make control flow easier to understand, not more
difficult.
Labeled break and continue cannot transfer control to another
function or method. They cannot even transfer control to an arbitrary
line of code in the current scope. Currently, they can only affect
the behavior of a loop, and are quite different and much more
restricted than GOTO. This extension allows them to affect any
enclosing loop in the current name-space, but it does not change their
behavior to that of GOTO.
Specification
Under all of these proposals, break and continue by themselves
will continue to behave as they currently do, applying to the
innermost loop by default.
Proposal A - Explicit labels
The for and while loop syntax will be followed by an optional as
or label (contextual) keyword [2] and then an identifier,
which may be used to identify the loop out of which to break (or which
should be continued).
The break (and continue) statements will be followed by an
optional identifier that refers to the loop out of which to break (or
which should be continued). Here is an example using the as
keyword:
for a in a_list as a_loop:
...
for b in b_list as b_loop:
...
if condition_one(a, b):
break b_loop # same as plain old break
...
if condition_two(a, b):
break a_loop
...
...
Or, with label instead of as:
for a in a_list label a_loop:
...
for b in b_list label b_loop:
...
if condition_one(a, b):
break b_loop # same as plain old break
...
if condition_two(a, b):
break a_loop
...
...
This has all the benefits outlined above. It requires modifications
to the language syntax: the syntax of the break and continue
statements and of the for and while statements. It requires either a
new conditional keyword label or an extension to the conditional
keyword as. [3] It is unlikely to require any changes to
existing Python programs. Passing an identifier not defined in the
local scope to break or continue would raise a NameError.
Proposal B - Numeric break & continue
Rather than altering the syntax of for and while loops,
break and continue would take a numeric argument denoting the
enclosing loop which is being controlled, similar to PHP.
It seems more Pythonic to me for break and continue to refer
to loops indexing from zero, as opposed to indexing from one as PHP
does.
for a in a_list:
...
for b in b_list:
...
if condition_one(a,b):
break 0 # same as plain old break
...
if condition_two(a,b):
break 1
...
...
Passing a number that was too large, or less than zero, or non-integer
to break or continue would (probably) raise an IndexError.
This proposal would not require any changes to existing Python
programs.
Proposal C - The reduplicative method
The syntax of break and continue would be altered to allow
multiple break and continue statements on the same line. Thus,
break break would break out of the first and second enclosing
loops.
for a in a_list:
...
for b in b_list:
...
if condition_one(a,b):
break # plain old break
...
if condition_two(a,b):
break break
...
...
This would also allow the programmer to break out of the inner loop
and continue the next outermost simply by writing break continue,
[4] and so on. I’m not sure what exception would be
raised if the programmer used more break or continue
statements than existing loops (perhaps a SyntaxError?).
I expect this proposal to get rejected because it will be judged too
difficult to understand.
This proposal would not require any changes to existing Python
programs.
Proposal D - Explicit iterators
Rather than embellishing for and while loop syntax with labels, the
programmer wishing to use labeled breaks would be required to create
the iterator explicitly and assign it to an identifier if he or she
wanted to break out of or continue that loop from within a
deeper loop.
a_iter = iter(a_list)
for a in a_iter:
...
b_iter = iter(b_list)
for b in b_iter:
...
if condition_one(a,b):
break b_iter # same as plain old break
...
if condition_two(a,b):
break a_iter
...
...
Passing a non-iterator object to break or continue would raise
a TypeError; and a nonexistent identifier would raise a NameError.
This proposal requires only one extra line to create a labeled loop,
and no extra lines to break out of a containing loop, and no changes
to existing Python programs.
Proposal E - Explicit iterators and iterator methods
This is a variant of Proposal D. Iterators would need to be created
explicitly if anything other than the most basic use of break and
continue was required. Instead of modifying the syntax of
break and continue, .break() and .continue() methods
could be added to the Iterator type.
a_iter = iter(a_list)
for a in a_iter:
...
b_iter = iter(b_list)
for b in b_iter:
...
if condition_one(a,b):
b_iter.break() # same as plain old break
...
if condition_two(a,b):
a_iter.break()
...
...
I expect that this proposal will get rejected on the grounds of sheer
ugliness. However, it requires no changes to the language syntax
whatsoever, nor does it require any changes to existing Python
programs.
Implementation
I have never looked at the Python language implementation itself, so I
have no idea how difficult this would be to implement. If this PEP is
accepted, but no one is available to write the feature, I will try to
implement it myself.
Footnotes
[1]
Breaking some loops with exceptions is inelegant because
it’s a violation of There’s Only One Way To Do It.
[2]
Or really any new contextual keyword that the community
likes: as, label, labeled, loop, name, named,
walrus, whatever.
[3]
The use of as in a similar context has been proposed here,
http://sourceforge.net/tracker/index.php?func=detail&aid=1714448&group_id=5470&atid=355470
but to my knowledge this idea has not been written up as a PEP.
[4]
To continue the Nth outer loop, you would write
break N-1 times and then continue. Only one continue would be
allowed, and only at the end of a sequence of breaks. continue
break or continue continue makes no sense.
Resources
This issue has come up before, although it has never been resolved, to
my knowledge.
labeled breaks, on comp.lang.python, in the context of
do...while loops
break LABEL vs. exceptions + PROPOSAL, on python-list, as
compared to using Exceptions for flow control
Named code blocks on python-list, a suggestion motivated by the
desire for labeled break / continue
mod_python bug fix An example of someone setting a flag inside
an inner loop that triggers a continue in the containing loop, to
work around the absence of labeled break and continue
Copyright
This document has been placed in the public domain.
PEP 3138 – String representation in Python 3000
Author:
Atsuo Ishimoto <ishimoto at gembook.org>
Status:
Final
Type:
Standards Track
Created:
05-May-2008
Python-Version:
3.0
Post-History:
05-May-2008, 05-Jun-2008
Table of Contents
Abstract
Motivation
Specification
Rationale
Alternate Solutions
Backwards Compatibility
Rejected Proposals
Implementation
References
Copyright
Abstract
This PEP proposes a new string representation form for Python 3000.
In Python prior to Python 3000, the repr() built-in function converted
arbitrary objects to printable ASCII strings for debugging and
logging. For Python 3000, a wider range of characters, based on the
Unicode standard, should be considered ‘printable’.
Motivation
The current repr() converts 8-bit strings to ASCII using the following
algorithm.
Convert CR, LF, TAB and ‘\’ to ‘\r’, ‘\n’, ‘\t’, ‘\\’.
Convert other non-printable characters (0x00-0x1f, 0x7f) and
non-ASCII characters (>= 0x80) to ‘\xXX’.
Backslash-escape quote characters (apostrophe, ‘) and add the quote
character at the beginning and the end.
For Unicode strings, the following additional conversions are done.
Convert leading surrogate pair characters without trailing character
(0xd800-0xdbff, but not followed by 0xdc00-0xdfff) to ‘\uXXXX’.
Convert 16-bit characters (>= 0x100) to ‘\uXXXX’.
Convert 21-bit characters (>= 0x10000) and surrogate pair characters
to ‘\U00xxxxxx’.
This algorithm converts any string to printable ASCII, and repr() is
used as a handy and safe way to print strings for debugging or for
logging. Although all non-ASCII characters are escaped, this does not
matter when most of the string’s characters are ASCII. But for other
languages, such as Japanese where most characters in a string are not
ASCII, this is very inconvenient.
We can use print(aJapaneseString) to get a readable string, but we
don’t have a similar workaround for printing strings from collections
such as lists or tuples. print(listOfJapaneseStrings) uses repr()
to build the string to be printed, so the resulting strings are always
hex-escaped. Or when open(japaneseFilename) raises an exception,
the error message is something like IOError: [Errno 2] No such file
or directory: '\u65e5\u672c\u8a9e', which isn’t helpful.
Python 3000 has a lot of nice features for non-Latin users such as
non-ASCII identifiers, so it would be helpful if Python could also
progress in a similar way for printable output.
Some users might be concerned that such output will mess up their
console if they print binary data like images. But this is unlikely
to happen in practice because bytes and strings are different types in
Python 3000, so printing an image to the console won’t mess it up.
This issue was once discussed by Hye-Shik Chang [1], but was rejected.
Specification
Add a new function to the Python C API int Py_UNICODE_ISPRINTABLE
(Py_UNICODE ch). This function returns 0 if repr() should escape
the Unicode character ch; otherwise it returns 1. Characters
that should be escaped are defined in the Unicode character database
as:
Cc (Other, Control)
Cf (Other, Format)
Cs (Other, Surrogate)
Co (Other, Private Use)
Cn (Other, Not Assigned)
Zl (Separator, Line), refers to LINE SEPARATOR (’\u2028’).
Zp (Separator, Paragraph), refers to PARAGRAPH SEPARATOR
(’\u2029’).
Zs (Separator, Space) other than ASCII space (’\x20’). Characters
in this category should be escaped to avoid ambiguity.
The algorithm to build repr() strings should be changed to:
Convert CR, LF, TAB and ‘\’ to ‘\r’, ‘\n’, ‘\t’, ‘\\’.
Convert non-printable ASCII characters (0x00-0x1f, 0x7f) to
‘\xXX’.
Convert leading surrogate pair characters without trailing
character (0xd800-0xdbff, but not followed by 0xdc00-0xdfff) to
‘\uXXXX’.
Convert non-printable characters (Py_UNICODE_ISPRINTABLE() returns
0) to ‘\xXX’, ‘\uXXXX’ or ‘\U00xxxxxx’.
Backslash-escape quote characters (apostrophe, 0x27) and add a
quote character at the beginning and the end.
Set the Unicode error-handler for sys.stderr to ‘backslashreplace’
by default.
Add a new function to the Python C API PyObject *PyObject_ASCII
(PyObject *o). This function converts any python object to a
string using PyObject_Repr() and then hex-escapes all non-ASCII
characters. PyObject_ASCII() generates the same string as
PyObject_Repr() in Python 2.
Add a new built-in function, ascii(). This function converts
any python object to a string using repr() and then hex-escapes all
non-ASCII characters. ascii() generates the same string as
repr() in Python 2.
Add a '%a' string format operator. '%a' converts any python
object to a string using repr() and then hex-escapes all non-ASCII
characters. The '%a' format operator generates the same string
as '%r' in Python 2. Also, add '!a' conversion flags to the
string.format() method and add '%A' operator to the
PyUnicode_FromFormat(). They convert any object to an ASCII string
as the '%a' string format operator does.
Add an isprintable() method to the string type.
str.isprintable() returns False if repr() would escape any
character in the string; otherwise returns True. The
isprintable() method calls the Py_UNICODE_ISPRINTABLE()
function internally.
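As an illustrative sketch of the intended behaviour (interactive output shown
as expected under this specification; the exact quote characters in the output
may vary):
>>> s = '\u65e5\u672c\u8a9e'   # 'Japanese', written in Japanese
>>> s.isprintable()
True
>>> repr(s)                    # printable non-ASCII characters are kept as-is
"'日本語'"
>>> ascii(s)                   # hex-escaped, like repr() in Python 2
"'\\u65e5\\u672c\\u8a9e'"
>>> '%a' % s                   # the same escaping via the new format operator
"'\\u65e5\\u672c\\u8a9e'"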
Rationale
The repr() in Python 3000 should be Unicode, not ASCII based, just
like Python 3000 strings. Also, conversion should not be affected by
the locale setting, because the locale is not necessarily the same as
the output device’s locale. For example, it is common for a daemon
process to be invoked in an ASCII locale but to write UTF-8 to its log
files. Also, web applications might want to report the error
information in more readable form based on the HTML page’s encoding.
Characters not supported by the user’s console could be hex-escaped on
printing, by the Unicode encoder’s error-handler. If the
error-handler of the output file is ‘backslashreplace’, such
characters are hex-escaped without raising UnicodeEncodeError. For
example, if the default encoding is ASCII, print('Hello ¢') will
print ‘Hello \xa2’. If the encoding is ISO-8859-1, ‘Hello ¢’ will be
printed.
The default error-handler for sys.stdout is ‘strict’. Other
applications reading the output might not understand hex-escaped
characters, so unsupported characters should be trapped when writing.
If unsupported characters must be escaped, the error-handler should be
changed explicitly. Unlike sys.stdout, sys.stderr doesn’t raise
UnicodeEncodeError by default, because the default error-handler is
‘backslashreplace’. So printing error messages containing non-ASCII
characters to sys.stderr will not raise an exception. Also,
information about uncaught exceptions (exception object, traceback) is
printed by the interpreter without raising exceptions.
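For example, a program that prefers escaping over an exception on sys.stdout
could rebind the stream itself. This is only a minimal sketch (using
io.TextIOWrapper; real code would likely keep a reference to the original
stream):
import io
import sys

sys.stdout = io.TextIOWrapper(sys.stdout.buffer,
                              encoding=sys.stdout.encoding,
                              errors='backslashreplace',
                              line_buffering=True)
print('Hello \xa2')   # prints 'Hello \xa2' on an ASCII-only terminal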
Alternate Solutions
To help debugging in non-Latin languages without changing repr(),
other suggestions were made.
Supply a tool to print lists or dicts. Strings to be printed for
debugging are contained not only in lists or dicts, but also in many
other types of object. File objects
contain a file name in Unicode, exception objects contain a message
in Unicode, etc. These strings should be printed in readable form
when repr()ed. It is unlikely to be possible to implement a tool to
print all possible object types.
Use sys.displayhook and sys.excepthook. For interactive sessions, we can write hooks to restore hex-escaped
characters to the original characters. But these hooks are called
only when printing the result of evaluating an expression entered in
an interactive Python session, and don’t work for the print()
function, for non-interactive sessions or for logging.debug("%r",
...), etc.
Subclass sys.stdout and sys.stderr. It is difficult to implement a subclass to restore hex-escaped
characters since there isn’t enough information left by the time
it’s a string to undo the escaping correctly in all cases. For
example, print("\\"+"u0041") should be printed as ‘\u0041’, not
‘A’, but by the time the text reaches the file object there is no way
to tell the two cases apart.
Make the encoding used by unicode_repr() adjustable, and make the
existing repr() the default. With adjustable repr(), the result of using repr() is unpredictable
and would make it impossible to write correct code involving repr().
And if current repr() is the default, then the old convention
remains intact and users may expect ASCII strings as the result of
repr(). Third party applications or libraries could be confused
when a custom repr() function is used.
Backwards Compatibility
Changing repr() may break some existing code, especially testing code.
Five of Python’s regression tests fail with this modification. If you
need repr() strings without non-ASCII characters, as in Python 2, you can
use the following function.
def repr_ascii(obj):
return str(repr(obj).encode("ASCII", "backslashreplace"), "ASCII")
For logging or for debugging, the following code can raise
UnicodeEncodeError.
log = open("logfile", "w")
log.write(repr(data)) # UnicodeEncodeError will be raised
# if data contains unsupported characters.
To avoid exceptions being raised, you can explicitly specify the
error-handler.
log = open("logfile", "w", errors="backslashreplace")
log.write(repr(data)) # Unsupported characters will be escaped.
For a console that uses a Unicode-based encoding, for example,
en_US.utf8 or de_DE.utf8, the backslashreplace trick doesn’t work, since
no printable character is escaped at all. This can cause confusion
between similar-looking characters in Western, Greek and Cyrillic
languages. These languages use similar (but different) alphabets
(descended from a common ancestor) and contain letters that look
similar but have different character codes. For example, it is hard
to distinguish Latin ‘a’, ‘e’ and ‘o’ from Cyrillic ‘а’, ‘е’ and ‘о’.
(The visual representation, of course, very much depends on the fonts
used but usually these letters are almost indistinguishable.) To
avoid the problem, the user can adjust the terminal encoding to get a
result suitable for their environment.
Rejected Proposals
Add encoding and errors arguments to the builtin print() function,
with defaults of sys.getfilesystemencoding() and ‘backslashreplace’. Complicated to implement, and in general, this is not seen as a good
idea. [2]
Use character names to escape characters, instead of hex character
codes. For example, repr('\u03b1') can be converted to
"\N{GREEK SMALL LETTER ALPHA}". Using character names can be very verbose compared to hex escapes.
e.g., repr("\ufbf9") is converted to "\N{ARABIC LIGATURE
UIGHUR KIRGHIZ YEH WITH HAMZA ABOVE WITH ALEF MAKSURA ISOLATED
FORM}".
Default error-handler of sys.stdout should be ‘backslashreplace’. Stuff written to stdout might be consumed by another program that
might misinterpret the \ escapes. For interactive sessions, it is
possible to make the ‘backslashreplace’ error-handler the default,
but this may add confusion of the kind “it works in interactive mode
but not when redirecting to a file”.
Implementation
The author wrote a patch in http://bugs.python.org/issue2630; this was
committed to the Python 3.0 branch in revision 64138 on 06-11-2008.
References
[1]
Multibyte string on string::string_print
(http://bugs.python.org/issue479898)
[2]
[Python-3000] Displaying strings containing unicode escapes
(https://mail.python.org/pipermail/python-3000/2008-April/013366.html)
Copyright
This document has been placed in the public domain.
| Final | PEP 3138 – String representation in Python 3000 | Standards Track | This PEP proposes a new string representation form for Python 3000.
In Python prior to Python 3000, the repr() built-in function converted
arbitrary objects to printable ASCII strings for debugging and
logging. For Python 3000, a wider range of characters, based on the
Unicode standard, should be considered ‘printable’. |
PEP 3139 – Cleaning out sys and the “interpreter” module
Author:
Benjamin Peterson <benjamin at python.org>
Status:
Rejected
Type:
Standards Track
Created:
04-Apr-2008
Python-Version:
3.0
Table of Contents
Rejection Notice
Abstract
Rationale
Specification
Transition Plan
Open Issues
What should move?
dont_write_bytecode
Move some to imp?
Naming
References
Copyright
Rejection Notice
Guido’s -0.5 put an end to this PEP. See
https://mail.python.org/pipermail/python-3000/2008-April/012977.html.
Abstract
This PEP proposes a new low-level module for CPython-specific interpreter
functions in order to clean out the sys module and separate general Python
functionality from implementation details.
Rationale
The sys module currently contains functions and data that can be put into two
major groups:
Data and functions that are available in all Python implementations and deal
with the general running of a Python virtual machine.
argv
byteorder
path, path_hooks, meta_path, path_importer_cache, and modules
copyright, hexversion, version, and version_info
displayhook, __displayhook__
excepthook, __excepthook__, exc_info, and exc_clear
exec_prefix and prefix
executable
exit
flags, py3kwarning, dont_write_bytecode, and warn_options
getfilesystemencoding
get/setprofile
get/settrace, call_tracing
getwindowsversion
maxint and maxunicode
platform
ps1 and ps2
stdin, stderr, stdout, __stdin__, __stderr__, __stdout__
tracebacklimit
Data and functions that affect the CPython interpreter.
get/setrecursionlimit
get/setcheckinterval
_getframe and _current_frame
getrefcount
get/setdlopenflags
settscdumps
api_version
winver
dllhandle
float_info
_compact_freelists
_clear_type_cache
subversion
builtin_module_names
callstats
intern
The second collection of items has been steadily increasing over the years,
causing clutter in sys. Guido has even said he doesn’t recognize some of the things
in it [1]!
Moving these items off to another module would send a clear message to
other Python implementations about what functions need and need not be
implemented.
It has also been proposed that the contents of types module be distributed
across the standard library [2]; the interpreter module would
provide an excellent resting place for internal types like frames and code
objects.
Specification
A new builtin module named “interpreter” (see Naming) will be added.
The second list of items above will be split into the stdlib as follows:
The interpreter module
get/setrecursionlimit
get/setcheckinterval
_getframe and _current_frame
get/setdlopenflags
settscdumps
api_version
winver
dllhandle
float_info
_clear_type_cache
subversion
builtin_module_names
callstats
intern
The gc module:
getrefcount
_compact_freelists
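Had this PEP been accepted, typical code would have looked roughly like the
following (a hypothetical sketch; the module name itself was still under
discussion, see Naming):
import interpreter   # hypothetical module proposed by this PEP
import gc

interpreter.setrecursionlimit(5000)   # instead of sys.setrecursionlimit()
frame = interpreter._getframe()       # instead of sys._getframe()
refs = gc.getrefcount(object())       # instead of sys.getrefcount()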
Transition Plan
Once implemented in 3.x, the interpreter module will be back-ported to 2.6.
Py3k warnings will be added to the sys functions it replaces.
Open Issues
What should move?
dont_write_bytecode
Some believe that the writing of bytecode is an implementation detail and should
be moved [3]. The counterargument is that all current, complete
Python implementations do write some sort of bytecode, so it is valuable to be
able to disable it. Also, if it is moved, some wish to put it in the imp
module.
Move some to imp?
It was noted that dont_write_bytecode or maybe builtin_module_names might fit
nicely in the imp module.
Naming
The author proposes the name “interpreter” for the new module. “pyvm” has also
been suggested [4]. The name “cpython” was well liked
[5].
References
[1]
http://bugs.python.org/issue1522
[2]
https://mail.python.org/pipermail/stdlib-sig/2008-April/000172.html
[3]
https://mail.python.org/pipermail/stdlib-sig/2008-April/000217.html
[4]
https://mail.python.org/pipermail/python-3000/2007-November/011351.html
[5]
https://mail.python.org/pipermail/stdlib-sig/2008-April/000223.html
Copyright
This document has been placed in the public domain.
| Rejected | PEP 3139 – Cleaning out sys and the “interpreter” module | Standards Track | This PEP proposes a new low-level module for CPython-specific interpreter
functions in order to clean out the sys module and separate general Python
functionality from implementation details. |
PEP 3141 – A Type Hierarchy for Numbers
Author:
Jeffrey Yasskin <jyasskin at google.com>
Status:
Final
Type:
Standards Track
Created:
23-Apr-2007
Python-Version:
3.0
Post-History:
25-Apr-2007, 16-May-2007, 02-Aug-2007
Table of Contents
Abstract
Rationale
Specification
Numeric Classes
Changes to operations and __magic__ methods
Notes for type implementors
Adding More Numeric ABCs
Implementing the arithmetic operations
Rejected Alternatives
The Decimal Type
References
Acknowledgements
Copyright
Abstract
This proposal defines a hierarchy of Abstract Base Classes (ABCs) (PEP
3119) to represent number-like classes. It proposes a hierarchy of
Number :> Complex :> Real :> Rational :> Integral where A :> B
means “A is a supertype of B”. The hierarchy is inspired by Scheme’s
numeric tower [3].
Rationale
Functions that take numbers as arguments should be able to determine
the properties of those numbers, and if and when overloading based on
types is added to the language, should be overloadable based on the
types of the arguments. For example, slicing requires its arguments to
be Integrals, and the functions in the math module require
their arguments to be Real.
Specification
This PEP specifies a set of Abstract Base Classes, and suggests a
general strategy for implementing some of the methods. It uses
terminology from PEP 3119, but the hierarchy is intended to be
meaningful for any systematic method of defining sets of classes.
The type checks in the standard library should use these classes
instead of the concrete built-ins.
Numeric Classes
We begin with a Number class to make it easy for people to be fuzzy
about what kind of number they expect. This class only helps with
overloading; it doesn’t provide any operations.
class Number(metaclass=ABCMeta): pass
Most implementations of complex numbers will be hashable, but if you
need to rely on that, you’ll have to check it explicitly: mutable
numbers are supported by this hierarchy.
class Complex(Number):
"""Complex defines the operations that work on the builtin complex type.
In short, those are: conversion to complex, bool(), .real, .imag,
+, -, *, /, **, abs(), .conjugate(), ==, and !=.
If it is given heterogeneous arguments, and doesn't have special
knowledge about them, it should fall back to the builtin complex
type as described below.
"""
@abstractmethod
def __complex__(self):
"""Return a builtin complex instance."""
def __bool__(self):
"""True if self != 0."""
return self != 0
@abstractproperty
def real(self):
"""Retrieve the real component of this number.
This should subclass Real.
"""
raise NotImplementedError
@abstractproperty
def imag(self):
"""Retrieve the imaginary component of this number.
This should subclass Real.
"""
raise NotImplementedError
@abstractmethod
def __add__(self, other):
raise NotImplementedError
@abstractmethod
def __radd__(self, other):
raise NotImplementedError
@abstractmethod
def __neg__(self):
raise NotImplementedError
def __pos__(self):
"""Coerces self to whatever class defines the method."""
raise NotImplementedError
def __sub__(self, other):
return self + -other
def __rsub__(self, other):
return -self + other
@abstractmethod
def __mul__(self, other):
raise NotImplementedError
@abstractmethod
def __rmul__(self, other):
raise NotImplementedError
@abstractmethod
def __div__(self, other):
"""a/b; should promote to float or complex when necessary."""
raise NotImplementedError
@abstractmethod
def __rdiv__(self, other):
raise NotImplementedError
@abstractmethod
def __pow__(self, exponent):
"""a**b; should promote to float or complex when necessary."""
raise NotImplementedError
@abstractmethod
def __rpow__(self, base):
raise NotImplementedError
@abstractmethod
def __abs__(self):
"""Returns the Real distance from 0."""
raise NotImplementedError
@abstractmethod
def conjugate(self):
"""(x+y*i).conjugate() returns (x-y*i)."""
raise NotImplementedError
@abstractmethod
def __eq__(self, other):
raise NotImplementedError
# __ne__ is inherited from object and negates whatever __eq__ does.
The Real ABC indicates that the value is on the real line, and
supports the operations of the float builtin. Real numbers are
totally ordered except for NaNs (which this PEP basically ignores).
class Real(Complex):
"""To Complex, Real adds the operations that work on real numbers.
In short, those are: conversion to float, trunc(), math.floor(),
math.ceil(), round(), divmod(), //, %, <, <=, >, and >=.
Real also provides defaults for some of the derived operations.
"""
# XXX What to do about the __int__ implementation that's
# currently present on float? Get rid of it?
@abstractmethod
def __float__(self):
"""Any Real can be converted to a native float object."""
raise NotImplementedError
@abstractmethod
def __trunc__(self):
"""Truncates self to an Integral.
Returns an Integral i such that:
* i>=0 iff self>0;
* abs(i) <= abs(self);
* for any Integral j satisfying the first two conditions,
abs(i) >= abs(j) [i.e. i has "maximal" abs among those].
i.e. "truncate towards 0".
"""
raise NotImplementedError
@abstractmethod
def __floor__(self):
"""Finds the greatest Integral <= self."""
raise NotImplementedError
@abstractmethod
def __ceil__(self):
"""Finds the least Integral >= self."""
raise NotImplementedError
@abstractmethod
def __round__(self, ndigits:Integral=None):
"""Rounds self to ndigits decimal places, defaulting to 0.
If ndigits is omitted or None, returns an Integral,
otherwise returns a Real, preferably of the same type as
self. Types may choose which direction to round half. For
example, float rounds half toward even.
"""
raise NotImplementedError
def __divmod__(self, other):
"""The pair (self // other, self % other).
Sometimes this can be computed faster than the pair of
operations.
"""
return (self // other, self % other)
def __rdivmod__(self, other):
"""The pair (self // other, self % other).
Sometimes this can be computed faster than the pair of
operations.
"""
return (other // self, other % self)
@abstractmethod
def __floordiv__(self, other):
"""The floor() of self/other. Integral."""
raise NotImplementedError
@abstractmethod
def __rfloordiv__(self, other):
"""The floor() of other/self."""
raise NotImplementedError
@abstractmethod
def __mod__(self, other):
"""self % other
See
https://mail.python.org/pipermail/python-3000/2006-May/001735.html
and consider using "self/other - trunc(self/other)"
instead if you're worried about round-off errors.
"""
raise NotImplementedError
@abstractmethod
def __rmod__(self, other):
"""other % self"""
raise NotImplementedError
@abstractmethod
def __lt__(self, other):
"""< on Reals defines a total ordering, except perhaps for NaN."""
raise NotImplementedError
@abstractmethod
def __le__(self, other):
raise NotImplementedError
# __gt__ and __ge__ are automatically done by reversing the arguments.
# (But __le__ is not computed as the opposite of __gt__!)
# Concrete implementations of Complex abstract methods.
# Subclasses may override these, but don't have to.
def __complex__(self):
return complex(float(self))
@property
def real(self):
return +self
@property
def imag(self):
return 0
def conjugate(self):
"""Conjugate is a no-op for Reals."""
return +self
We should clean up Demo/classes/Rat.py and promote it into
rational.py in the standard library. Then it will implement the
Rational ABC.
class Rational(Real, Exact):
""".numerator and .denominator should be in lowest terms."""
@abstractproperty
def numerator(self):
raise NotImplementedError
@abstractproperty
def denominator(self):
raise NotImplementedError
# Concrete implementation of Real's conversion to float.
# (This invokes Integer.__div__().)
def __float__(self):
return self.numerator / self.denominator
And finally integers:
class Integral(Rational):
"""Integral adds a conversion to int and the bit-string operations."""
@abstractmethod
def __int__(self):
raise NotImplementedError
def __index__(self):
"""__index__() exists because float has __int__()."""
return int(self)
def __lshift__(self, other):
return int(self) << int(other)
def __rlshift__(self, other):
return int(other) << int(self)
def __rshift__(self, other):
return int(self) >> int(other)
def __rrshift__(self, other):
return int(other) >> int(self)
def __and__(self, other):
return int(self) & int(other)
def __rand__(self, other):
return int(other) & int(self)
def __xor__(self, other):
return int(self) ^ int(other)
def __rxor__(self, other):
return int(other) ^ int(self)
def __or__(self, other):
return int(self) | int(other)
def __ror__(self, other):
return int(other) | int(self)
def __invert__(self):
return ~int(self)
# Concrete implementations of Rational and Real abstract methods.
def __float__(self):
"""float(self) == float(int(self))"""
return float(int(self))
@property
def numerator(self):
"""Integers are their own numerators."""
return +self
@property
def denominator(self):
"""Integers have a denominator of 1."""
return 1
Changes to operations and __magic__ methods
To support more precise narrowing from float to int (and more
generally, from Real to Integral), we propose the following new
__magic__ methods, to be called from the corresponding library
functions. All of these return Integrals rather than Reals.
__trunc__(self), called from a new builtin trunc(x), which
returns the Integral closest to x between 0 and x.
__floor__(self), called from math.floor(x), which returns
the greatest Integral <= x.
__ceil__(self), called from math.ceil(x), which returns the
least Integral >= x.
__round__(self), called from round(x), which returns the
Integral closest to x, rounding half as the type chooses.
float will change in 3.0 to round half toward even. There is
also a 2-argument version, __round__(self, ndigits), called
from round(x, ndigits), which should return a Real.
In 2.6, math.floor, math.ceil, and round will continue to
return floats.
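A small sketch of the intended relationships on a plain float (shown here via
the math module's functions, which behave the same way as the proposed
builtin trunc()):
import math

x = -3.7
math.trunc(x)   # -3    the Integral closest to x between 0 and x
math.floor(x)   # -4    the greatest Integral <= x
math.ceil(x)    # -3    the least Integral >= x
round(x)        # -4    the closest Integral; float rounds halves toward even
round(2.5)      # 2     half-to-even rounding
round(x, 1)     # -3.7  the 2-argument form returns a Real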
The int() conversion implemented by float is equivalent to
trunc(). In general, the int() conversion should try
__int__() first and if it is not found, try __trunc__().
complex.__{divmod,mod,floordiv,int,float}__ also go away. It would
be nice to provide a helpful error message for confused porters, but
not appearing in help(complex) is more important.
Notes for type implementors
Implementors should be careful to make equal numbers equal and
hash them to the same values. This may be subtle if there are two
different extensions of the real numbers. For example, a complex type
could reasonably implement hash() as follows:
def __hash__(self):
return hash(complex(self))
but should be careful of any values that fall outside of the built-in
complex’s range or precision.
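A quick illustration of the requirement with the built-in types (CPython
maintains these invariants; the fractions module is assumed to be available):
from fractions import Fraction

assert Fraction(2) == 2 == 2.0 == 2 + 0j
assert hash(Fraction(2)) == hash(2) == hash(2.0) == hash(2 + 0j)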
Adding More Numeric ABCs
There are, of course, more possible ABCs for numbers, and this would
be a poor hierarchy if it precluded the possibility of adding
those. You can add MyFoo between Complex and Real with:
class MyFoo(Complex): ...
MyFoo.register(Real)
Implementing the arithmetic operations
We want to implement the arithmetic operations so that mixed-mode
operations either call an implementation whose author knew about the
types of both arguments, or convert both to the nearest built in type
and do the operation there. For subtypes of Integral, this means that
__add__ and __radd__ should be defined as:
class MyIntegral(Integral):
def __add__(self, other):
if isinstance(other, MyIntegral):
return do_my_adding_stuff(self, other)
elif isinstance(other, OtherTypeIKnowAbout):
return do_my_other_adding_stuff(self, other)
else:
return NotImplemented
def __radd__(self, other):
if isinstance(other, MyIntegral):
return do_my_adding_stuff(other, self)
elif isinstance(other, OtherTypeIKnowAbout):
return do_my_other_adding_stuff(other, self)
elif isinstance(other, Integral):
return int(other) + int(self)
elif isinstance(other, Real):
return float(other) + float(self)
elif isinstance(other, Complex):
return complex(other) + complex(self)
else:
return NotImplemented
There are 5 different cases for a mixed-type operation on subclasses
of Complex. I’ll refer to all of the above code that doesn’t refer to
MyIntegral and OtherTypeIKnowAbout as “boilerplate”. a will be an
instance of A, which is a subtype of Complex (a : A <:
Complex), and b : B <: Complex. I’ll consider a + b:
If A defines an __add__ which accepts b, all is well.
If A falls back to the boilerplate code, and it were to return
a value from __add__, we’d miss the possibility that B defines
a more intelligent __radd__, so the boilerplate should return
NotImplemented from __add__. (Or A may not implement __add__ at
all.)
Then B’s __radd__ gets a chance. If it accepts a, all is well.
If it falls back to the boilerplate, there are no more possible
methods to try, so this is where the default implementation
should live.
If B <: A, Python tries B.__radd__ before A.__add__. This is
ok, because it was implemented with knowledge of A, so it can
handle those instances before delegating to Complex.
If A<:Complex and B<:Real without sharing any other knowledge,
then the appropriate shared operation is the one involving the built
in complex, and both __radd__s land there, so a+b == b+a.
Rejected Alternatives
The initial version of this PEP defined an algebraic hierarchy
inspired by a Haskell Numeric Prelude [2] including
MonoidUnderPlus, AdditiveGroup, Ring, and Field, and mentioned several
other possible algebraic types before getting to the numbers. We had
expected this to be useful to people using vectors and matrices, but
the NumPy community really wasn’t interested, and we ran into the
issue that even if x is an instance of X <: MonoidUnderPlus
and y is an instance of Y <: MonoidUnderPlus, x + y may
still not make sense.
Then we gave the numbers a much more branching structure to include
things like the Gaussian Integers and Z/nZ, which could be Complex but
wouldn’t necessarily support things like division. The community
decided that this was too much complication for Python, so I’ve now
scaled back the proposal to resemble the Scheme numeric tower much
more closely.
The Decimal Type
After consultation with its authors it has been decided that the
Decimal type should not at this time be made part of the numeric
tower.
References
[1]
Possible Python 3K Class Tree?, wiki page by Bill Janssen
(http://wiki.python.org/moin/AbstractBaseClasses)
[2]
NumericPrelude: An experimental alternative hierarchy
of numeric type classes
(https://archives.haskell.org/code.haskell.org/numeric-prelude/docs/html/index.html)
[3]
The Scheme numerical tower
(https://groups.csail.mit.edu/mac/ftpdir/scheme-reports/r5rs-html/r5rs_8.html#SEC50)
Acknowledgements
Thanks to Neal Norwitz for encouraging me to write this PEP in the
first place, to Travis Oliphant for pointing out that the numpy people
didn’t really care about the algebraic concepts, to Alan Isaac for
reminding me that Scheme had already done this, and to Guido van
Rossum and lots of other people on the mailing list for refining the
concept.
Copyright
This document has been placed in the public domain.
| Final | PEP 3141 – A Type Hierarchy for Numbers | Standards Track | This proposal defines a hierarchy of Abstract Base Classes (ABCs) (PEP
3119) to represent number-like classes. It proposes a hierarchy of
Number :> Complex :> Real :> Rational :> Integral where A :> B
means “A is a supertype of B”. The hierarchy is inspired by Scheme’s
numeric tower [3]. |
PEP 3142 – Add a “while” clause to generator expressions
Author:
Gerald Britton <gerald.britton at gmail.com>
Status:
Rejected
Type:
Standards Track
Created:
12-Jan-2009
Python-Version:
3.0
Post-History:
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
Acknowledgements
Copyright
Abstract
This PEP proposes an enhancement to generator expressions, adding a
“while” clause to complement the existing “if” clause.
Rationale
A generator expression (PEP 289) is a concise method to serve
dynamically-generated objects to list comprehensions (PEP 202).
Current generator expressions allow for an “if” clause to filter
the objects that are returned to those meeting some set of
criteria. However, since the “if” clause is evaluated for every
object that may be returned, in some cases it is possible that all
objects would be rejected after a certain point. For example:
g = (n for n in range(100) if n*n < 50)
which is equivalent to using a generator function
(PEP 255):
def __gen(exp):
for n in exp:
if n*n < 50:
yield n
g = __gen(iter(range(100)))
would yield 0, 1, 2, 3, 4, 5, 6 and 7, but would also consider
the numbers from 8 to 99 and reject them all since n*n >= 50 for
numbers in that range. Allowing for a “while” clause would allow
the redundant tests to be short-circuited:
g = (n for n in range(100) while n*n < 50)
would also yield 0, 1, 2, 3, 4, 5, 6 and 7, but would stop at 8
since the condition (n*n < 50) is no longer true. This would be
equivalent to the generator function:
def __gen(exp):
for n in exp:
if n*n < 50:
yield n
else:
break
g = __gen(iter(range(100)))
Currently, in order to achieve the same result, one would need to
either write a generator function such as the one above or use the
takewhile function from itertools:
from itertools import takewhile
g = takewhile(lambda n: n*n < 50, range(100))
The takewhile code achieves the same result as the proposed syntax,
albeit in a longer (some would say “less-elegant”) fashion. Also,
the takewhile version requires an extra function call (the lambda
in the example above) with the associated performance penalty.
A simple test shows that:
for n in (n for n in range(100) if 1): pass
performs about 10% better than:
for n in takewhile(lambda n: 1, range(100)): pass
though they achieve similar results. (The first example uses a
generator; takewhile is an iterator). If similarly implemented,
a “while” clause should perform about the same as the “if” clause
does today.
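A rough way to reproduce such a comparison is shown below (a sketch using
timeit; the absolute numbers will vary by machine and Python version):
import timeit

genexp_time = timeit.timeit(
    'for n in (n for n in range(100) if 1): pass', number=100000)
takewhile_time = timeit.timeit(
    'for n in takewhile(lambda n: 1, range(100)): pass',
    setup='from itertools import takewhile', number=100000)
print(genexp_time, takewhile_time)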
The reader may ask if the “if” and “while” clauses should be
mutually exclusive. There are good examples that show that there
are times when both may be used to good advantage. For example:
p = (p for p in primes() if p > 100 while p < 1000)
should return prime numbers found between 100 and 1000, assuming
I have a primes() generator that yields prime numbers.
Adding a “while” clause to generator expressions maintains the
compact form while adding a useful facility for short-circuiting
the expression.
Acknowledgements
Raymond Hettinger first proposed the concept of generator
expressions in January 2002.
Copyright
This document has been placed in the public domain.
| Rejected | PEP 3142 – Add a “while” clause to generator expressions | Standards Track | This PEP proposes an enhancement to generator expressions, adding a
“while” clause to complement the existing “if” clause. |
PEP 3143 – Standard daemon process library
Author:
Ben Finney <ben+python at benfinney.id.au>
Status:
Deferred
Type:
Standards Track
Created:
26-Jan-2009
Python-Version:
3.x
Post-History:
Table of Contents
Abstract
PEP Deferral
Specification
Example usage
Interface
DaemonContext objects
Motivation
Rationale
Correct daemon behaviour
A daemon is not a service
Reference Implementation
Other daemon implementations
References
Copyright
Abstract
Writing a program to become a well-behaved Unix daemon is somewhat
complex and tricky to get right, yet the steps are largely similar for
any daemon regardless of what else the program may need to do.
This PEP introduces a package to the Python standard library that
provides a simple interface to the task of becoming a daemon process.
PEP Deferral
Further exploration of the concepts covered in this PEP has been deferred
for lack of a current champion interested in promoting the goals of the PEP
and collecting and incorporating feedback, and with sufficient available
time to do so effectively.
Specification
Example usage
Simple example of direct DaemonContext usage:
import daemon
from spam import do_main_program
with daemon.DaemonContext():
do_main_program()
More complex example usage:
import os
import grp
import signal
import daemon
import lockfile
from spam import (
initial_program_setup,
do_main_program,
program_cleanup,
reload_program_config,
)
context = daemon.DaemonContext(
working_directory='/var/lib/foo',
umask=0o002,
pidfile=lockfile.FileLock('/var/run/spam.pid'),
)
context.signal_map = {
signal.SIGTERM: program_cleanup,
signal.SIGHUP: 'terminate',
signal.SIGUSR1: reload_program_config,
}
mail_gid = grp.getgrnam('mail').gr_gid
context.gid = mail_gid
important_file = open('spam.data', 'w')
interesting_file = open('eggs.data', 'w')
context.files_preserve = [important_file, interesting_file]
initial_program_setup()
with context:
do_main_program()
Interface
A new package, daemon, is added to the standard library.
A class, DaemonContext, is defined to represent the settings and
process context for the program running as a daemon process.
DaemonContext objects
A DaemonContext instance represents the behaviour settings and
process context for the program when it becomes a daemon. The
behaviour and environment is customised by setting options on the
instance, before calling the open method.
Each option can be passed as a keyword argument to the DaemonContext
constructor, or subsequently altered by assigning to an attribute on
the instance at any time prior to calling open. That is, for
options named wibble and wubble, the following invocation:
foo = daemon.DaemonContext(wibble=bar, wubble=baz)
foo.open()
is equivalent to:
foo = daemon.DaemonContext()
foo.wibble = bar
foo.wubble = baz
foo.open()
The following options are defined.
files_preserve
Default:
None
List of files that should not be closed when starting the
daemon. If None, all open file descriptors will be closed.
Elements of the list are file descriptors (as returned by a file
object’s fileno() method) or Python file objects. Each
specifies a file that is not to be closed during daemon start.
chroot_directory
Default:
None
Full path to a directory to set as the effective root directory of
the process. If None, specifies that the root directory is not
to be changed.
working_directory
Default:
'/'
Full path of the working directory to which the process should
change on daemon start.
Since a filesystem cannot be unmounted if a process has its
current working directory on that filesystem, this should either
be left at default or set to a directory that is a sensible “home
directory” for the daemon while it is running.
umask
Default:
0
File access creation mask (“umask”) to set for the process on
daemon start.
Since a process inherits its umask from its parent process,
starting the daemon will reset the umask to this value so that
files are created by the daemon with access modes as it expects.
pidfile
Default:
None
Context manager for a PID lock file. When the daemon context opens
and closes, it enters and exits the pidfile context manager.
detach_process
Default:
None
If True, detach the process context when opening the daemon
context; if False, do not detach.
If unspecified (None) during initialisation of the instance,
this will be set to True by default, and False only if
detaching the process is determined to be redundant; for example,
in the case when the process was started by init, by initd, or
by inetd.
signal_map
Default:
system-dependent
Mapping from operating system signals to callback actions.
The mapping is used when the daemon context opens, and determines
the action for each signal’s signal handler:
A value of None will ignore the signal (by setting the
signal action to signal.SIG_IGN).
A string value will be used as the name of an attribute on the
DaemonContext instance. The attribute’s value will be used
as the action for the signal handler.
Any other value will be used as the action for the signal
handler.
The default value depends on which signals are defined on the
running system. Each item from the list below whose signal is
actually defined in the signal module will appear in the
default map:
signal.SIGTTIN: None
signal.SIGTTOU: None
signal.SIGTSTP: None
signal.SIGTERM: 'terminate'
Depending on how the program will interact with its child
processes, it may need to specify a signal map that includes the
signal.SIGCHLD signal (received when a child process exits).
See the specific operating system’s documentation for more detail
on how to determine what circumstances dictate the need for signal
handlers.
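For instance, a map combining all three kinds of values might look like this
(a sketch reusing names from the example near the top of the Specification):
import signal

context.signal_map = {
    signal.SIGTERM: program_cleanup,   # a callable is used directly as the handler action
    signal.SIGHUP: 'terminate',        # a string names an attribute on the DaemonContext instance
    signal.SIGTTIN: None,              # None ignores the signal (signal.SIG_IGN)
}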
uid
Default:
os.getuid()
gid
Default:
os.getgid()
The user ID (“UID”) value and group ID (“GID”) value to switch
the process to on daemon start.
The default values, the real UID and GID of the process, will
relinquish any effective privilege elevation inherited by the
process.
prevent_core
Default:
True
If true, prevents the generation of core files, in order to avoid
leaking sensitive information from daemons run as root.
stdin
Default:
None
stdout
Default:
None
stderr
Default:
None
Each of stdin, stdout, and stderr is a file-like object
which will be used as the new file for the standard I/O stream
sys.stdin, sys.stdout, and sys.stderr respectively. The file
should therefore be open, with a minimum of mode ‘r’ in the case
of stdin, and mode ‘w+’ in the case of stdout and stderr.
If the object has a fileno() method that returns a file
descriptor, the corresponding file will be excluded from being
closed during daemon start (that is, it will be treated as though
it were listed in files_preserve).
If None, the corresponding system stream is re-bound to the
file named by os.devnull.
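A minimal sketch of redirecting the daemon’s output streams to log files (the
file names are hypothetical):
import daemon

out_log = open('/var/log/spam.out', 'w+')
err_log = open('/var/log/spam.err', 'w+')

context = daemon.DaemonContext(stdout=out_log, stderr=err_log)
# stdin is left as None, so it is re-bound to os.devnull; out_log and
# err_log have fileno() methods, so they are preserved automatically.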
The following methods are defined.
open()
Return:
None
Open the daemon context, turning the current program into a daemon
process. This performs the following steps:
If this instance’s is_open property is true, return
immediately. This makes it safe to call open multiple times on
an instance.
If the prevent_core attribute is true, set the resource limits
for the process to prevent any core dump from the process.
If the chroot_directory attribute is not None, set the
effective root directory of the process to that directory (via
os.chroot). This allows running the daemon process inside a “chroot gaol”
as a means of limiting the system’s exposure to rogue behaviour
by the process. Note that the specified directory needs to
already be set up for this purpose.
Set the process UID and GID to the uid and gid attribute
values.
Close all open file descriptors. This excludes those listed in
the files_preserve attribute, and those that correspond to the
stdin, stdout, or stderr attributes.
Change current working directory to the path specified by the
working_directory attribute.
Reset the file access creation mask to the value specified by
the umask attribute.
If the detach_process option is true, detach the current
process into its own process group, and disassociate from any
controlling terminal.
Set signal handlers as specified by the signal_map attribute.
If any of the attributes stdin, stdout, stderr are not
None, bind the system streams sys.stdin, sys.stdout,
and/or sys.stderr to the files represented by the
corresponding attributes. Where the attribute has a file
descriptor, the descriptor is duplicated (instead of re-binding
the name).
If the pidfile attribute is not None, enter its context
manager.
Mark this instance as open (for the purpose of future open and
close calls).
Register the close method to be called during Python’s exit
processing.
When the function returns, the running program is a daemon
process.
close()
Return:
None
Close the daemon context. This performs the following steps:
If this instance’s is_open property is false, return
immediately. This makes it safe to call close multiple times
on an instance.
If the pidfile attribute is not None, exit its context
manager.
Mark this instance as closed (for the purpose of future open
and close calls).
is_open
Return:
True if the instance is open, False otherwise.
This property exposes the state indicating whether the instance is
currently open. It is True if the instance’s open method has
been called and the close method has not subsequently been
called.
terminate(signal_number, stack_frame)
Return:
None
Signal handler for the signal.SIGTERM signal. Performs the
following step:
Raise a SystemExit exception explaining the signal.
The class also implements the context manager protocol via
__enter__ and __exit__ methods.
__enter__()
Return:
The DaemonContext instance
Call the instance’s open() method, then return the instance.
__exit__(exc_type, exc_value, exc_traceback)
Return:
True or False as defined by the context manager
protocol
Call the instance’s close() method, then return True if the
exception was handled or False if it was not.
Motivation
The majority of programs written to be Unix daemons either implement
behaviour very similar to that in the specification, or are
poorly-behaved daemons when judged against the correct daemon behaviour.
Since these steps should be much the same in most implementations but
are very particular and easy to omit or implement incorrectly, they
are a prime target for a standard well-tested implementation in the
standard library.
Rationale
Correct daemon behaviour
According to Stevens in [stevens] §2.6, a program should perform the
following steps to become a Unix daemon process.
Close all open file descriptors.
Change current working directory.
Reset the file access creation mask.
Run in the background.
Disassociate from process group.
Ignore terminal I/O signals.
Disassociate from control terminal.
Don’t reacquire a control terminal.
Correctly handle the following circumstances:
Started by System V init process.
Daemon termination by SIGTERM signal.
Children generate SIGCLD signal.
The daemon tool [slack-daemon] lists (in its summary of features)
behaviour that should be performed when turning a program into a
well-behaved Unix daemon process. It differs from this PEP’s intent in
that it invokes a separate program as a daemon process. The
following features are appropriate for a daemon that starts itself
once the program is already running:
Sets up the correct process context for a daemon.
Behaves sensibly when started by initd(8) or inetd(8).
Revokes any suid or sgid privileges to reduce security risks in case
the daemon is incorrectly installed with special privileges.
Prevents the generation of core files to prevent leaking sensitive
information from daemons run as root (optional).
Names the daemon by creating and locking a PID file to guarantee
that only one daemon with the given name can execute at any given
time (optional).
Sets the user and group under which to run the daemon (optional,
root only).
Creates a chroot gaol (optional, root only).
Captures the daemon’s stdout and stderr and directs them to syslog
(optional).
A daemon is not a service
This PEP addresses only Unix-style daemons, for which the above
correct behaviour is relevant, as opposed to comparable behaviours on
other operating systems.
There is a related concept in many systems, called a “service”. A
service differs from the model in this PEP, in that rather than having
the current program continue to run as a daemon process, a service
starts an additional process to run in the background, and the
current process communicates with that additional process via some
defined channels.
The Unix-style daemon model in this PEP can be used, among other
things, to implement the background-process part of a service; but
this PEP does not address the other aspects of setting up and managing
a service.
Reference Implementation
The python-daemon package [python-daemon].
Other daemon implementations
Prior to this PEP, several existing third-party Python libraries or
tools implemented some of this PEP’s correct daemon behaviour.
The reference implementation is a fairly direct successor from the
following implementations:
Many good ideas were contributed by the community to Python cookbook
recipes #66012 [cookbook-66012] and #278731 [cookbook-278731].
The bda.daemon library [bda.daemon] is an implementation of
[cookbook-66012]. It is the predecessor of [python-daemon].
Other Python daemon implementations that differ from this PEP:
The zdaemon tool [zdaemon] was written for the Zope project. Like
[slack-daemon], it differs from this specification because it is
used to run another program as a daemon process.
The Python library daemon [clapper-daemon] is (according to its
homepage) no longer maintained. As of version 1.0.1, it implements
the basic steps from [stevens].
The daemonize library [seutter-daemonize] also implements the
basic steps from [stevens].
Ray Burr’s daemon.py module [burr-daemon] provides the [stevens]
procedure as well as PID file handling and redirection of output to
syslog.
Twisted [twisted] includes, perhaps unsurprisingly, an
implementation of a process daemonisation API that is integrated
with the rest of the Twisted framework; it differs significantly
from the API in this PEP.
The Python initd library [dagitses-initd], which uses
[clapper-daemon], implements an equivalent of Unix initd(8) for
controlling a daemon process.
References
[stevens] (1, 2, 3, 4)
Unix Network Programming, W. Richard Stevens, 1994 Prentice
Hall.
[slack-daemon] (1, 2)
The (non-Python) “libslack” implementation of a daemon tool
http://www.libslack.org/daemon/ by “raf” <[email protected]>.
[python-daemon] (1, 2)
The python-daemon library
http://pypi.python.org/pypi/python-daemon/ by Ben Finney et
al.
[cookbook-66012] (1, 2)
Python Cookbook recipe 66012, “Fork a daemon process on Unix”
http://code.activestate.com/recipes/66012/.
[cookbook-278731]
Python Cookbook recipe 278731, “Creating a daemon the Python way”
http://code.activestate.com/recipes/278731/.
[bda.daemon]
The bda.daemon library
http://pypi.python.org/pypi/bda.daemon/ by Robert
Niederreiter et al.
[zdaemon]
The zdaemon tool http://pypi.python.org/pypi/zdaemon/ by
Guido van Rossum et al.
[clapper-daemon] (1, 2)
The daemon library http://pypi.python.org/pypi/daemon/ by
Brian Clapper.
[seutter-daemonize]
The daemonize library http://daemonize.sourceforge.net/ by
Jerry Seutter.
[burr-daemon]
The daemon.py module
http://www.nightmare.com/~ryb/code/daemon.py by Ray Burr.
[twisted]
The Twisted application framework
http://pypi.python.org/pypi/Twisted/ by Glyph Lefkowitz et
al.
[dagitses-initd]
The Python initd library http://pypi.python.org/pypi/initd/
by Michael Andreas Dagitses.
Copyright
This work is hereby placed in the public domain. To the extent that
placing a work in the public domain is not legally possible, the
copyright holder hereby grants to all recipients of this work all
rights and freedoms that would otherwise be restricted by copyright.
| Deferred | PEP 3143 – Standard daemon process library | Standards Track | Writing a program to become a well-behaved Unix daemon is somewhat
complex and tricky to get right, yet the steps are largely similar for
any daemon regardless of what else the program may need to do. |
PEP 3145 – Asynchronous I/O For subprocess.Popen
Author:
Eric Pruitt, Charles R. McCreary, Josiah Carlson
Status:
Withdrawn
Type:
Standards Track
Created:
04-Aug-2009
Python-Version:
3.2
Post-History:
Table of Contents
Abstract
PEP Deferral
PEP Withdrawal
Motivation
Rationale
Reference Implementation
References
Copyright
Abstract
In its present form, the subprocess.Popen implementation is prone to
dead-locking and blocking of the parent Python script while waiting on data
from the child process. This PEP proposes to make
subprocess.Popen more asynchronous to help alleviate these
problems.
PEP Deferral
Further exploration of the concepts covered in this PEP has been deferred
at least until after PEP 3156 has been resolved.
PEP Withdrawal
This can be dealt with in the bug tracker. A specific proposal is
attached to [11].
Motivation
A search for “python asynchronous subprocess” will turn up numerous
accounts of people wanting to execute a child process and communicate with
it from time to time reading only the data that is available instead of
blocking to wait for the program to produce data [1] [2] [3]. The current
behavior of the subprocess module is that when a user sends or receives
data via the stdin, stderr and stdout file objects, deadlocks are common
and documented [4] [5]. While communicate can be used to alleviate some of
the buffering issues, it will still cause the parent process to block while
attempting to read data when none is available to be read from the child
process.
Rationale
There is a documented need for asynchronous, non-blocking functionality in
subprocess.Popen [6] [7] [2] [3]. Inclusion of the code would improve the
utility of the Python standard library that can be used on Unix based and
Windows builds of Python. Practically every I/O object in Python has a
file-like wrapper of some sort. Sockets already act as such and for
strings there is StringIO. Popen can be made to act like a file by simply
using the methods attached to the subprocess.Popen.stderr, stdout and
stdin file-like objects. But when using the read and write methods of
those options, you do not have the benefit of asynchronous I/O. In the
proposed solution the wrapper wraps the asynchronous methods to mimic a
file object.
Reference Implementation
I have been maintaining a Google Code repository that contains all of my
changes including tests and documentation [9] as well as blog detailing
the problems I have come across in the development process [10].
I have been working on implementing non-blocking asynchronous I/O in the
subprocess module as well as a wrapper class for subprocess.Popen
that makes it so that an executed process can take the place of a file by
duplicating all of the methods and attributes that file objects have.
There are two base functions that have been added to the subprocess.Popen
class: Popen.send and Popen._recv, each with two separate implementations,
one for Windows and one for Unix-based systems. The Windows
implementation uses ctypes to access the functions needed to control pipes
in the kernel 32 DLL in an asynchronous manner. On Unix based systems,
the Python interface for file control serves the same purpose. The
different implementations of Popen.send and Popen._recv have identical
arguments to make code that uses these functions work across multiple
platforms.
When calling the Popen._recv function, it requires the pipe name be
passed as an argument, so there exists the Popen.recv function, which
selects stdout as the pipe for Popen._recv by default. Popen.recv_err
selects stderr as the pipe by default. Popen.recv and Popen.recv_err
are much easier to read and understand than Popen._recv('stdout' ...) and
Popen._recv('stderr' ...) respectively.
Since the Popen._recv function does not wait on data to be produced
before returning a value, it may return empty bytes. Popen.asyncread
handles this issue by returning all data read over a given time
interval.
The ProcessIOWrapper class uses the asyncread and asyncwrite functions to
allow a process to act like a file so that there are no blocking issues
that can arise from using the stdout and stdin file objects produced from
a subprocess.Popen call.
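A hypothetical usage sketch of the proposed interface (method names are taken
from the description above; the exact signatures are assumptions, and none of
this is part of the standard library since the PEP was withdrawn):
import subprocess

p = subprocess.Popen(['cat'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.send(b'hello\n')     # write without blocking on the child process
chunk = p.recv()       # read whatever stdout data is available (may be empty)
errors = p.recv_err()  # the same, but reading from stderr
data = p.asyncread()   # gather data over a time interval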
References
[1]
[ python-Feature Requests-1191964 ] asynchronous Subprocess
https://mail.python.org/pipermail/python-bugs-list/2006-December/036524.html
[2] (1, 2)
Daily Life in an Ivory Basement : /feb-07/problems-with-subprocess
http://ivory.idyll.org/blog/problems-with-subprocess.html
[3] (1, 2)
How can I run an external command asynchronously from Python? - Stack
Overflow
https://stackoverflow.com/q/636561
[4]
18.1. subprocess - Subprocess management - Python v2.6.2 documentation
https://docs.python.org/2.6/library/subprocess.html#subprocess.Popen.wait
[5]
18.1. subprocess - Subprocess management - Python v2.6.2 documentation
https://docs.python.org/2.6/library/subprocess.html#subprocess.Popen.kill
[6]
Issue 1191964: asynchronous Subprocess - Python tracker
https://github.com/python/cpython/issues/41922
[7]
Module to allow Asynchronous subprocess use on Windows and Posix
platforms - ActiveState Code
https://code.activestate.com/recipes/440554/
[8] subprocess.rst - subprocdev - Project Hosting on Google Code
https://web.archive.org/web/20130306074135/http://code.google.com/p/subprocdev/source/browse/doc/subprocess.rst?spec=svn2c925e935cad0166d5da85e37c742d8e7f609de5&r=2c925e935cad0166d5da85e37c742d8e7f609de5
[9]
subprocdev - Project Hosting on Google Code
https://code.google.com/archive/p/subprocdev/
[10]
Python Subprocess Dev
https://subdev.blogspot.com/
[11]
https://github.com/python/cpython/issues/63023 – Idle: use pipes instead of
sockets to talk with user subprocess
Copyright
This P.E.P. is licensed under the Open Publication License;
http://www.opencontent.org/openpub/.
| Withdrawn | PEP 3145 – Asynchronous I/O For subprocess.Popen | Standards Track | In its present form, the subprocess.Popen implementation is prone to
dead-locking and blocking of the parent Python script while waiting on data
from the child process. This PEP proposes to make
subprocess.Popen more asynchronous to help alleviate these
problems. |
PEP 3146 – Merging Unladen Swallow into CPython
Author:
Collin Winter <collinwinter at google.com>,
Jeffrey Yasskin <jyasskin at google.com>,
Reid Kleckner <rnk at mit.edu>
Status:
Withdrawn
Type:
Standards Track
Created:
01-Jan-2010
Python-Version:
3.3
Post-History:
Table of Contents
PEP Withdrawal
Abstract
Rationale, Implementation
Alternatives
Performance
Benchmarks
Performance vs CPython 2.6.4
Memory Usage
Start-up Time
Binary Size
Performance Retrospective
Correctness and Compatibility
Known Incompatibilities
Platform Support
Impact on CPython Development
Experimenting with Changes to Python or CPython Bytecode
Debugging
Profiling
Addition of C++ to CPython
Managing LLVM Releases, C++ API Changes
Building CPython
Proposed Merge Plan
Contingency Plans
Future Work
Unladen Swallow Community
Licensing
References
Copyright
PEP Withdrawal
With Unladen Swallow going the way of the Norwegian Blue [1]
[2], this PEP has been deemed to have been withdrawn.
Abstract
This PEP proposes the merger of the Unladen Swallow project [3] into
CPython’s source tree. Unladen Swallow is an open-source branch of CPython
focused on performance. Unladen Swallow is source-compatible with valid Python
2.6.4 applications and C extension modules.
Unladen Swallow adds a just-in-time (JIT) compiler to CPython, allowing for the
compilation of selected Python code to optimized machine code. Beyond classical
static compiler optimizations, Unladen Swallow’s JIT compiler takes advantage of
data collected at runtime to make checked assumptions about code behaviour,
allowing the production of faster machine code.
This PEP proposes to integrate Unladen Swallow into CPython’s development tree
in a separate py3k-jit branch, targeted for eventual merger with the main
py3k branch. While Unladen Swallow is by no means finished or perfect, we
feel that Unladen Swallow has reached sufficient maturity to warrant
incorporation into CPython’s roadmap. We have sought to create a stable platform
that the wider CPython development team can build upon, a platform that will
yield increasing performance for years to come.
This PEP will detail Unladen Swallow’s implementation and how it differs from
CPython 2.6.4; the benchmarks used to measure performance; the tools used to
ensure correctness and compatibility; the impact on CPython’s current platform
support; and the impact on the CPython core development process. The PEP
concludes with a proposed merger plan and brief notes on possible directions
for future work.
We seek the following from the BDFL:
Approval for the overall concept of adding a just-in-time compiler to CPython,
following the design laid out below.
Permission to continue working on the just-in-time compiler in the CPython
source tree.
Permission to eventually merge the just-in-time compiler into the py3k
branch once all blocking issues [31] have been addressed.
A pony.
Rationale, Implementation
Many companies and individuals would like Python to be faster, to enable its
use in more projects. Google is one such company.
Unladen Swallow is a Google-sponsored branch of CPython, initiated to improve
the performance of Google’s numerous Python libraries, tools and applications.
To make the adoption of Unladen Swallow as easy as possible, the project
initially aimed at four goals:
A performance improvement of 5x over the baseline of CPython 2.6.4 for
single-threaded code.
100% source compatibility with valid CPython 2.6 applications.
100% source compatibility with valid CPython 2.6 C extension modules.
Design for eventual merger back into CPython.
We chose 2.6.4 as our baseline because Google uses CPython 2.4 internally, and
jumping directly from CPython 2.4 to CPython 3.x was considered infeasible.
To achieve the desired performance, Unladen Swallow has implemented a
just-in-time (JIT) compiler [51] in the tradition of Urs Hoelzle’s work on
Self [52], gathering feedback at runtime and using that to inform
compile-time optimizations. This is similar to the approach taken by the current
breed of JavaScript engines [59], [60]; most Java virtual
machines [64]; Rubinius [61], MacRuby [63], and other Ruby
implementations; Psyco [65]; and others.
We explicitly reject any suggestion that our ideas are original. We have sought
to reuse the published work of other researchers wherever possible. If we have
done any original work, it is by accident. We have tried, as much as possible,
to take good ideas from all corners of the academic and industrial community. A
partial list of the research papers that have informed Unladen Swallow is
available on the Unladen Swallow wiki [54].
The key observation about optimizing dynamic languages is that they are only
dynamic in theory; in practice, each individual function or snippet of code is
relatively static, using a stable set of types and child functions. The current
CPython bytecode interpreter assumes the worst about the code it is running,
that at any moment the user might override the len() function or pass a
never-before-seen type into a function. In practice this never happens, but user
code pays for that support. Unladen Swallow takes advantage of the relatively
static nature of user code to improve performance.
At a high level, the Unladen Swallow JIT compiler works by translating a
function’s CPython bytecode to platform-specific machine code, using data
collected at runtime, as well as classical compiler optimizations, to improve
the quality of the generated machine code. Because we only want to spend
resources compiling Python code that will actually benefit the runtime of the
program, an online heuristic is used to assess how hot a given function is. Once
the hotness value for a function crosses a given threshold, it is selected for
compilation and optimization. Until a function is judged hot, however, it runs
in the standard CPython eval loop, which in Unladen Swallow has been
instrumented to record interesting data about each bytecode executed. This
runtime data is used to reduce the flexibility of the generated machine code,
allowing us to optimize for the common case. For example, we collect data on
Whether a branch was taken/not taken. If a branch is never taken, we will not
compile it to machine code.
Types used by operators. If we find that a + b is only ever adding
integers, the generated machine code for that snippet will not support adding
floats.
Functions called at each callsite. If we find that a particular foo()
callsite is always calling the same foo function, we can optimize the
call or inline it away.
Refer to [55] for a complete list of data points gathered and how
they are used.
However, if by chance the historically-untaken branch is now taken, or some
integer-optimized a + b snippet receives two strings, we must support this.
We cannot change Python semantics. Each of these sections of optimized machine
code is preceded by a guard, which checks whether the simplifying
assumptions we made when optimizing still hold. If the assumptions are still
valid, we run the optimized machine code; if they are not, we revert back to
the interpreter and pick up where we left off.
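The following toy sketch (pure Python, not Unladen Swallow code; the threshold
and the int-only specialization are invented for illustration) shows the
hotness-counter-plus-guard idea in miniature:

HOT_THRESHOLD = 1000          # illustrative only; not Unladen Swallow's value

def make_adder():
    calls = 0
    fast_path = None

    def generic_add(a, b):    # the "interpreter" path: handles any types
        return a + b

    def add(a, b):
        nonlocal calls, fast_path
        calls += 1
        if fast_path is None and calls >= HOT_THRESHOLD:
            # Runtime feedback says only ints have been seen so far, so
            # "compile" an int-only fast path.
            def int_add(x, y):
                # Guard: check that the simplifying assumption still holds;
                # if it does not, fall back to the generic path.
                if type(x) is int and type(y) is int:
                    return x + y
                return generic_add(x, y)
            fast_path = int_add
        return (fast_path or generic_add)(a, b)

    return add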
We have chosen to reuse a set of existing compiler libraries called LLVM
[4] for code generation and code optimization. This has saved our small
team from needing to understand and debug code generation on multiple machine
instruction sets and from needing to implement a large set of classical compiler
optimizations. The project would not have been possible without such code reuse.
We have found LLVM easy to modify and its community receptive to our suggestions
and modifications.
In somewhat more depth, Unladen Swallow’s JIT works by compiling CPython
bytecode to LLVM’s own intermediate representation (IR) [95], taking
into account any runtime data from the CPython eval loop. We then run a set of
LLVM’s built-in optimization passes, producing a smaller, optimized version of
the original LLVM IR. LLVM then lowers the IR to platform-specific machine code,
performing register allocation, instruction scheduling, and any necessary
relocations. This arrangement of the compilation pipeline allows the LLVM-based
JIT to be easily omitted from a compiled python binary by passing
--without-llvm to ./configure; various use cases for this flag are
discussed later.
For a complete detailing of how Unladen Swallow works, consult the Unladen
Swallow documentation [53], [55].
Unladen Swallow has focused on improving the performance of single-threaded,
pure-Python code. We have not made an effort to remove CPython’s global
interpreter lock (GIL); we feel this is separate from our work, and due to its
sensitivity, is best done in a mainline development branch. We considered
making GIL-removal a part of Unladen Swallow, but were concerned by the
possibility of introducing subtle bugs when porting our work from CPython 2.6
to 3.x.
A JIT compiler is an extremely versatile tool, and we have by no means
exhausted its full potential. We have tried to create a sufficiently flexible
framework that the wider CPython development community can build upon it for
years to come, extracting increased performance in each subsequent release.
Alternatives
There are a number of alternative strategies for improving Python performance
which we considered, but found unsatisfactory.
Cython, Shedskin: Cython [102] and Shedskin [103] are both
static compilers for Python. We view these as useful-but-limited workarounds
for CPython’s historically-poor performance. Shedskin does not support the
full Python standard library [104], while Cython
requires manual Cython-specific annotations for optimum performance.
Static compilers like these are useful for writing extension modules without
worrying about reference counting, but because they are static, ahead-of-time
compilers, they cannot optimize the full range of code under consideration by
a just-in-time compiler informed by runtime data.
IronPython: IronPython [107] is Python on Microsoft’s .Net
platform. It is not actively tested on Mono [108], meaning that it is
essentially Windows-only, making it unsuitable as a general CPython
replacement.
Jython: Jython [109] is a complete implementation of Python 2.5, but
is significantly slower than Unladen Swallow (3-5x on measured benchmarks) and
has no support for CPython extension modules [110], which would
make migration of large applications prohibitively expensive.
Psyco: Psyco [65] is a specializing JIT compiler for CPython,
implemented as an extension module. It primarily improves performance for
numerical code. Pros: exists; makes some code faster. Cons: 32-bit only, with
no plans for 64-bit support; supports x86 only; very difficult to maintain;
incompatible with SSE2 optimized code due to alignment issues.
PyPy: PyPy [66] has good performance on numerical code, but is slower
than Unladen Swallow on some workloads. Migration of large applications from
CPython to PyPy would be prohibitively expensive: PyPy’s JIT compiler supports
only 32-bit x86 code generation; important modules, such as MySQLdb and
pycrypto, do not build against PyPy; PyPy does not offer an embedding API,
much less the same API as CPython.
PyV8: PyV8 [111] is an alpha-stage experimental Python-to-JavaScript
compiler that runs on top of V8. PyV8 does not implement the whole Python
language, and has no support for CPython extension modules.
WPython: WPython [105] is a wordcode-based reimplementation of
CPython’s interpreter loop. While it provides a modest improvement to
interpreter performance [106], it is not an either-or
substitute for a just-in-time compiler. An interpreter will never be as fast
as optimized machine code. We view WPython and similar interpreter
enhancements as complementary to our work, rather than as competitors.
Performance
Benchmarks
Unladen Swallow has developed a fairly large suite of benchmarks, ranging from
synthetic microbenchmarks designed to test a single feature up through
whole-application macrobenchmarks. The inspiration for these benchmarks has come
variously from third-party contributors (in the case of the html5lib
benchmark), Google’s own internal workloads (slowspitfire, pickle,
unpickle), as well as tools and libraries in heavy use throughout the wider
Python community (django, 2to3, spambayes). These benchmarks are run
through a single interface called perf.py that takes care of collecting
memory usage information, graphing performance, and running statistics on the
benchmark results to ensure significance.
The full list of available benchmarks is available on the Unladen Swallow wiki
[43], including instructions on downloading and running the
benchmarks for yourself. All our benchmarks are open-source; none are
Google-proprietary. We believe this collection of benchmarks serves as a useful
tool to benchmark any complete Python implementation, and indeed, PyPy is
already using these benchmarks for their own performance testing
[81], [96]. We welcome this, and we seek
additional workloads for the benchmark suite from the Python community.
We have focused our efforts on collecting macrobenchmarks and benchmarks that
simulate real applications as well as possible, when running a whole application
is not feasible. Along a different axis, our benchmark collection originally
focused on the kinds of workloads seen by Google’s Python code (webapps, text
processing), though we have since expanded the collection to include workloads
Google cares nothing about. We have so far shied away from heavily numerical
workloads, since NumPy [80] already does an excellent job on such code and
so improving numerical performance was not an initial high priority for the
team; we have begun to incorporate such benchmarks into the collection
[97] and have started work on optimizing numerical Python code.
Beyond these benchmarks, there are also a variety of workloads we are explicitly
not interested in benchmarking. Unladen Swallow is focused on improving the
performance of pure-Python code, so the performance of extension modules like
NumPy is uninteresting since NumPy’s core routines are implemented in
C. Similarly, workloads that involve a lot of IO like GUIs, databases or
socket-heavy applications would, we feel, fail to accurately measure interpreter
or code generation optimizations. That said, there’s certainly room to improve
the performance of C-language extension modules in the standard library, and
as such, we have added benchmarks for the cPickle and re modules.
Performance vs CPython 2.6.4
The charts below compare the arithmetic mean of multiple benchmark iterations
for CPython 2.6.4 and Unladen Swallow. perf.py gathers more data than this,
and indeed, arithmetic mean is not the whole story; we reproduce only the mean
for the sake of conciseness. We include the t score from the Student’s
two-tailed T-test [44] at the 95% confidence interval to indicate
the significance of the result. Most benchmarks are run for 100 iterations,
though some longer-running whole-application benchmarks are run for fewer
iterations.
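For reference, the t score reported below can be read as a pooled two-sample
Student's t statistic; the following sketch illustrates the formula only and
is not perf.py's actual implementation:

import math

def t_score(sample_a, sample_b):
    """Pooled two-sample Student's t statistic for two sets of timings."""
    n1, n2 = len(sample_a), len(sample_b)
    mean1 = sum(sample_a) / float(n1)
    mean2 = sum(sample_b) / float(n2)
    var1 = sum((x - mean1) ** 2 for x in sample_a) / (n1 - 1)
    var2 = sum((x - mean2) ** 2 for x in sample_b) / (n2 - 1)
    pooled = ((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled * (1.0 / n1 + 1.0 / n2))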
A description of each of these benchmarks is available on the Unladen Swallow
wiki [43].
Command:
./perf.py -r -b default,apps ../a/python ../b/python
32-bit; gcc 4.0.3; Ubuntu Dapper; Intel Core2 Duo 6600 @ 2.4GHz; 2 cores; 4MB L2 cache; 4GB RAM
Benchmark      CPython 2.6.4   Unladen Swallow r988   Change         Significance    Timeline
2to3           25.13 s         24.87 s                1.01x faster   t=8.94          http://tinyurl.com/yamhrpg
django         1.08 s          0.80 s                 1.35x faster   t=315.59        http://tinyurl.com/y9mrn8s
html5lib       14.29 s         13.20 s                1.08x faster   t=2.17          http://tinyurl.com/y8tyslu
nbody          0.51 s          0.28 s                 1.84x faster   t=78.007        http://tinyurl.com/y989qhg
rietveld       0.75 s          0.55 s                 1.37x faster   Insignificant   http://tinyurl.com/ye7mqd3
slowpickle     0.75 s          0.55 s                 1.37x faster   t=20.78         http://tinyurl.com/ybrsfnd
slowspitfire   0.83 s          0.61 s                 1.36x faster   t=2124.66       http://tinyurl.com/yfknhaw
slowunpickle   0.33 s          0.26 s                 1.26x faster   t=15.12         http://tinyurl.com/yzlakoo
spambayes      0.31 s          0.34 s                 1.10x slower   Insignificant   http://tinyurl.com/yem62ub
64-bit; gcc 4.2.4; Ubuntu Hardy; AMD Opteron 8214 HE @ 2.2 GHz; 4 cores; 1MB L2 cache; 8GB RAM
Benchmark      CPython 2.6.4   Unladen Swallow r988   Change         Significance    Timeline
2to3           31.98 s         30.41 s                1.05x faster   t=8.35          http://tinyurl.com/ybcrl3b
django         1.22 s          0.94 s                 1.30x faster   t=106.68        http://tinyurl.com/ybwqll6
html5lib       18.97 s         17.79 s                1.06x faster   t=2.78          http://tinyurl.com/yzlyqvk
nbody          0.77 s          0.27 s                 2.86x faster   t=133.49        http://tinyurl.com/yeyqhbg
rietveld       0.74 s          0.80 s                 1.08x slower   t=-2.45         http://tinyurl.com/yzjc6ff
slowpickle     0.91 s          0.62 s                 1.48x faster   t=28.04         http://tinyurl.com/yf7en6k
slowspitfire   1.01 s          0.72 s                 1.40x faster   t=98.70         http://tinyurl.com/yc8pe2o
slowunpickle   0.51 s          0.34 s                 1.51x faster   t=32.65         http://tinyurl.com/yjufu4j
spambayes      0.43 s          0.45 s                 1.06x slower   Insignificant   http://tinyurl.com/yztbjfp
Many of these benchmarks take a hit under Unladen Swallow because the current
version blocks execution to compile Python functions down to machine code. This
leads to the behaviour seen in the timeline graphs for the html5lib and
rietveld benchmarks, for example, and slows down the overall performance of
2to3. We have an active development branch to fix this problem
([46], [47]), but working within
the strictures of CPython’s current threading system has complicated the process
and required far more care and time than originally anticipated. We view this
issue as critical to final merger into the py3k branch.
We have obviously not met our initial goal of a 5x performance improvement. A
performance retrospective follows, which addresses why we failed to meet our
initial performance goal. We maintain a list of yet-to-be-implemented
performance work [50].
Memory Usage
The following table shows maximum memory usage (in kilobytes) for each of
Unladen Swallow’s default benchmarks for both CPython 2.6.4 and Unladen Swallow
r988, as well as a timeline of memory usage across the lifetime of the
benchmark. We include tables for both 32- and 64-bit binaries. Memory usage was
measured on Linux 2.6 systems by summing the Private_ sections from the
kernel’s /proc/$pid/smaps pseudo-files [45].
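A minimal sketch of that measurement (Linux-only, assuming the standard smaps
field names; this is an illustration, not the exact code perf.py uses):

def private_kb(pid):
    """Sum the Private_Clean/Private_Dirty sizes, in kB, for one process."""
    total = 0
    with open("/proc/%d/smaps" % pid) as smaps:
        for line in smaps:
            if line.startswith("Private_"):
                # Lines look like: "Private_Dirty:        123 kB"
                total += int(line.split()[1])
    return total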
Command:
./perf.py -r --track_memory -b default,apps ../a/python ../b/python
32-bit
Benchmark      CPython 2.6.4   Unladen Swallow r988   Change   Timeline
2to3           26396 kb        46896 kb               1.77x    http://tinyurl.com/yhr2h4z
django         10028 kb        27740 kb               2.76x    http://tinyurl.com/yhan8vs
html5lib       150028 kb       173924 kb              1.15x    http://tinyurl.com/ybt44en
nbody          3020 kb         16036 kb               5.31x    http://tinyurl.com/ya8hltw
rietveld       15008 kb        46400 kb               3.09x    http://tinyurl.com/yhd5dra
slowpickle     4608 kb         16656 kb               3.61x    http://tinyurl.com/ybukyvo
slowspitfire   85776 kb        97620 kb               1.13x    http://tinyurl.com/y9vj35z
slowunpickle   3448 kb         13744 kb               3.98x    http://tinyurl.com/yexh4d5
spambayes      7352 kb         46480 kb               6.32x    http://tinyurl.com/yem62ub
64-bit
Benchmark      CPython 2.6.4   Unladen Swallow r988   Change   Timeline
2to3           51596 kb        82340 kb               1.59x    http://tinyurl.com/yljg6rs
django         16020 kb        38908 kb               2.43x    http://tinyurl.com/ylqsebh
html5lib       259232 kb       324968 kb              1.25x    http://tinyurl.com/yha6oee
nbody          4296 kb         23012 kb               5.35x    http://tinyurl.com/yztozza
rietveld       24140 kb        73960 kb               3.06x    http://tinyurl.com/ybg2nq7
slowpickle     4928 kb         23300 kb               4.73x    http://tinyurl.com/yk5tpbr
slowspitfire   133276 kb       148676 kb              1.11x    http://tinyurl.com/y8bz2xe
slowunpickle   4896 kb         16948 kb               3.46x    http://tinyurl.com/ygywwoc
spambayes      10728 kb        84992 kb               7.92x    http://tinyurl.com/yhjban5
The increased memory usage comes from a) LLVM code generation, analysis and
optimization libraries; b) native code; c) memory usage issues or leaks in
LLVM; d) data structures needed to optimize and generate machine code; e)
as-yet uncategorized other sources.
While we have made significant progress in reducing memory usage since the
initial naive JIT implementation [42], there is obviously more
to do. We believe that there are still memory savings to be made without
sacrificing performance. We have tended to focus on raw performance, and we
have not yet made a concerted push to reduce memory usage. We view reducing
memory usage as a blocking issue for final merger into the py3k branch. We
seek guidance from the community on an acceptable level of increased memory
usage.
Start-up Time
Statically linking LLVM’s code generation, analysis and optimization libraries
increases the time needed to start the Python binary. C++ static initializers
used by LLVM also increase start-up time, as does importing the collection of
pre-compiled C runtime routines we want to inline to Python code.
Results from Unladen Swallow’s startup benchmarks:
$ ./perf.py -r -b startup /tmp/cpy-26/bin/python /tmp/unladen/bin/python
### normal_startup ###
Min: 0.219186 -> 0.352075: 1.6063x slower
Avg: 0.227228 -> 0.364384: 1.6036x slower
Significant (t=-51.879098, a=0.95)
Stddev: 0.00762 -> 0.02532: 3.3227x larger
Timeline: http://tinyurl.com/yfe8z3r
### startup_nosite ###
Min: 0.105949 -> 0.264912: 2.5004x slower
Avg: 0.107574 -> 0.267505: 2.4867x slower
Significant (t=-703.557403, a=0.95)
Stddev: 0.00214 -> 0.00240: 1.1209x larger
Timeline: http://tinyurl.com/yajn8fa
### bzr_startup ###
Min: 0.067990 -> 0.097985: 1.4412x slower
Avg: 0.084322 -> 0.111348: 1.3205x slower
Significant (t=-37.432534, a=0.95)
Stddev: 0.00793 -> 0.00643: 1.2330x smaller
Timeline: http://tinyurl.com/ybdm537
### hg_startup ###
Min: 0.016997 -> 0.024997: 1.4707x slower
Avg: 0.026990 -> 0.036772: 1.3625x slower
Significant (t=-53.104502, a=0.95)
Stddev: 0.00406 -> 0.00417: 1.0273x larger
Timeline: http://tinyurl.com/ycout8m
bzr_startup and hg_startup measure how long it takes Bazaar and
Mercurial, respectively, to display their help screens. startup_nosite
runs python -S many times; usage of the -S option is rare, but we feel
this gives a good indication of where increased startup time is coming from.
Unladen Swallow has made headway toward optimizing startup time, but there is
still more work to do and further optimizations to implement. Improving start-up
time is a high-priority item [33] in Unladen Swallow’s
merger punchlist.
Binary Size
Statically linking LLVM’s code generation, analysis and optimization libraries
significantly increases the size of the python binary. The tables below
report stripped on-disk binary sizes; the binaries are stripped to better
correspond with the configurations used by system package managers. We feel this
is the most realistic measure of any change in binary size.
Binary size   CPython 2.6.4   CPython 3.1.1   Unladen Swallow r1041
32-bit        1.3M            1.4M            12M
64-bit        1.6M            1.6M            12M
The increased binary size is caused by statically linking LLVM’s code
generation, analysis and optimization libraries into the python binary.
This can be straightforwardly addressed by modifying LLVM to better support
shared linking and then using that, instead of the current static linking. For
the moment, though, static linking provides an accurate look at the cost of
linking against LLVM.
Even when statically linking, we believe there is still headroom to improve
on-disk binary size by narrowing Unladen Swallow’s dependencies on LLVM. This
issue is actively being addressed [32].
Performance Retrospective
Our initial goal for Unladen Swallow was a 5x performance improvement over
CPython 2.6. We did not hit that, nor, to put it bluntly, did we even come close. Why
did the project not hit that goal, and can an LLVM-based JIT ever hit that goal?
Why did Unladen Swallow not achieve its 5x goal? The primary reason was
that LLVM required more work than we had initially anticipated. Based on the
fact that Apple was shipping products based on LLVM [82], and
other high-level languages had successfully implemented LLVM-based JITs
([61], [63], [83]), we had assumed that LLVM’s JIT was
relatively free of show-stopper bugs.
That turned out to be incorrect. We had to turn our attention away from
performance to fix a number of critical bugs in LLVM’s JIT infrastructure (for
example, [84], [85]) as well as a number of
nice-to-have enhancements that would enable further optimizations along various
axes (for example, [87],
[86], [88]). LLVM’s static code generation
facilities, tools and optimization passes are stable and stress-tested, but the
just-in-time infrastructure was relatively untested and buggy. We have fixed
this.
(Our hypothesis is that we hit these problems – problems other projects had
avoided – because of the complexity and thoroughness of CPython’s standard
library test suite.)
We also diverted engineering effort away from performance and into support tools
such as gdb and oProfile. gdb did not work well with JIT compilers at all, and
LLVM previously had no integration with oProfile. Having JIT-aware debuggers and
profilers has been very valuable to the project, and we do not regret
channeling our time in these directions. See the Debugging and Profiling
sections for more information.
Can an LLVM-based CPython JIT ever hit the 5x performance target? The benchmark
results for JIT-based JavaScript implementations suggest that 5x is indeed
possible, as do the results PyPy’s JIT has delivered for numeric workloads. The
experience of Self-92 [52] is also instructive.
Can LLVM deliver this? We believe that we have only begun to scratch the surface
of what our LLVM-based JIT can deliver. The optimizations we have incorporated
into this system thus far have borne significant fruit (for example,
[89], [90],
[91]). Our experience to date is that the limiting factor
on Unladen Swallow’s performance is the engineering cycles needed to implement
the literature. We have found LLVM easy to work with and to modify, and its
built-in optimizations have greatly simplified the task of implementing
Python-level optimizations.
An overview of further performance opportunities is discussed in the
Future Work section.
Correctness and Compatibility
Unladen Swallow’s correctness test suite includes CPython’s test suite (under
Lib/test/), as well as a number of important third-party applications and
libraries [6]. A full list of these applications and libraries is
reproduced below. Any dependencies needed by these packages, such as
zope.interface [34], are also tested indirectly as a part of
testing the primary package, thus widening the corpus of tested third-party
Python code.
2to3
Cheetah
cvs2svn
Django
Nose
NumPy
PyCrypto
pyOpenSSL
PyXML
Setuptools
SQLAlchemy
SWIG
SymPy
Twisted
ZODB
These applications pass all relevant tests when run under Unladen Swallow. Note
that some tests that failed against our baseline of CPython 2.6.4 were disabled,
as were tests that made assumptions about CPython internals such as exact
bytecode numbers or bytecode format. Any package with disabled tests includes
a README.unladen file that details the changes (for example,
[37]).
In addition, Unladen Swallow is tested automatically against an array of
internal Google Python libraries and applications. These include Google’s
internal Python bindings for BigTable [35], the Mondrian code review
application [36], and Google’s Python standard library, among others.
The changes needed to run these projects under Unladen Swallow have consistently
broken into one of three camps:
Adding CPython 2.6 C API compatibility. Since Google still primarily uses
CPython 2.4 internally, we have needed to convert uses of int to
Py_ssize_t and similar API changes.
Fixing or disabling explicit, incorrect tests of the CPython version number.
Conditionally disabling code that worked around or depended on bugs in
CPython 2.4 that have since been fixed.
Testing against this wide range of public and proprietary applications and
libraries has been instrumental in ensuring the correctness of Unladen Swallow.
Testing has exposed bugs that we have duly corrected. Our automated regression
testing regime has given us high confidence in our changes as we have moved
forward.
In addition to third-party testing, we have added further tests to CPython’s
test suite for corner cases of the language or implementation that we felt were
untested or underspecified (for example, [48],
[49]). These have been especially important when implementing
optimizations, helping make sure we have not accidentally broken the darker
corners of Python.
We have also constructed a test suite focused solely on the LLVM-based JIT
compiler and the optimizations implemented for it [38]. Because of
the complexity and subtlety inherent in writing an optimizing compiler, we have
attempted to exhaustively enumerate the constructs, scenarios and corner cases
we are compiling and optimizing. The JIT tests also include tests for things
like the JIT hotness model, making it easier for future CPython developers to
maintain and improve.
We have recently begun using fuzz testing [39] to stress-test the
compiler. We have used both pyfuzz [40] and Fusil [41] in the past,
and we recommend they be introduced as an automated part of the CPython testing
process.
Known Incompatibilities
The only application or library we know to not work with Unladen Swallow that
does work with CPython 2.6.4 is Psyco [65]. We are aware of some libraries
such as PyGame [79] that work well with CPython 2.6.4, but suffer some
degradation due to changes made in Unladen Swallow. We are tracking this issue
[47] and are working to resolve these instances of
degradation.
While Unladen Swallow is source-compatible with CPython 2.6.4, it is not
binary compatible. C extension modules compiled against one will need to be
recompiled to work with the other.
The merger of Unladen Swallow should have minimal impact on long-lived
CPython optimization branches like WPython. WPython [105] and Unladen
Swallow are largely orthogonal, and there is no technical reason why both
could not be merged into CPython. The changes needed to make WPython
compatible with a JIT-enhanced version of CPython should be minimal
[114]. The same should be true for other CPython optimization
projects (for example, [115]).
Invasive forks of CPython such as Stackless Python [116] are more
challenging to support. Since Stackless is highly unlikely to be merged into
CPython [117] and an increased maintenance burden is part and
parcel of any fork, we consider compatibility with Stackless to be relatively
low-priority. JIT-compiled stack frames use the C stack, so Stackless should
be able to treat them the same as it treats calls through extension modules.
If that turns out to be unacceptable, Stackless could either remove the JIT
compiler or improve JIT code generation to better support heap-based stack
frames [118], [119].
Platform Support
Unladen Swallow is inherently limited by the platform support provided by LLVM,
especially LLVM’s JIT compilation system [7]. LLVM’s JIT has the
best support on x86 and x86-64 systems, and these are the platforms where
Unladen Swallow has received the most testing. We are confident in LLVM/Unladen
Swallow’s support for x86 and x86-64 hardware. PPC and ARM support exists, but
is not widely used and may be buggy (for example, [100],
[84], [101]).
Unladen Swallow is known to work on the following operating systems: Linux,
Darwin, Windows. Unladen Swallow has received the most testing on Linux and
Darwin, though it still builds and passes its tests on Windows.
In order to support hardware and software platforms where LLVM’s JIT does not
work, Unladen Swallow provides a ./configure --without-llvm option. This
flag carves out any part of Unladen Swallow that depends on LLVM, yielding a
Python binary that works and passes its tests, but has no performance
advantages. This configuration is recommended for hardware unsupported by LLVM,
or systems that care more about memory usage than performance.
Impact on CPython Development
Experimenting with Changes to Python or CPython Bytecode
Unladen Swallow’s JIT compiler operates on CPython bytecode, and as such, it is
immune to Python language changes that affect only the parser.
We recommend that changes to the CPython bytecode compiler or the semantics of
individual bytecodes be prototyped in the interpreter loop first, then be ported
to the JIT compiler once the semantics are clear. To make this easier, Unladen
Swallow includes a --without-llvm configure-time option that strips out the
JIT compiler and all associated infrastructure. This leaves the current burden
of experimentation unchanged so that developers can prototype in the current
low-barrier-to-entry interpreter loop.
Unladen Swallow began implementing its JIT compiler by doing straightforward,
naive translations from bytecode implementations into LLVM API calls. We found
this process to be easily understood, and we recommend the same approach for
CPython. We include several sample changes from the Unladen Swallow repository
here as examples of this style of development: [26], [27],
[28], [29].
Debugging
The Unladen Swallow team implemented changes to gdb to make it easier to use gdb
to debug JIT-compiled Python code. These changes were released in gdb 7.0
[17]. They make it possible for gdb to identify and unwind past
JIT-generated call stack frames. This allows gdb to continue to function as
before for CPython development if one is changing, for example, the list
type or builtin functions.
Example backtrace after our changes, where baz, bar and foo are
JIT-compiled:
Program received signal SIGSEGV, Segmentation fault.
0x00002aaaabe7d1a8 in baz ()
(gdb) bt
#0 0x00002aaaabe7d1a8 in baz ()
#1 0x00002aaaabe7d12c in bar ()
#2 0x00002aaaabe7d0aa in foo ()
#3 0x00002aaaabe7d02c in main ()
#4 0x0000000000b870a2 in llvm::JIT::runFunction (this=0x1405b70, F=0x14024e0, ArgValues=...)
at /home/rnk/llvm-gdb/lib/ExecutionEngine/JIT/JIT.cpp:395
#5 0x0000000000baa4c5 in llvm::ExecutionEngine::runFunctionAsMain
(this=0x1405b70, Fn=0x14024e0, argv=..., envp=0x7fffffffe3c0)
at /home/rnk/llvm-gdb/lib/ExecutionEngine/ExecutionEngine.cpp:377
#6 0x00000000007ebd52 in main (argc=2, argv=0x7fffffffe3a8,
envp=0x7fffffffe3c0) at /home/rnk/llvm-gdb/tools/lli/lli.cpp:208
Previously, the JIT-compiled frames would have caused gdb to unwind incorrectly,
generating lots of obviously-incorrect #6 0x00002aaaabe7d0aa in ?? ()-style
stack frames.
Highlights:
gdb 7.0 is able to correctly parse JIT-compiled stack frames, allowing full
use of gdb on non-JIT-compiled functions, that is, the vast majority of the
CPython codebase.
Disassembling inside a JIT-compiled stack frame automatically prints the full
list of instructions making up that function. This is an advance over the
state of gdb before our work: developers needed to guess the starting address
of the function and manually disassemble the assembly code.
Flexible underlying mechanism allows CPython to add more and more information,
and eventually reach parity with C/C++ support in gdb for JIT-compiled machine
code.
Lowlights:
gdb cannot print local variables or tell you what line you’re currently
executing inside a JIT-compiled function. Nor can it step through
JIT-compiled code, except for one instruction at a time.
Not yet integrated with Apple’s gdb or Microsoft’s Visual Studio debuggers.
The Unladen Swallow team is working with Apple to get these changes
incorporated into their future gdb releases.
Profiling
Unladen Swallow integrates with oProfile 0.9.4 and newer [18] to support
assembly-level profiling on Linux systems. This means that oProfile will
correctly symbolize JIT-compiled functions in its reports.
Example report, where the #u#-prefixed symbol names are JIT-compiled Python
functions:
$ opreport -l ./python | less
CPU: Core 2, speed 1600 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (Unhalted core cycles) count 100000
samples % image name symbol name
79589 4.2329 python PyString_FromFormatV
62971 3.3491 python PyEval_EvalCodeEx
62713 3.3354 python tupledealloc
57071 3.0353 python _PyEval_CallFunction
50009 2.6597 24532.jo #u#force_unicode
47468 2.5246 python PyUnicodeUCS2_Decode
45829 2.4374 python PyFrame_New
45173 2.4025 python lookdict_string
43082 2.2913 python PyType_IsSubtype
39763 2.1148 24532.jo #u#render5
38145 2.0287 python _PyType_Lookup
37643 2.0020 python PyObject_GC_UnTrack
37105 1.9734 python frame_dealloc
36849 1.9598 python PyEval_EvalFrame
35630 1.8950 24532.jo #u#resolve
33313 1.7717 python PyObject_IsInstance
33208 1.7662 python PyDict_GetItem
33168 1.7640 python PyTuple_New
30458 1.6199 python PyCFunction_NewEx
This support is functional, but as-yet unpolished. Unladen Swallow maintains a
punchlist of items we feel are important to improve in our oProfile integration
to make it more useful to core CPython developers [19].
Highlights:
Symbolization of JITted frames working in oProfile on Linux.
Lowlights:
No work yet invested in improving symbolization of JIT-compiled frames for
Apple’s Shark [20] or Microsoft’s Visual Studio profiling tools.
Some polishing still desired for oProfile output.
We recommend using oProfile 0.9.5 (and newer) to work around a now-fixed bug on
x86-64 platforms in oProfile. oProfile 0.9.4 will work fine on 32-bit platforms,
however.
Given the ease of integrating oProfile with LLVM [21] and
Unladen Swallow [22], other profiling tools should be easy as
well, provided they support a similar JIT interface [23].
We have documented the process for using oProfile to profile Unladen Swallow
[24]. This document will be merged into CPython’s Doc/
tree in the merge.
Addition of C++ to CPython
In order to use LLVM, Unladen Swallow has introduced C++ into the core CPython
tree and build process. This is an unavoidable part of depending on LLVM; though
LLVM offers a C API [8], it is limited and does not expose the
functionality needed by CPython. Because of this, we have implemented the
internal details of the Unladen Swallow JIT and its supporting infrastructure
in C++. We do not propose converting the entire CPython codebase to C++.
Highlights:
Easy use of LLVM’s full, powerful code generation and related APIs.
Convenient, abstract data structures simplify code.
C++ is limited to relatively small corners of the CPython codebase.
C++ can be disabled via ./configure --without-llvm, which even omits the
dependency on libstdc++.
Lowlights:
Developers must know two related languages, C and C++, to work on the full
range of CPython’s internals.
A C++ style guide will need to be developed and enforced. PEP 7 will be
extended [120] to encompass C++ by taking the relevant parts of
the C++ style guides from Unladen Swallow [70], LLVM
[71] and Google [72].
Different C++ compilers emit different ABIs; this can cause problems if
CPython is compiled with one C++ compiler and extension modules are compiled
with a different C++ compiler.
Managing LLVM Releases, C++ API Changes
LLVM is released regularly every six months. This means that LLVM may be
released two or three times during the course of development of a CPython 3.x
release. Each LLVM release brings newer and more powerful optimizations,
improved platform support and more sophisticated code generation.
LLVM releases usually include incompatible changes to the LLVM C++ API; the
release notes for LLVM 2.6 [9] include a list of
intentionally-introduced incompatibilities. Unladen Swallow has tracked LLVM
trunk closely over the course of development. Our experience has been
that LLVM API changes are obvious and easily or mechanically remedied. We
include two such changes from the Unladen Swallow tree as references here:
[10], [11].
Due to API incompatibilities, we recommend that an LLVM-based CPython target
compatibility with a single version of LLVM at a time. This will lower the
overhead on the core development team. Pegging to an LLVM version should not be
a problem from a packaging perspective, because pre-built LLVM packages
generally become available via standard system package managers fairly quickly
following an LLVM release, and failing that, llvm.org itself includes binary
releases.
Unladen Swallow has historically included a copy of the LLVM and Clang source
trees in the Unladen Swallow tree; this was done to allow us to closely track
LLVM trunk as we made patches to it. We do not recommend this model of
development for CPython. CPython releases should be based on official LLVM
releases. Pre-built LLVM packages are available from MacPorts [12]
for Darwin, and from most major Linux distributions ([13],
[14], [16]). LLVM itself provides additional binaries,
such as for MinGW [25].
LLVM is currently intended to be statically linked; this means that binary
releases of CPython will include the relevant parts (not all!) of LLVM. This
will increase the binary size, as noted above. To simplify downstream package
management, we will modify LLVM to better support shared linking. This issue
will block final merger [98].
Unladen Swallow has tasked a full-time engineer with fixing any remaining
critical issues in LLVM before LLVM’s 2.7 release. We consider it essential that
CPython 3.x be able to depend on a released version of LLVM, rather than closely
tracking LLVM trunk as Unladen Swallow has done. We believe we will finish this
work [99] before the release of LLVM 2.7, expected in May 2010.
Building CPython
In addition to a runtime dependency on LLVM, Unladen Swallow includes a
build-time dependency on Clang [5], an LLVM-based C/C++ compiler. We use
this to compile parts of the C-language Python runtime to LLVM’s intermediate
representation; this allows us to perform cross-language inlining, yielding
increased performance. Clang is not required to run Unladen Swallow. Clang
binary packages are available from most major Linux distributions (for example,
[15]).
We examined the impact of Unladen Swallow on the time needed to build Python,
including configure, full builds and incremental builds after touching a single
C source file.
./configure   CPython 2.6.4   CPython 3.1.1   Unladen Swallow r988
Run 1         0m20.795s       0m16.558s       0m15.477s
Run 2         0m15.255s       0m16.349s       0m15.391s
Run 3         0m15.228s       0m16.299s       0m15.528s
Full make     CPython 2.6.4   CPython 3.1.1   Unladen Swallow r988
Run 1         1m30.776s       1m22.367s       1m54.053s
Run 2         1m21.374s       1m22.064s       1m49.448s
Run 3         1m22.047s       1m23.645s       1m49.305s
Full builds take a hit due to a) additional .cc files needed for LLVM
interaction, b) statically linking LLVM into libpython, c) compiling parts
of the Python runtime to LLVM IR to enable cross-language inlining.
Incremental builds are also somewhat slower than mainline CPython. The table
below shows incremental rebuild times after touching Objects/listobject.c.
Incr make     CPython 2.6.4   CPython 3.1.1   Unladen Swallow r1024
Run 1         0m1.854s        0m1.456s        0m6.680s
Run 2         0m1.437s        0m1.442s        0m5.310s
Run 3         0m1.440s        0m1.425s        0m7.639s
As with full builds, this extra time comes from statically linking LLVM
into libpython. If libpython were linked shared against LLVM, this
overhead would go down.
Proposed Merge Plan
We propose focusing our efforts on eventual merger with CPython’s 3.x line of
development. The BDFL has indicated that 2.7 is to be the final release of
CPython’s 2.x line of development [30], and since 2.7 alpha 1 has
already been released, we have missed the window. Python 3 is the
future, and that is where we will target our performance efforts.
We recommend the following plan for merger of Unladen Swallow into the CPython
source tree:
Creation of a branch in the CPython SVN repository to work in, call it
py3k-jit as a strawman. This will be a branch of the CPython py3k
branch.
We will keep this branch closely integrated to py3k. The further we
deviate, the harder our work will be.
Any JIT-related patches will go into the py3k-jit branch.
Non-JIT-related patches will go into the py3k branch (once reviewed and
approved) and be merged back into the py3k-jit branch.
Potentially-contentious issues, such as the introduction of new command line
flags or environment variables, will be discussed on python-dev.
Because Google uses CPython 2.x internally, Unladen Swallow is based on CPython
2.6. We would need to port our compiler to Python 3; this would be done as
patches are applied to the py3k-jit branch, so that the branch remains a
consistent implementation of Python 3 at all times.
We believe this approach will be minimally disruptive to the 3.2 or 3.3 release
process while we iron out any remaining issues blocking final merger into
py3k. Unladen Swallow maintains a punchlist of known issues needed before
final merger [31], which includes all problems mentioned in this
PEP; we trust the CPython community will have its own concerns. This punchlist
is not static; other issues may emerge in the future that will block final
merger into the py3k branch.
Changes will be committed directly to the py3k-jit branch, with only large,
tricky or controversial changes sent for pre-commit code review.
Contingency Plans
There is a chance that we will not be able to reduce memory usage or startup
time to a level satisfactory to the CPython community. Our primary contingency
plan for this situation is to shift from an online just-in-time compilation
strategy to an offline ahead-of-time strategy using an instrumented CPython
interpreter loop to obtain feedback. This is the same model used by gcc’s
feedback-directed optimizations (-fprofile-generate) [112] and
Microsoft Visual Studio’s profile-guided optimizations [113]; we will
refer to this as “feedback-directed optimization” here, or FDO.
We believe that an FDO compiler for Python would be inferior to a JIT compiler.
FDO requires a high-quality, representative benchmark suite, which is a relative
rarity in both open- and closed-source development. A JIT compiler can
dynamically find and optimize the hot spots in any application – benchmark
suite or no – allowing it to adapt to changes in application bottlenecks
without human intervention.
If an ahead-of-time FDO compiler is required, it should be able to leverage a
large percentage of the code and infrastructure already developed for Unladen
Swallow’s JIT compiler. Indeed, these two compilation strategies could exist
side by side.
Future Work
A JIT compiler is an extremely flexible tool, and we have by no means exhausted
its full potential. Unladen Swallow maintains a list of yet-to-be-implemented
performance optimizations [50] that the team has not yet
had time to fully implement. Examples:
Python/Python inlining [67]. Our compiler currently performs no
inlining between pure-Python functions. Work on this is on-going
[69].
Unboxing [68]. Unboxing is critical for numerical performance. PyPy
in particular has demonstrated the value of unboxing to heavily numeric
workloads.
Recompilation, adaptation. Unladen Swallow currently only compiles a Python
function once, based on its usage pattern up to that point. If the usage
pattern changes, limitations in LLVM [73] prevent us from
recompiling the function to better serve the new usage pattern.
JIT-compile regular expressions. Modern JavaScript engines reuse their JIT
compilation infrastructure to boost regex performance [74].
Unladen Swallow has developed benchmarks for Python regular expression
performance ([75], [76], [77]), but
work on regex performance is still at an early stage [78].
Trace compilation [92], [93].
Based on the results of PyPy and Tracemonkey [94], we believe that
a CPython JIT should incorporate trace compilation to some degree. We
initially avoided a purely-tracing JIT compiler in favor of a simpler,
function-at-a-time compiler. However this function-at-a-time compiler has laid
the groundwork for a future tracing compiler implemented in the same terms.
Profile generation/reuse. The runtime data gathered by the JIT could be
persisted to disk and reused by subsequent JIT compilations, or by external
tools such as Cython [102] or a feedback-enhanced code coverage tool.
This list is by no means exhaustive. There is a vast literature on optimizations
for dynamic languages that could and should be implemented in terms of Unladen
Swallow’s LLVM-based JIT compiler [54].
Unladen Swallow Community
We would like to thank the community of developers who have contributed to
Unladen Swallow, in particular: James Abbatiello, Joerg Blank, Eric Christopher,
Alex Gaynor, Chris Lattner, Nick Lewycky, Evan Phoenix and Thomas Wouters.
Licensing
All work on Unladen Swallow is licensed to the Python Software Foundation (PSF)
under the terms of the Python Software Foundation License v2 [56] under
the umbrella of Google’s blanket Contributor License Agreement with the PSF.
LLVM is licensed [57] under the University of Illinois/NCSA Open Source
License [58], a liberal, OSI-approved license. The University of Illinois
Urbana-Champaign is the sole copyright holder for LLVM.
References
[1]
http://qinsb.blogspot.com/2011/03/unladen-swallow-retrospective.html
[2]
http://en.wikipedia.org/wiki/Dead_Parrot_sketch
[3]
http://code.google.com/p/unladen-swallow/
[4]
http://llvm.org/
[5]
http://clang.llvm.org/
[6]
http://code.google.com/p/unladen-swallow/wiki/Testing
[7]
http://llvm.org/docs/GettingStarted.html#hardware
[8]
http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm-c/
[9]
http://llvm.org/releases/2.6/docs/ReleaseNotes.html#whatsnew
[10]
http://code.google.com/p/unladen-swallow/source/detail?r=820
[11]
http://code.google.com/p/unladen-swallow/source/detail?r=532
[12]
http://trac.macports.org/browser/trunk/dports/lang/llvm/Portfile
[13]
http://packages.ubuntu.com/karmic/llvm
[14]
http://packages.debian.org/unstable/devel/llvm
[15]
http://packages.debian.org/sid/clang
[16]
http://koji.fedoraproject.org/koji/buildinfo?buildID=134384
[17]
http://www.gnu.org/software/gdb/download/ANNOUNCEMENT
[18]
http://oprofile.sourceforge.net/news/
[19]
http://code.google.com/p/unladen-swallow/issues/detail?id=63
[20]
http://developer.apple.com/tools/sharkoptimize.html
[21]
http://llvm.org/viewvc/llvm-project?view=rev&revision=75279
[22]
http://code.google.com/p/unladen-swallow/source/detail?r=986
[23]
http://oprofile.sourceforge.net/doc/devel/jit-interface.html
[24]
http://code.google.com/p/unladen-swallow/wiki/UsingOProfile
[25]
http://llvm.org/releases/download.html
[26]
http://code.google.com/p/unladen-swallow/source/detail?r=359
[27]
http://code.google.com/p/unladen-swallow/source/detail?r=376
[28]
http://code.google.com/p/unladen-swallow/source/detail?r=417
[29]
http://code.google.com/p/unladen-swallow/source/detail?r=517
[30]
https://mail.python.org/pipermail/python-dev/2010-January/095682.html
[31]
http://code.google.com/p/unladen-swallow/issues/list?q=label:Merger
[32]
http://code.google.com/p/unladen-swallow/issues/detail?id=118
[33]
http://code.google.com/p/unladen-swallow/issues/detail?id=64
[34]
http://www.zope.org/Products/ZopeInterface
[35]
http://en.wikipedia.org/wiki/BigTable
[36]
http://www.niallkennedy.com/blog/2006/11/google-mondrian.html
[37]
http://code.google.com/p/unladen-swallow/source/browse/tests/lib/sqlalchemy/README.unladen
[38]
http://code.google.com/p/unladen-swallow/source/browse/trunk/Lib/test/test_llvm.py
[39]
http://en.wikipedia.org/wiki/Fuzz_testing
[40]
http://bitbucket.org/ebo/pyfuzz/overview/
[41]
http://lwn.net/Articles/322826/
[42]
http://code.google.com/p/unladen-swallow/issues/detail?id=68
[43]
http://code.google.com/p/unladen-swallow/wiki/Benchmarks
[44]
http://en.wikipedia.org/wiki/Student%27s_t-test
[45]
http://bmaurer.blogspot.com/2006/03/memory-usage-with-smaps.html
[46]
http://code.google.com/p/unladen-swallow/source/browse/branches/background-thread
[47]
http://code.google.com/p/unladen-swallow/issues/detail?id=40
[48]
http://code.google.com/p/unladen-swallow/source/detail?r=888
[49]
http://code.google.com/p/unladen-swallow/source/diff?spec=svn576&r=576&format=side&path=/trunk/Lib/test/test_trace.py
[50]
http://code.google.com/p/unladen-swallow/issues/list?q=label:Performance
[51]
http://en.wikipedia.org/wiki/Just-in-time_compilation
[52]
http://research.sun.com/self/papers/urs-thesis.html
[53]
http://code.google.com/p/unladen-swallow/wiki/ProjectPlan
[54]
http://code.google.com/p/unladen-swallow/wiki/RelevantPapers
[55]
http://code.google.com/p/unladen-swallow/source/browse/trunk/Python/llvm_notes.txt
[56]
http://www.python.org/psf/license/
[57]
http://llvm.org/docs/DeveloperPolicy.html#clp
[58]
http://www.opensource.org/licenses/UoI-NCSA.php
[59]
http://code.google.com/p/v8/
[60]
http://webkit.org/blog/214/introducing-squirrelfish-extreme/
[61]
http://rubini.us/
[62]
http://lists.parrot.org/pipermail/parrot-dev/2009-September/002811.html
[63]
http://www.macruby.org/
[64]
http://en.wikipedia.org/wiki/HotSpot
[65]
http://psyco.sourceforge.net/
[66]
http://codespeak.net/pypy/dist/pypy/doc/
[67]
http://en.wikipedia.org/wiki/Inline_expansion
[68]
http://en.wikipedia.org/wiki/Object_type_(object-oriented_programming%29
[69]
http://code.google.com/p/unladen-swallow/issues/detail?id=86
[70]
http://code.google.com/p/unladen-swallow/wiki/StyleGuide
[71]
http://llvm.org/docs/CodingStandards.html
[72]
http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml
[73]
http://code.google.com/p/unladen-swallow/issues/detail?id=41
[74]
http://code.google.com/p/unladen-swallow/wiki/ProjectPlan#Regular_Expressions
[75]
http://code.google.com/p/unladen-swallow/source/browse/tests/performance/bm_regex_compile.py
[76]
http://code.google.com/p/unladen-swallow/source/browse/tests/performance/bm_regex_v8.py
[77]
http://code.google.com/p/unladen-swallow/source/browse/tests/performance/bm_regex_effbot.py
[78]
http://code.google.com/p/unladen-swallow/issues/detail?id=13
[79]
http://www.pygame.org/
[80]
http://numpy.scipy.org/
[81]
http://codespeak.net:8099/plotsummary.html
[82]
http://llvm.org/Users.html
[83]
http://www.ffconsultancy.com/ocaml/hlvm/
[84]
http://llvm.org/PR5201
[85]
http://llvm.org/viewvc/llvm-project?view=rev&revision=76828
[86]
http://llvm.org/viewvc/llvm-project?rev=91611&view=rev
[87]
http://llvm.org/viewvc/llvm-project?rev=85182&view=rev
[88]
http://llvm.org/PR5735
[89]
http://code.google.com/p/unladen-swallow/issues/detail?id=73
[90]
http://code.google.com/p/unladen-swallow/issues/detail?id=88
[91]
http://code.google.com/p/unladen-swallow/issues/detail?id=67
[92]
http://www.ics.uci.edu/~franz/Site/pubs-pdf/C44Prepub.pdf
[93]
http://www.ics.uci.edu/~franz/Site/pubs-pdf/ICS-TR-07-12.pdf
[94]
https://wiki.mozilla.org/JavaScript:TraceMonkey
[95]
http://llvm.org/docs/LangRef.html
[96]
http://code.google.com/p/unladen-swallow/issues/detail?id=120
[97]
http://code.google.com/p/unladen-swallow/source/browse/tests/performance/bm_nbody.py
[98]
http://code.google.com/p/unladen-swallow/issues/detail?id=130
[99]
http://code.google.com/p/unladen-swallow/issues/detail?id=131
[100]
http://llvm.org/PR4816
[101]
http://llvm.org/PR6065
[102]
http://www.cython.org/
[103]
http://shed-skin.blogspot.com/
[104]
http://shedskin.googlecode.com/files/shedskin-tutorial-0.3.html
[105]
http://code.google.com/p/wpython/
[106]
http://www.mail-archive.com/[email protected]/msg45143.html
[107]
http://ironpython.net/
[108]
http://www.mono-project.com/
[109]
http://www.jython.org/
[110]
http://wiki.python.org/jython/JythonFaq/GeneralInfo
[111]
http://code.google.com/p/pyv8/
[112]
http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html
[113]
http://msdn.microsoft.com/en-us/library/e7k32f4k.aspx
[114]
http://www.mail-archive.com/[email protected]/msg44962.html
[115]
http://portal.acm.org/citation.cfm?id=1534530.1534550
[116]
http://www.stackless.com/
[117]
https://mail.python.org/pipermail/python-dev/2004-June/045165.html
[118]
http://www.nondot.org/sabre/LLVMNotes/ExplicitlyManagedStackFrames.txt
[119]
http://old.nabble.com/LLVM-and-coroutines-microthreads-td23080883.html
[120]
http://www.mail-archive.com/[email protected]/msg45544.html
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 3146 – Merging Unladen Swallow into CPython | Standards Track | This PEP proposes the merger of the Unladen Swallow project [3] into
CPython’s source tree. Unladen Swallow is an open-source branch of CPython
focused on performance. Unladen Swallow is source-compatible with valid Python
2.6.4 applications and C extension modules. |
PEP 3147 – PYC Repository Directories
Author:
Barry Warsaw <barry at python.org>
Status:
Final
Type:
Standards Track
Created:
16-Dec-2009
Python-Version:
3.2
Post-History:
30-Jan-2010, 25-Feb-2010, 03-Mar-2010, 12-Apr-2010
Resolution:
Python-Dev message
Table of Contents
Abstract
Background
Rationale
Proposal
Examples
Python behavior
Case 0: The steady state
Case 1: The first import
Case 2: The second import
Case 3: __pycache__/foo.<magic>.pyc with no source
Case 4: legacy pyc files and source-less imports
Case 5: read-only file systems
Flow chart
Alternative Python implementations
Implementation strategy
Effects on existing code
Detecting PEP 3147 availability
__file__
py_compile and compileall
bdist_wininst and the Windows installer
File extension checks
Backports
Makefiles and other dependency tools
Alternatives
Hexadecimal magic tags
PEP 304
Fat byte compilation files
Multiple file extensions
.pyc
Reference implementation
References
ACKNOWLEDGMENTS
Copyright
Abstract
This PEP describes an extension to Python’s import mechanism which
improves sharing of Python source code files among multiple installed
different versions of the Python interpreter. It does this by
allowing more than one byte compilation file (.pyc files) to be
co-located with the Python source file (.py file). The extension
described here can also be used to support different Python
compilation caches, such as JIT output that may be produced by an
Unladen Swallow (PEP 3146) enabled C Python.
Background
CPython compiles its source code into “byte code”, and for performance
reasons, it caches this byte code on the file system whenever the
source file changes. This makes loading of Python modules much
faster because the compilation phase can be bypassed. When your
source file is foo.py, CPython caches the byte code in a foo.pyc
file right next to the source.
Byte code files contain two 32-bit little-endian numbers followed by the
marshaled [2] code object. The 32-bit numbers represent a magic
number and a timestamp. The magic number changes whenever Python
changes the byte code format, e.g. by adding new byte codes to its
virtual machine. This ensures that pyc files built for previous
versions of the VM won’t cause problems. The timestamp is used to
make sure that the pyc file matches the py file that was used to create
it. When either the magic number or the timestamp does not match, the py
file is recompiled and a new pyc file is written.
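For illustration, the header of such a pyc file can be inspected directly (a minimal sketch; foo.pyc is a hypothetical file created by the running interpreter):
>>> import imp
>>> with open('foo.pyc', 'rb') as f:
...     magic = f.read(4)      # 4-byte magic number
...     raw_mtime = f.read(4)  # 32-bit source timestamp
...
>>> magic == imp.get_magic()
True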
In practice, it is well known that pyc files are not compatible across
Python major releases. A reading of import.c [3] in the Python
source code proves that within recent memory, every new CPython major
release has bumped the pyc magic number.
Rationale
Linux distributions such as Ubuntu [4] and Debian [5] provide more
than one Python version at the same time to their users. For example,
Ubuntu 9.10 Karmic Koala users can install Python 2.5, 2.6, and 3.1,
with Python 2.6 being the default.
This causes a conflict for third party Python source files installed
by the system, because you cannot compile a single Python source file
for more than one Python version at a time. When Python finds a pyc
file with a non-matching magic number, it falls back to the slower
process of recompiling the source. Thus if your system installed a
/usr/share/python/foo.py, two different versions of Python would
fight over the pyc file and rewrite it each time the source is
compiled. (The standard library is unaffected by this, since multiple
versions of the stdlib are installed on such distributions.)
Furthermore, in order to ease the burden on operating system packagers
for these distributions, the distribution packages do not contain
Python version numbers [6]; they are shared across all Python
versions installed on the system. Putting Python version numbers in
the packages would be a maintenance nightmare, since all the packages
- and their dependencies - would have to be updated every time a new
Python release was added or removed from the distribution. Because of
the sheer number of packages available, this amount of work is
infeasible.
(PEP 384 has been proposed to address binary compatibility issues
of third party extension modules across different versions of Python.)
Because these distributions cannot share pyc files, elaborate
mechanisms have been developed to put the resulting pyc files in
non-shared locations while the source code is still shared. Examples
include the symlink-based Debian regimes python-support [8] and
python-central [9]. These approaches make for much more complicated,
fragile, inscrutable, and fragmented policies for delivering Python
applications to a wide range of users. Arguably more users get Python
from their operating system vendor than from upstream tarballs. Thus,
solving this pyc sharing problem for CPython is a high priority for
such vendors.
This PEP proposes a solution to this problem.
Proposal
Python’s import machinery is extended to write and search for byte
code cache files in a single directory inside every Python package
directory. This directory will be called __pycache__.
Further, pyc file names will contain a magic string (called a “tag”)
that differentiates the Python version they were compiled for. This
allows multiple byte compiled cache files to co-exist for a single
Python source file.
The magic tag is implementation defined, but should contain the
implementation name and a version number shorthand, e.g. cpython-32.
It must be unique among all versions of Python, and whenever the magic
number is bumped, a new magic tag must be defined. An example pyc
file for Python 3.2 is thus foo.cpython-32.pyc.
The magic tag is available in the imp module via the get_tag()
function. This is parallel to the imp.get_magic() function.
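For example, on a CPython 3.2 interpreter an illustrative session would be:
>>> import imp
>>> imp.get_tag()
'cpython-32'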
This scheme has the added benefit of reducing the clutter in a Python
package directory.
When a Python source file is imported for the first time, a
__pycache__ directory will be created in the package directory, if
one does not already exist. The pyc file for the imported source will
be written to the __pycache__ directory, using the magic-tag
formatted name. If either the creation of the __pycache__ directory
or the writing of the pyc file inside it fails, the import will still succeed, just
as it does in a pre-PEP 3147 world.
If the py source file is missing, the pyc file inside __pycache__
will be ignored. This eliminates the problem of accidental stale pyc
file imports.
For backward compatibility, Python will still support pyc-only
distributions; however, it will only do so when the pyc file lives in
the directory where the py file would have been, i.e. not in the
__pycache__ directory. A pyc file outside of __pycache__ will only
be imported if the py source file is missing.
Tools such as py_compile [15] and compileall [16] will be
extended to create PEP 3147 formatted layouts automatically, but will
have an option to create pyc-only distribution layouts.
Examples
What would this look like in practice?
Let’s say we have a Python package named alpha which contains a
sub-package named beta. The source directory layout before byte
compilation might look like this:
alpha/
    __init__.py
    one.py
    two.py
    beta/
        __init__.py
        three.py
        four.py
After byte compiling this package with Python 3.2, you would see the
following layout:
alpha/
    __pycache__/
        __init__.cpython-32.pyc
        one.cpython-32.pyc
        two.cpython-32.pyc
    __init__.py
    one.py
    two.py
    beta/
        __pycache__/
            __init__.cpython-32.pyc
            three.cpython-32.pyc
            four.cpython-32.pyc
        __init__.py
        three.py
        four.py
Note: listing order may differ depending on the platform.
Let’s say that two new versions of Python are installed, one is Python
3.3 and another is Unladen Swallow. After byte compilation, the file
system would look like this:
alpha/
    __pycache__/
        __init__.cpython-32.pyc
        __init__.cpython-33.pyc
        __init__.unladen-10.pyc
        one.cpython-32.pyc
        one.cpython-33.pyc
        one.unladen-10.pyc
        two.cpython-32.pyc
        two.cpython-33.pyc
        two.unladen-10.pyc
    __init__.py
    one.py
    two.py
    beta/
        __pycache__/
            __init__.cpython-32.pyc
            __init__.cpython-33.pyc
            __init__.unladen-10.pyc
            three.cpython-32.pyc
            three.cpython-33.pyc
            three.unladen-10.pyc
            four.cpython-32.pyc
            four.cpython-33.pyc
            four.unladen-10.pyc
        __init__.py
        three.py
        four.py
As you can see, as long as the Python version identifier string is
unique, any number of pyc files can co-exist. These identifier
strings are described in more detail below.
A nice property of this layout is that the __pycache__ directories
can generally be ignored, such that a normal directory listing would
show something like this:
alpha/
    __pycache__/
    __init__.py
    one.py
    two.py
    beta/
        __pycache__/
        __init__.py
        three.py
        four.py
This is much less cluttered than even today’s Python.
Python behavior
When Python searches for a module to import (say foo), it may find
one of several situations. As per current Python rules, the term
“matching pyc” means that the magic number matches the current
interpreter’s magic number, and the source file’s timestamp matches
the timestamp in the pyc file exactly.
Case 0: The steady state
When Python is asked to import module foo, it searches for a
foo.py file (or foo package, but that’s not important for this
discussion) along its sys.path. If found, Python looks to see if
there is a matching __pycache__/foo.<magic>.pyc file, and if so,
that pyc file is loaded.
Case 1: The first import
When Python locates the foo.py, if the __pycache__/foo.<magic>.pyc
file is missing, Python will create it, also creating the
__pycache__ directory if necessary. Python will parse and byte
compile the foo.py file and save the byte code in
__pycache__/foo.<magic>.pyc.
Case 2: The second import
When Python is asked to import module foo a second time (in a
different process of course), it will again search for the foo.py
file along its sys.path. When Python locates the foo.py file, it
looks for a matching __pycache__/foo.<magic>.pyc and finding this,
it reads the byte code and continues as usual.
Case 3: __pycache__/foo.<magic>.pyc with no source
It’s possible that the foo.py file somehow got removed, while
leaving the cached pyc file still on the file system. If the
__pycache__/foo.<magic>.pyc file exists, but the foo.py file used
to create it does not, Python will raise an ImportError when asked
to import foo. In other words, Python will not import a pyc file from
the cache directory unless the source file exists.
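For illustration, with a hypothetical module foo whose source has been removed:
>>> import foo   # only __pycache__/foo.cpython-32.pyc exists
Traceback (most recent call last):
  ...
ImportError: No module named foo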
Case 4: legacy pyc files and source-less imports
Python will ignore all legacy pyc files when a source file exists next
to them. In other words, if a foo.pyc file exists next to the
foo.py file, the pyc file will be ignored in all cases.
In order to continue to support source-less distributions though, if
the source file is missing, Python will import a lone pyc file if it
lives where the source file would have been.
Case 5: read-only file systems
When the source lives on a read-only file system, or the __pycache__
directory or pyc file cannot otherwise be written, all the same rules
apply. This is also the case when the __pycache__ directory exists but
its permissions do not allow pyc files to be written into it.
Flow chart
Here is a flow chart describing how modules are loaded:
Alternative Python implementations
Alternative Python implementations such as Jython [11], IronPython
[12], PyPy [13], Pynie [14], and Unladen Swallow can also use the
__pycache__ directory to store whatever compilation artifacts make
sense for their platforms. For example, Jython could store the class
file for the module in __pycache__/foo.jython-32.class.
Implementation strategy
This feature is targeted for Python 3.2, solving the problem for those
and all future versions. It may be back-ported to Python 2.7.
Vendors are free to backport the changes to earlier distributions as
they see fit. For backports of this feature to Python 2, when the
-U flag is used, a file such as foo.cpython-27u.pyc can be
written.
Effects on existing code
Adoption of this PEP will affect existing code and idioms, both inside
Python and outside. This section enumerates some of these effects.
Detecting PEP 3147 availability
The easiest way to detect whether your version of Python provides PEP
3147 functionality is to do the following check:
>>> import imp
>>> has3147 = hasattr(imp, 'get_tag')
__file__
In Python 3, when you import a module, its __file__ attribute points
to its source py file (in Python 2, it points to the pyc file). A
package’s __file__ points to the py file for its __init__.py.
E.g.:
>>> import foo
>>> foo.__file__
'foo.py'
# baz is a package
>>> import baz
>>> baz.__file__
'baz/__init__.py'
Nothing in this PEP would change the semantics of __file__.
This PEP proposes the addition of an __cached__ attribute to
modules, which will always point to the actual pyc file that was
read or written. When the environment variable
$PYTHONDONTWRITEBYTECODE is set, or the -B option is given, or if
the source lives on a read-only filesystem, then the __cached__
attribute will point to the location that the pyc file would have
been written to if it didn’t exist. This location of course includes
the __pycache__ subdirectory in its path.
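A hypothetical interactive session under this PEP might then look like this:
>>> import foo
>>> foo.__file__
'foo.py'
>>> foo.__cached__
'__pycache__/foo.cpython-32.pyc'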
For alternative Python implementations which do not support pyc
files, the __cached__ attribute may point to whatever information
makes sense. E.g. on Jython, this might be the .class file for the
module: __pycache__/foo.jython-32.class. Some implementations may
use multiple compiled files to create the module, in which case
__cached__ may be a tuple. The exact contents of __cached__ are
Python implementation specific.
It is recommended that when nothing sensible can be calculated,
implementations should set the __cached__ attribute to None.
py_compile and compileall
Python comes with two modules, py_compile [15] and compileall
[16] which support compiling Python modules external to the built-in
import machinery. py_compile in particular has intimate knowledge
of byte compilation, so these will be updated to understand the new
layout. The -b flag is added to compileall for writing legacy
.pyc byte-compiled file path names.
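For example, the two layouts could then be produced like this (illustrative command lines):
$ python -m compileall alpha/      # writes alpha/__pycache__/*.cpython-32.pyc
$ python -m compileall -b alpha/   # writes legacy alpha/*.pyc files instead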
bdist_wininst and the Windows installer
These tools also compile modules explicitly on installation. If they
do not use py_compile and compileall, then they would also have to
be modified to understand the new layout.
File extension checks
There exists some code which checks for files ending in .pyc and
simply chops off the last character to find the matching .py file.
This code will obviously fail once this PEP is implemented.
To support this use case, we’ll add two new methods to the imp
package [17]:
imp.cache_from_source(py_path) -> pyc_path
imp.source_from_cache(pyc_path) -> py_path
Alternative implementations are free to override these functions to
return reasonable values based on their own support for this PEP.
These methods are allowed to return None when the implementation (or
PEP 302 loader in effect) for whatever reason cannot calculate
the appropriate file name. They should not raise exceptions.
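For illustration, on a CPython 3.2 build these functions might map paths as follows (hypothetical package layout):
>>> import imp
>>> imp.cache_from_source('alpha/one.py')
'alpha/__pycache__/one.cpython-32.pyc'
>>> imp.source_from_cache('alpha/__pycache__/one.cpython-32.pyc')
'alpha/one.py'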
Backports
For versions of Python earlier than 3.2 (and possibly 2.7), it is
possible to backport this PEP. However, in Python 3.2 (and possibly
2.7), this behavior will be turned on by default, and in fact, it will
replace the old behavior. Backports will need to support the old
layout by default. We suggest supporting PEP 3147 through the use of
an environment variable called $PYTHONENABLECACHEDIR or the command
line switch -Xenablecachedir to enable the feature.
Makefiles and other dependency tools
Makefiles and other tools which calculate dependencies on .pyc files
(e.g. to byte-compile the source if the .pyc is missing) will have
to be updated to check the new paths.
Alternatives
This section describes some alternative approaches or details that
were considered and rejected during the PEP’s development.
Hexadecimal magic tags
pyc files inside of the __pycache__ directories contain a magic tag
in their file names. These are mnemonic tags for the actual magic
numbers used by the importer. We could have used the hexadecimal
representation [10] of the binary magic number as a unique
identifier. For example, in Python 3.2:
>>> from binascii import hexlify
>>> from imp import get_magic
>>> 'foo.{}.pyc'.format(hexlify(get_magic()).decode('ascii'))
'foo.580c0d0a.pyc'
This isn’t particularly human friendly though, thus the magic tag
proposed in this PEP.
PEP 304
There is some overlap between the goals of this PEP and PEP 304,
which has been withdrawn. However PEP 304 would allow a user to
create a shadow file system hierarchy in which to store pyc files.
This concept of a shadow hierarchy for pyc files could be used to
satisfy the aims of this PEP. Although PEP 304 does not indicate
why it was withdrawn, shadow directories have a number of problems.
The location of the shadow pyc files would not be easily discovered
and would depend on the proper and consistent use of the
$PYTHONBYTECODE environment variable both by the system and by end
users. There are also global implications, meaning that while the
system might want to shadow pyc files, users might not want to, but
the PEP defines only an all-or-nothing approach.
As an example of the problem, a common (though fragile) Python idiom
for locating data files is to do something like this:
from os import dirname, join
import foo.bar
data_file = join(dirname(foo.bar.__file__), 'my.dat')
This would be problematic since foo.bar.__file__ will give the
location of the pyc file in the shadow directory, and it may not be
possible to find the my.dat file relative to the source directory
from there.
Fat byte compilation files
An earlier version of this PEP described “fat” Python byte code files.
These files would contain the equivalent of multiple pyc files in a
single pyf file, with a lookup table keyed off the appropriate magic
number. This was an extensible file format so that the first 5
parallel Python implementations could be supported fairly efficiently,
but with extension lookup tables available to scale pyf byte code
objects as large as necessary.
The fat byte compilation files were fairly complex, and inherently
introduced difficult race conditions, so the current simplification of
using directories was suggested. The same problem applies to using
zip files as the fat pyc file format.
Multiple file extensions
The PEP author also considered an approach where multiple thin byte
compiled files lived in the same place, but used different file
extensions to designate the Python version. E.g. foo.pyc25,
foo.pyc26, foo.pyc31 etc. This was rejected because of the clutter
involved in writing so many different files. The multiple extension
approach makes it more difficult (and an ongoing task) to update any
tools that are dependent on the file extension.
.pyc
A proposal was floated to call the __pycache__ directory .pyc or
some other dot-file name. This would have the effect on *nix systems
of hiding the directory. There are many reasons why this was
rejected by the BDFL [20] including the fact that dot-files are only
special on some platforms, and we actually do not want to hide these
completely from users.
Reference implementation
Work on this code is tracked in a Bazaar branch on Launchpad [22]
until it’s ready for merge into Python 3.2. The work-in-progress diff
can also be viewed [23] and is updated automatically as new changes
are uploaded.
A Rietveld code review issue [24] has been opened as of 2010-04-01 (no,
this is not an April Fools joke :).
References
[2]
The marshal module:
https://docs.python.org/3.1/library/marshal.html
[3]
import.c:
https://github.com/python/cpython/blob/v3.2a1/Python/import.c
[4]
Ubuntu: https://www.ubuntu.com
[5]
Debian: https://www.debian.org
[6]
Debian Python Policy:
https://www.debian.org/doc/packaging-manuals/python-policy/
[8]
python-support:
https://web.archive.org/web/20100110123824/http://wiki.debian.org/DebianPythonFAQ#Whatispython-support.3F
[9]
python-central:
https://web.archive.org/web/20100110123824/http://wiki.debian.org/DebianPythonFAQ#Whatispython-central.3F
[10]
binascii.hexlify():
https://docs.python.org/3.1/library/binascii.html#binascii.hexlify
[11]
Jython: http://www.jython.org/
[12]
IronPython: http://ironpython.net/
[13]
PyPy: https://web.archive.org/web/20100310130136/http://codespeak.net/pypy/dist/pypy/doc/
[14]
Pynie: https://code.google.com/archive/p/pynie/
[15] (1, 2)
py_compile: https://docs.python.org/3.1/library/py_compile.html
[16] (1, 2)
compileall: https://docs.python.org/3.1/library/compileall.html
[17]
imp: https://docs.python.org/3.1/library/imp.html
[20]
https://www.mail-archive.com/[email protected]/msg45203.html
[21]
importlib: https://docs.python.org/3.1/library/importlib.html
[22]
https://code.launchpad.net/~barry/python/pep3147
[23]
https://code.launchpad.net/~barry/python/pep3147/+merge/22648
[24]
http://codereview.appspot.com/842043/show
ACKNOWLEDGMENTS
Barry Warsaw’s original idea was for fat Python byte code files.
Martin von Loewis reviewed an early draft of the PEP and suggested the
simplification to store traditional pyc and pyo files in a
directory. Many other people reviewed early versions of this PEP and
provided useful feedback including but not limited to:
David Malcolm
Josselin Mouette
Matthias Klose
Michael Hudson
Michael Vogt
Piotr Ożarowski
Scott Kitterman
Toshio Kuratomi
Copyright
This document has been placed in the public domain.
| Final | PEP 3147 – PYC Repository Directories | Standards Track | This PEP describes an extension to Python’s import mechanism which
improves sharing of Python source code files among multiple installed
different versions of the Python interpreter. It does this by
allowing more than one byte compilation file (.pyc files) to be
co-located with the Python source file (.py file). The extension
described here can also be used to support different Python
compilation caches, such as JIT output that may be produced by an
Unladen Swallow (PEP 3146) enabled C Python. |
PEP 3148 – futures - execute computations asynchronously
Author:
Brian Quinlan <brian at sweetapp.com>
Status:
Final
Type:
Standards Track
Created:
16-Oct-2009
Python-Version:
3.2
Post-History:
Table of Contents
Abstract
Motivation
Specification
Naming
Interface
Executor
ProcessPoolExecutor
ThreadPoolExecutor
Future Objects
Internal Future Methods
Module Functions
Check Prime Example
Web Crawl Example
Rationale
Reference Implementation
References
Copyright
Abstract
This PEP proposes a design for a package that facilitates the
evaluation of callables using threads and processes.
Motivation
Python currently has powerful primitives to construct multi-threaded
and multi-process applications but parallelizing simple operations
requires a lot of work i.e. explicitly launching processes/threads,
constructing a work/results queue, and waiting for completion or some
other termination condition (e.g. failure, timeout). It is also
difficult to design an application with a global process/thread limit
when each component invents its own parallel execution strategy.
Specification
Naming
The proposed package would be called “futures” and would live in a new
“concurrent” top-level package. The rationale behind pushing the
futures library into a “concurrent” namespace has multiple components.
The first, most simple one is to prevent any and all confusion with
the existing “from __future__ import x” idiom which has been in use
for a long time within Python. Additionally, it is felt that adding
the “concurrent” precursor to the name fully denotes what the library
is related to, namely concurrency; this should clear up any additional
ambiguity, as it has been noted that not everyone in the community is
familiar with Java Futures, or with the Futures term except as it relates
to the US stock market.
Finally, we are carving out a new namespace for the standard library -
obviously named “concurrent”. We hope to either add, or move existing,
concurrency-related libraries to this in the future. A prime example
is the multiprocessing.Pool work, as well as other “addons” included
in that module, which work across thread and process boundaries.
Interface
The proposed package provides two core classes: Executor and
Future. An Executor receives asynchronous work requests (in terms
of a callable and its arguments) and returns a Future to represent
the execution of that work request.
Executor
Executor is an abstract class that provides methods to execute calls
asynchronously.
submit(fn, *args, **kwargs)
Schedules the callable to be executed as fn(*args, **kwargs)
and returns a Future instance representing the execution of the
callable. This is an abstract method and must be implemented by Executor
subclasses.
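As a minimal illustration of the proposed interface (a sketch, not part of the specification itself):
>>> from concurrent.futures import ThreadPoolExecutor
>>> executor = ThreadPoolExecutor(max_workers=1)
>>> future = executor.submit(pow, 2, 10)
>>> future.result()
1024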
map(func, *iterables, timeout=None)
Equivalent to map(func, *iterables) but func is executed
asynchronously and several calls to func may be made concurrently.
The returned iterator raises a TimeoutError if __next__() is
called and the result isn’t available after timeout seconds from
the original call to map(). If timeout is not specified or
None then there is no limit to the wait time. If a call raises
an exception then that exception will be raised when its value is
retrieved from the iterator.
shutdown(wait=True)
Signal the executor that it should free any resources that it is
using when the currently pending futures are done executing.
Calls to Executor.submit and Executor.map made after
shutdown will raise RuntimeError. If wait is True then this method will not return until all the
pending futures are done executing and the resources associated
with the executor have been freed. If wait is False then this
method will return immediately and the resources associated with
the executor will be freed when all pending futures are done
executing. Regardless of the value of wait, the entire Python
program will not exit until all pending futures are done
executing.
__enter__()
__exit__(exc_type, exc_val, exc_tb)
When using an executor as a context manager, __exit__ will call
Executor.shutdown(wait=True).
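For example, the following sketch is equivalent to creating the executor, submitting the work, and then calling shutdown(wait=True):
>>> from concurrent import futures
>>> with futures.ThreadPoolExecutor(max_workers=2) as executor:
...     future = executor.submit(sum, [1, 2, 3])
...     print(future.result())
...
6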
ProcessPoolExecutor
The ProcessPoolExecutor class is an Executor subclass that uses a
pool of processes to execute calls asynchronously. The callable
objects and arguments passed to ProcessPoolExecutor.submit must be
pickleable according to the same limitations as the multiprocessing
module.
Calling Executor or Future methods from within a callable
submitted to a ProcessPoolExecutor will result in deadlock.
__init__(max_workers)
Executes calls asynchronously using a pool of at most max_workers
processes. If max_workers is None or not given then as many
worker processes will be created as the machine has processors.
ThreadPoolExecutor
The ThreadPoolExecutor class is an Executor subclass that uses a
pool of threads to execute calls asynchronously.
Deadlock can occur when the callable associated with a Future waits
on the results of another Future. For example:
import time
def wait_on_b():
    time.sleep(5)
    print(b.result())  # b will never complete because it is waiting on a.
    return 5
def wait_on_a():
    time.sleep(5)
    print(a.result())  # a will never complete because it is waiting on b.
    return 6
executor = ThreadPoolExecutor(max_workers=2)
a = executor.submit(wait_on_b)
b = executor.submit(wait_on_a)
And:
def wait_on_future():
    f = executor.submit(pow, 5, 2)
    # This will never complete because there is only one worker thread and
    # it is executing this function.
    print(f.result())
executor = ThreadPoolExecutor(max_workers=1)
executor.submit(wait_on_future)
__init__(max_workers)
Executes calls asynchronously using a pool of at most
max_workers threads.
Future Objects
The Future class encapsulates the asynchronous execution of a
callable. Future instances are returned by Executor.submit.
cancel()
Attempt to cancel the call. If the call is currently being
executed then it cannot be cancelled and the method will return
False, otherwise the call will be cancelled and the method will
return True.
cancelled()
Return True if the call was successfully cancelled.
running()
Return True if the call is currently being executed and cannot
be cancelled.
done()
Return True if the call was successfully cancelled or finished
running.
result(timeout=None)
Return the value returned by the call. If the call hasn’t yet
completed then this method will wait up to timeout seconds. If
the call hasn’t completed in timeout seconds then a
TimeoutError will be raised. If timeout is not specified or
None then there is no limit to the wait time. If the future is cancelled before completing then CancelledError
will be raised.
If the call raised then this method will raise the same exception.
exception(timeout=None)
Return the exception raised by the call. If the call hasn’t yet
completed then this method will wait up to timeout seconds. If
the call hasn’t completed in timeout seconds then a
TimeoutError will be raised. If timeout is not specified or
None then there is no limit to the wait time. If the future is cancelled before completing then CancelledError
will be raised.
If the call completed without raising then None is returned.
add_done_callback(fn)
Attaches a callable fn to the future that will be called when
the future is cancelled or finishes running. fn will be called
with the future as its only argument. Added callables are called in the order that they were added and
are always called in a thread belonging to the process that added
them. If the callable raises an Exception then it will be
logged and ignored. If the callable raises another
BaseException then behavior is not defined.
If the future has already completed or been cancelled then fn
will be called immediately.
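For illustration, a callback that reports a result when its future finishes might look like this (assuming executor is an Executor instance as above):
def report(future):
    print('call returned', future.result())

f = executor.submit(pow, 2, 5)
f.add_done_callback(report)  # report() runs once the call finishes, printing 32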
Internal Future Methods
The following Future methods are meant for use in unit tests and
Executor implementations.
set_running_or_notify_cancel()
Should be called by Executor implementations before executing
the work associated with the Future. If the method returns False then the Future was cancelled,
i.e. Future.cancel was called and returned True. Any threads
waiting on the Future completing (i.e. through as_completed()
or wait()) will be woken up.
If the method returns True then the Future was not cancelled
and has been put in the running state, i.e. calls to
Future.running() will return True.
This method can only be called once and cannot be called after
Future.set_result() or Future.set_exception() have been
called.
set_result(result)
Sets the result of the work associated with the Future.
set_exception(exception)
Sets the result of the work associated with the Future to the
given Exception.
Module Functions
wait(fs, timeout=None, return_when=ALL_COMPLETED)
Wait for the Future instances (possibly created by different
Executor instances) given by fs to complete. Returns a named
2-tuple of sets. The first set, named “done”, contains the
futures that completed (finished or were cancelled) before the
wait completed. The second set, named “not_done”, contains
uncompleted futures. timeout can be used to control the maximum number of seconds to
wait before returning. If timeout is not specified or None then
there is no limit to the wait time.
return_when indicates when the method should return. It must be
one of the following constants:
FIRST_COMPLETED
The method will return when any future finishes or
is cancelled.
FIRST_EXCEPTION
The method will return when any future finishes by
raising an exception. If no future raises an
exception then it is equivalent to ALL_COMPLETED.
ALL_COMPLETED
The method will return when all calls finish.
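For illustration, a small sketch of waiting for the first result to become available (square is a hypothetical helper function):
from concurrent import futures

def square(n):
    return n * n

with futures.ThreadPoolExecutor(max_workers=2) as executor:
    fs = [executor.submit(square, i) for i in range(4)]
    # 'done' holds at least one finished future; the rest may still be pending
    done, not_done = futures.wait(fs, return_when=futures.FIRST_COMPLETED)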
as_completed(fs, timeout=None)
Returns an iterator over the Future instances given by fs that
yields futures as they complete (finished or were cancelled). Any
futures that completed before as_completed() was called will be
yielded first. The returned iterator raises a TimeoutError if
__next__() is called and the result isn’t available after
timeout seconds from the original call to as_completed(). If
timeout is not specified or None then there is no limit to the
wait time. The Future instances can have been created by different
Executor instances.
Check Prime Example
from concurrent import futures
import math
PRIMES = [
112272535095293,
112582705942171,
112272535095293,
115280095190773,
115797848077099,
1099726899285419]
def is_prime(n):
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    sqrt_n = int(math.floor(math.sqrt(n)))
    for i in range(3, sqrt_n + 1, 2):
        if n % i == 0:
            return False
    return True

def main():
    with futures.ProcessPoolExecutor() as executor:
        for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
            print('%d is prime: %s' % (number, prime))

if __name__ == '__main__':
    main()
Web Crawl Example
from concurrent import futures
import urllib.request
URLS = ['http://www.foxnews.com/',
'http://www.cnn.com/',
'http://europe.wsj.com/',
'http://www.bbc.co.uk/',
'http://some-made-up-domain.com/']
def load_url(url, timeout):
    return urllib.request.urlopen(url, timeout=timeout).read()

def main():
    with futures.ThreadPoolExecutor(max_workers=5) as executor:
        future_to_url = dict(
            (executor.submit(load_url, url, 60), url)
            for url in URLS)
        for future in futures.as_completed(future_to_url):
            url = future_to_url[future]
            try:
                print('%r page is %d bytes' % (
                    url, len(future.result())))
            except Exception as e:
                print('%r generated an exception: %s' % (
                    url, e))

if __name__ == '__main__':
    main()
Rationale
The proposed design of this module was heavily influenced by the
Java java.util.concurrent package [1]. The conceptual basis of the
module, as in Java, is the Future class, which represents the progress
and result of an asynchronous computation. The Future class makes
little commitment to the evaluation mode being used e.g. it can be
used to represent lazy or eager evaluation, for evaluation using
threads, processes or remote procedure call.
Futures are created by concrete implementations of the Executor class
(called ExecutorService in Java). The reference implementation
provides classes that use either a process or a thread pool to eagerly
evaluate computations.
Futures have already been seen in Python as part of a popular Python
cookbook recipe [2] and have been discussed on the Python-3000 mailing
list [3].
The proposed design is explicit, i.e. it requires that clients be
aware that they are consuming Futures. It would be possible to design
a module that would return proxy objects (in the style of weakref)
that could be used transparently. It is possible to build a proxy
implementation on top of the proposed explicit mechanism.
The proposed design does not introduce any changes to Python language
syntax or semantics. Special syntax could be introduced [4] to mark
function and method calls as asynchronous. A proxy result would be
returned while the operation is eagerly evaluated asynchronously, and
execution would only block if the proxy object were used before the
operation completed.
Anh Hai Trinh proposed a simpler but more limited API concept [5] and
the API has been discussed in some detail on stdlib-sig [6].
The proposed design was discussed on the Python-Dev mailing list [7].
Following those discussions, the following changes were made:
The Executor class was made into an abstract base class
The Future.remove_done_callback method was removed due to a lack
of convincing use cases
The Future.add_done_callback method was modified to allow the
same callable to be added many times
The Future class’s mutation methods were better documented to
indicate that they are private to the Executor that created them
Reference Implementation
The reference implementation [8] contains a complete implementation
of the proposed design. It has been tested on Linux and Mac OS X.
References
[1]
java.util.concurrent package documentation
http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/package-summary.html
[2]
Python Cookbook recipe 84317, “Easy threading with Futures”
http://code.activestate.com/recipes/84317/
[3]
Python-3000 thread, “mechanism for handling asynchronous concurrency”
https://mail.python.org/pipermail/python-3000/2006-April/000960.html
[4]
Python 3000 thread, “Futures in Python 3000 (was Re: mechanism for handling asynchronous concurrency)”
https://mail.python.org/pipermail/python-3000/2006-April/000970.html
[5]
A discussion of stream, a similar concept proposed by Anh Hai Trinh
http://www.mail-archive.com/[email protected]/msg00480.html
[6]
A discussion of the proposed API on stdlib-sig
https://mail.python.org/pipermail/stdlib-sig/2009-November/000731.html
[7]
A discussion of the PEP on python-dev
https://mail.python.org/pipermail/python-dev/2010-March/098169.html
[8]
Reference futures implementation
http://code.google.com/p/pythonfutures/source/browse/#svn/branches/feedback
Copyright
This document has been placed in the public domain.
| Final | PEP 3148 – futures - execute computations asynchronously | Standards Track | This PEP proposes a design for a package that facilitates the
evaluation of callables using threads and processes. |
PEP 3149 – ABI version tagged .so files
Author:
Barry Warsaw <barry at python.org>
Status:
Final
Type:
Standards Track
Created:
09-Jul-2010
Python-Version:
3.2
Post-History:
14-Jul-2010, 22-Jul-2010
Resolution:
Python-Dev message
Table of Contents
Abstract
Background
Rationale
Proposal
Proven approach
Windows
PEP 384
Alternatives
Independent directories or symlinks
Don’t share packages with extension modules
Reference implementation
References
Copyright
Abstract
PEP 3147 described an extension to Python’s import machinery that
improved the sharing of Python source code, by allowing more than one
byte compilation file (.pyc) to be co-located with each source file.
This PEP defines an adjunct feature which allows the co-location of
extension module files (.so) in a similar manner. This optional,
build-time feature will enable downstream distributions of Python to
more easily provide more than one Python major version at a time.
Background
PEP 3147 defined the file system layout for a pure-Python package,
where multiple versions of Python are available on the system. For
example, where the alpha package containing source modules one.py
and two.py exist on a system with Python 3.2 and 3.3, the post-byte
compilation file system layout would be:
alpha/
    __pycache__/
        __init__.cpython-32.pyc
        __init__.cpython-33.pyc
        one.cpython-32.pyc
        one.cpython-33.pyc
        two.cpython-32.pyc
        two.cpython-33.pyc
    __init__.py
    one.py
    two.py
For packages with extension modules, a similar differentiation is
needed for the module’s .so files. Extension modules compiled for
different Python major versions are incompatible with each other due
to changes in the ABI. Different configuration/compilation options
for the same Python version can result in different ABIs
(e.g. --with-wide-unicode).
While PEP 384 defines a stable ABI, it will minimize, but not
eliminate extension module incompatibilities between Python builds or
major versions. Thus a mechanism for discriminating extension module
file names is proposed.
Rationale
Linux distributions such as Ubuntu [3] and Debian [4] provide more
than one Python version at the same time to their users. For example,
Ubuntu 9.10 Karmic Koala users can install Python 2.5, 2.6, and 3.1,
with Python 2.6 being the default.
In order to share as much as possible between the available Python
versions, these distributions install third party package modules
(.pyc and .so files) into /usr/share/pyshared and symlink to
them from /usr/lib/pythonX.Y/dist-packages. The symlinks exist
because in a pre-PEP 3147 world (i.e. < Python 3.2), the .pyc files
resulting from byte compilation by the various installed Pythons will
name collide with each other. For Python versions >= 3.2, all
pure-Python packages can be shared, because the .pyc files will no
longer cause file system naming conflicts. Eliminating these symlinks
makes for a simpler, more robust Python distribution.
A similar situation arises with shared library extensions. Because
extension modules are typically named foo.so for a foo extension
module, these would also name collide if foo was provided for more
than one Python version.
In addition, different configuration/compilation options for
the same Python version can cause different ABIs to be presented to
extension modules. On POSIX systems for example, the configure
options --with-pydebug, --with-pymalloc, and
--with-wide-unicode all change the ABI. This PEP proposes to
encode build-time options in the file name of the .so extension
module files.
PyPy [5] can also benefit from this PEP, allowing it to avoid name
collisions in extension modules built for its API, but with a
different .so tag.
Proposal
The configure/compilation options chosen at Python interpreter
build-time will be encoded in the shared library file name for
extension modules. This “tag” will appear between the module base
name and the operating system's file extension for shared libraries.
The following information MUST be included in the shared library
file name:
The Python implementation (e.g. cpython, pypy, jython, etc.)
The interpreter’s major and minor version numbers
These two fields are separated by a hyphen and no dots are to appear
between the major and minor version numbers. E.g. cpython-32.
Python implementations MAY include additional flags in the file name
tag as appropriate. For example, on POSIX systems these flags will
also contribute to the file name:
--with-pydebug (flag: d)
--with-pymalloc (flag: m)
--with-wide-unicode (flag: u)
By default in Python 3.2, configure enables --with-pymalloc so
shared library file names would appear as foo.cpython-32m.so.
When the other two flags are also enabled, the file names would be
foo.cpython-32dmu.so.
The shared library file name tag is used unconditionally; it cannot be
changed. The tag and extension module suffix are available through
the sysconfig modules via the following variables:
>>> sysconfig.get_config_var('EXT_SUFFIX')
'.cpython-32mu.so'
>>> sysconfig.get_config_var('SOABI')
'cpython-32mu'
Note that $SOABI contains just the tag, while $EXT_SUFFIX includes the
platform extension for shared library files, and is the exact suffix
added to the extension module name.
For an arbitrary package foo, you might see these files when the
distribution package was installed:
/usr/lib/python/foo.cpython-32m.so
/usr/lib/python/foo.cpython-33m.so
(These paths are for example purposes only. Distributions are free to
use whatever filesystem layout they choose, and nothing in this PEP
changes the locations where from-source builds of Python are
installed.)
Python’s dynamic module loader will recognize and import shared
library extension modules with a tag that matches its build-time
options. For backward compatibility, Python will also continue to
import untagged extension modules, e.g. foo.so.
This shared library tag would be used globally for all distutils-based
extension modules, regardless of where on the file system they are
built. Extension modules built by means other than distutils would
either have to calculate the tag manually, or fall back to the
non-tagged .so file name.
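For example, a build system that does not use distutils could derive the tagged file name from the configuration variables shown earlier (a sketch; it falls back to the plain .so suffix when the variable is unavailable):
>>> import sysconfig
>>> 'foo' + (sysconfig.get_config_var('EXT_SUFFIX') or '.so')
'foo.cpython-32mu.so'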
Proven approach
The approach described here is already proven, in a sense, on Debian
and Ubuntu systems where different extensions are used for debug builds
of Python and extension modules. Debug builds on Windows also already
use a different file extension for dynamic libraries, and in fact
encode (in a different way than proposed in this PEP) the Python
major and minor version in the .dll file name.
Windows
This PEP only addresses build issues on POSIX systems that use the
configure script. While Windows or other platform support is not
explicitly disallowed under this PEP, platform expertise is needed in
order to evaluate, describe, and implement support on such platforms.
It is not currently clear that the facilities in this PEP are even
useful for Windows.
PEP 384
PEP 384 defines a stable ABI for extension modules. In theory,
universal adoption of PEP 384 would eliminate the need for this PEP
because all extension modules could be compatible with any Python
version. In practice of course, it will be impossible to achieve
universal adoption, and as described above, different build-time flags
still affect the ABI. Thus even with a stable ABI, this PEP may still
be necessary. While a complete specification is reserved for PEP 384,
here is a discussion of the relevant issues.
PEP 384 describes a change to PyModule_Create() where 3 is
passed as the API version if the extension was compiled with
Py_LIMITED_API. This should be formalized into an official macro
called PYTHON_ABI_VERSION to mirror PYTHON_API_VERSION. If
and when the ABI changes in an incompatible way, this version number
would be bumped. To facilitate sharing, Python would be extended to
search for extension modules with the PYTHON_ABI_VERSION number in
its name. The prefix abi is reserved for Python’s use.
Thus, an initial implementation of PEP 384, when Python is configured
with the default set of flags, would search for the following file
names when extension module foo is imported (in this order):
foo.cpython-XYm.so
foo.abi3.so
foo.so
The distutils [6] build_ext command would also have to be
extended to compile to shared library files with the abi3 tag,
when the module author indicates that their extension supports that
version of the ABI. This could be done in a backward compatible way
by adding a keyword argument to the Extension class, such as:
Extension('foo', ['foo.c'], abi=3)
Martin v. Löwis describes his thoughts [7] about the applicability of this
PEP to PEP 384. In summary:
--with-pydebug would not be supported by the stable ABI because
this changes the layout of PyObject, which is an exposed
structure.
--with-pymalloc has no bearing on the issue.
--with-wide-unicode is trickier, though Martin’s inclination is
to force the stable ABI to use a Py_UNICODE that matches the
platform’s wchar_t.
Alternatives
In the initial python-dev thread [8] where this idea was first
introduced, several alternatives were suggested. For completeness
they are listed here, along with the reasons for not adopting them.
Independent directories or symlinks
Debian and Ubuntu could simply add a version-specific directory to
sys.path that would contain just the extension modules for that
version of Python. Or the symlink trick eliminated in PEP 3147 could
be retained for just shared libraries. This approach is rejected
because it propagates the essential complexity that PEP 3147 tries to
avoid, and adds potentially several additional directories to search
for all modules, even when the number of extension modules is much
smaller than the total number of Python packages. For example, if builds
were made available both with and without wide unicode, with and
without pydebug, and with and without pymalloc, the total number of
directories searched would increase substantially.
Don’t share packages with extension modules
It has been suggested that Python packages with extension modules not
be shared among all supported Python versions on a distribution. Even
with adoption of PEP 3149, extension modules will have to be compiled
for every supported Python version, so perhaps sharing of such
packages isn’t useful anyway. Not sharing packages with extensions
though is infeasible for several reasons.
If a pure-Python package is shared in one version, should it suddenly
be not-shared if the next release adds an extension module for speed?
Also, even though all extension shared libraries will be compiled and
distributed once for every supported Python, there’s a big difference
between duplicating the .so files and duplicating all .py files.
The extra size increases the download time for such packages, and more
immediately, increases the space pressures on already constrained
distribution CD-ROMs.
Reference implementation
Work on this code is tracked in a Bazaar branch on Launchpad [9]
until it’s ready for merge into Python 3.2. The work-in-progress diff
can also be viewed [10] and is updated automatically as new changes
are uploaded.
References
[3]
Ubuntu: <http://www.ubuntu.com>
[4]
Debian: <http://www.debian.org>
[5]
http://codespeak.net/pypy/dist/pypy/doc/
[6]
http://docs.python.org/py3k/distutils/index.html
[7]
https://mail.python.org/pipermail/python-dev/2010-August/103330.html
[8]
https://mail.python.org/pipermail/python-dev/2010-June/100998.html
[9]
https://code.edge.launchpad.net/~barry/python/sovers
[10]
https://code.edge.launchpad.net/~barry/python/sovers/+merge/29411
Copyright
This document has been placed in the public domain.
| Final | PEP 3149 – ABI version tagged .so files | Standards Track | PEP 3147 described an extension to Python’s import machinery that
improved the sharing of Python source code, by allowing more than one
byte compilation file (.pyc) to be co-located with each source file. |
PEP 3150 – Statement local namespaces (aka “given” clause)
Author:
Alyssa Coghlan <ncoghlan at gmail.com>
Status:
Deferred
Type:
Standards Track
Created:
09-Jul-2010
Python-Version:
3.4
Post-History:
14-Jul-2010, 21-Apr-2011, 13-Jun-2011
Table of Contents
Abstract
Proposal
Semantics
Syntax Change
New PEP 8 Guidelines
Rationale
Design Discussion
Keyword Choice
Relation to PEP 403
Explaining Container Comprehensions and Generator Expressions
Explaining Decorator Clause Evaluation and Application
Anticipated Objections
Two Ways To Do It
Out of Order Execution
Harmful to Introspection
Lack of Real World Impact Assessment
Open Questions
Syntax for Forward References
Handling of nonlocal and global
Handling of break and continue
Handling of return and yield
Examples
Possible Additions
Rejected Alternatives
Reference Implementation
TO-DO
References
Copyright
Abstract
This PEP proposes the addition of an optional given clause to several
Python statements that do not currently have an associated code suite. This
clause will create a statement local namespace for additional names that are
accessible in the associated statement, but do not become part of the
containing namespace.
Adoption of a new symbol, ?, is proposed to denote a forward reference
to the namespace created by running the associated code suite. It will be
a reference to a types.SimpleNamespace object.
The primary motivation is to enable a more declarative style of programming,
where the operation to be performed is presented to the reader first, and the
details of the necessary subcalculations are presented in the following
indented suite. As a key example, this would elevate ordinary assignment
statements to be on par with class and def statements where the name
of the item to be defined is presented to the reader in advance of the
details of how the value of that item is calculated. It also allows named
functions to be used in a “multi-line lambda” fashion, where the name is used
solely as a placeholder in the current expression and then defined in the
following suite.
A secondary motivation is to simplify interim calculations in module and
class level code without polluting the resulting namespaces.
The intent is that the relationship between a given clause and a separate
function definition that performs the specified operation will be similar to
the existing relationship between an explicit while loop and a generator that
produces the same sequence of operations as that while loop.
The specific proposal in this PEP has been informed by various explorations
of this and related concepts over the years (e.g. [1], [2], [3], [6],
[8]), and is inspired to some degree by the where and let clauses in
Haskell. It avoids some problems that have been identified in past proposals,
but has not yet itself been subject to the test of implementation.
Proposal
This PEP proposes the addition of an optional given clause to the
syntax for simple statements which may contain an expression, or may
substitute for such a statement for purely syntactic purposes. The
current list of simple statements that would be affected by this
addition is as follows:
expression statement
assignment statement
augmented assignment statement
del statement
return statement
yield statement
raise statement
assert statement
pass statement
The given clause would allow subexpressions to be referenced by
name in the header line, with the actual definitions following in
the indented clause. As a simple example:
sorted_data = sorted(data, key=?.sort_key) given:
    def sort_key(item):
        return item.attr1, item.attr2
The new symbol ? is used to refer to the given namespace. It would be a
types.SimpleNamespace instance, so ?.sort_key functions as
a forward reference to a name defined in the given clause.
A docstring would be permitted in the given clause, and would be attached
to the result namespace as its __doc__ attribute.
The pass statement is included to provide a consistent way to skip
inclusion of a meaningful expression in the header line. While this is not
an intended use case, it isn’t one that can be prevented as multiple
alternatives (such as ... and ()) remain available even if pass
itself is disallowed.
The body of the given clause will execute in a new scope, using normal
function closure semantics. To support early binding of loop variables
and global references, as well as to allow access to other names defined at
class scope, the given clause will also allow explicit
binding operations in the header line:
# Explicit early binding via given clause
seq = []
for i in range(10):
    seq.append(?.f) given i=i in:
        def f():
            return i
assert [f() for f in seq] == list(range(10))
Semantics
The following statement:
op(?.f, ?.g) given bound_a=a, bound_b=b in:
    def f():
        return bound_a + bound_b
    def g():
        return bound_a - bound_b
Would be roughly equivalent to the following code (__var denotes a
hidden compiler variable or simply an entry on the interpreter stack):
__arg1 = a
__arg2 = b
def __scope(bound_a, bound_b):
    def f():
        return bound_a + bound_b
    def g():
        return bound_a - bound_b
    return types.SimpleNamespace(**locals())
__ref = __scope(__arg1, __arg2)
__ref.__doc__ = __scope.__doc__
op(__ref.f, __ref.g)
A given clause is essentially a nested function which is created and
then immediately executed. Unless explicitly passed in, names are looked
up using normal scoping rules, and thus names defined at class scope will
not be visible. Names declared as forward references are returned and
used in the header statement, without being bound locally in the
surrounding namespace.
Syntax Change
Current:
expr_stmt: testlist_star_expr (augassign (yield_expr|testlist) |
('=' (yield_expr|testlist_star_expr))*)
del_stmt: 'del' exprlist
pass_stmt: 'pass'
return_stmt: 'return' [testlist]
yield_stmt: yield_expr
raise_stmt: 'raise' [test ['from' test]]
assert_stmt: 'assert' test [',' test]
New:
expr_stmt: testlist_star_expr (augassign (yield_expr|testlist) |
('=' (yield_expr|testlist_star_expr))*) [given_clause]
del_stmt: 'del' exprlist [given_clause]
pass_stmt: 'pass' [given_clause]
return_stmt: 'return' [testlist] [given_clause]
yield_stmt: yield_expr [given_clause]
raise_stmt: 'raise' [test ['from' test]] [given_clause]
assert_stmt: 'assert' test [',' test] [given_clause]
given_clause: "given" [(NAME '=' test)+ "in"]":" suite
(Note that expr_stmt in the grammar is a slight misnomer, as it covers
assignment and augmented assignment in addition to simple expression
statements)
Note
These proposed grammar changes don’t yet cover the forward reference
expression syntax for accessing names defined in the statement local
namespace.
The new clause is added as an optional element of the existing statements
rather than as a new kind of compound statement in order to avoid creating
an ambiguity in the grammar. It is applied only to the specific elements
listed so that nonsense like the following is disallowed:
break given:
a = b = 1
import sys given:
a = b = 1
However, the precise Grammar change described above is inadequate, as it
creates problems for the definition of simple_stmt (which allows chaining of
multiple single line statements with “;” rather than “\n”).
So the above syntax change should instead be taken as a statement of intent.
Any actual proposal would need to resolve the simple_stmt parsing problem
before it could be seriously considered. This would likely require a
non-trivial restructuring of the grammar, breaking up small_stmt and
flow_stmt to separate the statements that potentially contain arbitrary
subexpressions and then allowing a single one of those statements with
a given clause at the simple_stmt level. Something along the lines of:
stmt: simple_stmt | given_stmt | compound_stmt
simple_stmt: small_stmt (';' (small_stmt | subexpr_stmt))* [';'] NEWLINE
small_stmt: (pass_stmt | flow_stmt | import_stmt |
global_stmt | nonlocal_stmt)
flow_stmt: break_stmt | continue_stmt
given_stmt: subexpr_stmt (given_clause |
(';' (small_stmt | subexpr_stmt))* [';']) NEWLINE
subexpr_stmt: expr_stmt | del_stmt | flow_subexpr_stmt | assert_stmt
flow_subexpr_stmt: return_stmt | raise_stmt | yield_stmt
given_clause: "given" (NAME '=' test)* ":" suite
For reference, here are the current definitions at that level:
stmt: simple_stmt | compound_stmt
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |
import_stmt | global_stmt | nonlocal_stmt | assert_stmt)
flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt
In addition to the above changes, the definition of atom would be changed
to also allow ?. The restriction of this usage to statements with
an associated given clause would be handled by a later stage of the
compilation process (likely AST construction, which already enforces
other restrictions where the grammar is overly permissive in order to
simplify the initial parsing step).
New PEP 8 Guidelines
As discussed on python-ideas ([7], [9]) new PEP 8 guidelines would also
need to be developed to provide appropriate direction on when to use the
given clause over ordinary variable assignments.
Based on the similar guidelines already present for try statements, this
PEP proposes the following additions for given statements to the
“Programming Conventions” section of PEP 8:
for code that could reasonably be factored out into a separate function,
but is not currently reused anywhere, consider using a given clause.
This clearly indicates which variables are being used only to define
subcomponents of another statement rather than to hold algorithm or
application state. This is an especially useful technique when
passing multi-line functions to operations which take callable
arguments.
keep given clauses concise. If they become unwieldy, either break
them up into multiple steps or else move the details into a separate
function.
Rationale
Function and class statements in Python have a unique property
relative to ordinary assignment statements: to some degree, they are
declarative. They present the reader of the code with some critical
information about a name that is about to be defined, before
proceeding on with the details of the actual definition in the
function or class body.
The name of the object being declared is the first thing stated
after the keyword. Other important information is also given the
honour of preceding the implementation details:
decorators (which can greatly affect the behaviour of the created
object, and were placed ahead of even the keyword and name as a matter
of practicality more so than aesthetics)
the docstring (on the first line immediately following the header line)
parameters, default values and annotations for function definitions
parent classes, metaclass and optionally other details (depending on
the metaclass) for class definitions
This PEP proposes to make a similar declarative style available for
arbitrary assignment operations, by permitting the inclusion of a
“given” suite following any simple assignment statement:
TARGET = [TARGET2 = ... TARGETN =] EXPR given:
    SUITE
By convention, code in the body of the suite should be oriented solely
towards correctly defining the assignment operation carried out in the
header line. The header line operation should also be adequately
descriptive (e.g. through appropriate choices of variable names) to
give a reader a reasonable idea of the purpose of the operation
without reading the body of the suite.
However, while they are the initial motivating use case, limiting this
feature solely to simple assignments would be overly restrictive. Once the
feature is defined at all, it would be quite arbitrary to prevent its use
for augmented assignments, return statements, yield expressions,
comprehensions and arbitrary expressions that may modify the
application state.
The given clause may also function as a more readable
alternative to some uses of lambda expressions and similar
constructs when passing one-off functions to operations
like sorted() or in callback based event-driven programming.
In module and class level code, the given clause will serve as a
clear and reliable replacement for usage of the del statement to keep
interim working variables from polluting the resulting namespace.
One potentially useful way to think of the proposed clause is as a middle
ground between conventional in-line code and separation of an
operation out into a dedicated function, just as an inline while loop may
eventually be factored out into a dedicated generator.
Design Discussion
Keyword Choice
This proposal initially used where based on the name of a similar
construct in Haskell. However, it has been pointed out that there
are existing Python libraries (such as Numpy [4]) that already use
where in the SQL query condition sense, making that keyword choice
potentially confusing.
While given may also be used as a variable name (and hence would be
deprecated using the usual __future__ dance for introducing
new keywords), it is associated much more strongly with the desired
“here are some extra variables this expression may use” semantics
for the new clause.
Reusing the with keyword has also been proposed. This has the
advantage of avoiding the addition of a new keyword, but also has
a high potential for confusion as the with clause and with
statement would look similar but do completely different things.
That way lies C++ and Perl :)
Relation to PEP 403
PEP 403 (General Purpose Decorator Clause) attempts to achieve the main
goals of this PEP using a less radical language change inspired by the
existing decorator syntax.
Despite having the same author, the two PEPs are in direct competition with
each other. PEP 403 represents a minimalist approach that attempts to achieve
useful functionality with a minimum of change from the status quo. This PEP
instead aims for a more flexible standalone statement design, which requires
a larger degree of change to the language.
Note that where PEP 403 is better suited to explaining the behaviour of
generator expressions correctly, this PEP is better able to explain the
behaviour of decorator clauses in general. Both PEPs support adequate
explanations for the semantics of container comprehensions.
Explaining Container Comprehensions and Generator Expressions
One interesting feature of the proposed construct is that it can be used as
a primitive to explain the scoping and execution order semantics of
container comprehensions:
seq2 = [x for y in seq if p(y) for x in y if q(x)]
# would be equivalent to
seq2 = ?.result given seq=seq:
    result = []
    for y in seq:
        if p(y):
            for x in y:
                if q(x):
                    result.append(x)
The important point in this expansion is that it explains why comprehensions
appear to misbehave at class scope: only the outermost iterator is evaluated
at class scope, while all predicates, nested iterators and value expressions
are evaluated inside a nested scope.
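As a concrete illustration (this example is not part of the PEP; the class and
attribute names are invented purely for demonstration), the behaviour can be
observed directly in current Python:
class Config:
    names = ["a", "b", "c"]

    # Works: the outermost iterable ("names") is evaluated at class scope.
    upper = [n.upper() for n in names]

    # Would raise NameError if uncommented: the second "names" is an inner
    # iterable, so it is looked up inside the comprehension's hidden function
    # scope, where class attributes are not visible.
    # pairs = [(x, y) for x in names for y in names]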
Note that, unlike PEP 403, the current version of this PEP cannot
provide a precisely equivalent expansion for a generator expression. The
closest it can get is to define an additional level of scoping:
seq2 = ?.g(seq) given:
    def g(seq):
        for y in seq:
            if p(y):
                for x in y:
                    if q(x):
                        yield x
This limitation could be remedied by permitting the given clause to be
a generator function, in which case ? would refer to a generator-iterator
object rather than a simple namespace:
seq2 = ? given seq=seq in:
    for y in seq:
        if p(y):
            for x in y:
                if q(x):
                    yield x
However, this would make the meaning of “?” quite ambiguous, even more so
than is already the case for the meaning of def statements (which will
usually have a docstring indicating whether or not a function definition is
actually a generator)
Explaining Decorator Clause Evaluation and Application
The standard explanation of decorator clause evaluation and application
has to deal with the idea of hidden compiler variables in order to show
steps in their order of execution. The given statement allows a decorated
function definition like:
@classmethod
def classname(cls):
    return cls.__name__
To instead be explained as roughly equivalent to:
classname = ?.d1(classname) given:
    d1 = classmethod
    def classname(cls):
        return cls.__name__
Anticipated Objections
Two Ways To Do It
A lot of code may now be written with values defined either before the
expression where they are used or afterwards in a given clause, creating
two ways to do it, perhaps without an obvious way of choosing between them.
On reflection, I feel this is a misapplication of the “one obvious way”
aphorism. Python already offers lots of ways to write code. We can use
a for loop or a while loop, a functional style or an imperative style or an
object oriented style. The language, in general, is designed to let people
write code that matches the way they think. Since different people think
differently, the way they write their code will change accordingly.
Such stylistic questions in a code base are rightly left to the development
group responsible for that code. When does an expression get so complicated
that the subexpressions should be taken out and assigned to variables, even
though those variables are only going to be used once? When should an inline
while loop be replaced with a generator that implements the same logic?
Opinions differ, and that’s OK.
However, explicit PEP 8 guidance will be needed for CPython and the standard
library, and that is discussed in the proposal above.
Out of Order Execution
The given clause makes execution jump around a little strangely, as the
body of the given clause is executed before the simple statement in the
clause header. The closest any other part of Python comes to this is the out
of order evaluation in list comprehensions, generator expressions and
conditional expressions and the delayed application of decorator functions to
the function they decorate (the decorator expressions themselves are executed
in the order they are written).
While this is true, the syntax is intended for cases where people are
themselves thinking about a problem out of sequence (at least as far as
the language is concerned). As an example of this, consider the following
thought in the mind of a Python user:
I want to sort the items in this sequence according to the values of
attr1 and attr2 on each item.
If they’re comfortable with Python’s lambda expressions, then they might
choose to write it like this:
sorted_list = sorted(original, key=lambda v: (v.attr1, v.attr2))
That gets the job done, but it hardly reaches the standard of executable
pseudocode that fits Python’s reputation.
If they don’t like lambda specifically, the operator module offers an
alternative that still allows the key function to be defined inline:
sorted_list = sorted(original,
                     key=operator.attrgetter('attr1', 'attr2'))
Again, it gets the job done, but even the most generous of readers would
not consider that to be “executable pseudocode”.
If they think both of the above options are ugly and confusing, or they need
logic in their key function that can’t be expressed as an expression (such
as catching an exception), then Python currently forces them to reverse the
order of their original thought and define the sorting criteria first:
def sort_key(item):
    return item.attr1, item.attr2

sorted_list = sorted(original, key=sort_key)
“Just define a function” has been the rote response to requests for multi-line
lambda support for years. As with the above options, it gets the job done,
but it really does represent a break between what the user is thinking and
what the language allows them to express.
I believe the proposal in this PEP would finally let Python get close to the
“executable pseudocode” bar for the kind of thought expressed above:
sorted_list = sorted(original, key=?.key) given:
    def key(item):
        return item.attr1, item.attr2
Everything is in the same order as it was in the user’s original thought, and
they don’t even need to come up with a name for the sorting criteria: it is
possible to reuse the keyword argument name directly.
A possible enhancement to this proposal would be to provide a convenient
shorthand syntax to say “use the given clause contents as keyword
arguments”. Even without dedicated syntax, that can be written simply as
**vars(?).
Harmful to Introspection
Poking around in module and class internals is an invaluable tool for
white-box testing and interactive debugging. The given clause will be
quite effective at preventing access to temporary state used during
calculations (although no more so than current usage of del statements
in that regard).
While this is a valid concern, design for testability is an issue that
cuts across many aspects of programming. If a component needs to be tested
independently, then a given statement should be refactored into separate
statements so that information is exposed to the test suite. This isn’t
significantly different from refactoring an operation hidden inside a
function or generator out into its own function purely to allow it to be
tested in isolation.
Lack of Real World Impact Assessment
The examples in the current PEP are almost all relatively small “toy”
examples. The proposal in this PEP needs to be subjected to the test of
application to a large code base (such as the standard library or a large
Twisted application) in a search for examples where the readability of real
world code is genuinely enhanced.
This is more a deficiency of the PEP than of the idea itself, though. If
it wasn’t a real world problem, we wouldn’t get so many complaints about
the lack of multi-line lambda support and Ruby’s block construct
probably wouldn’t be quite so popular.
Open Questions
Syntax for Forward References
The ? symbol is proposed for forward references to the given namespace
as it is short, currently unused and suggests “there’s something missing
here that will be filled in later”.
The proposal in the PEP doesn’t neatly parallel any existing Python feature,
so reusing an already used symbol has been deliberately avoided.
Handling of nonlocal and global
nonlocal and global are explicitly disallowed in the given clause
suite and will be syntax errors if they occur. They will work normally if
they appear within a def statement within that suite.
Alternatively, they could be defined as operating as if the anonymous
functions were defined as in the expansion above.
Handling of break and continue
break and continue will operate as if the anonymous functions were
defined as in the expansion above. They will be syntax errors if they occur
in the given clause suite but will work normally if they appear within
a for or while loop as part of that suite.
Handling of return and yield
return and yield are explicitly disallowed in the given clause
suite and will be syntax errors if they occur. They will work normally if
they appear within a def statement within that suite.
Examples
Defining callbacks for event driven programming:
# Current Python (definition before use)
def cb(sock):
    ... # Do something with socket
def eb(exc):
    logging.exception(
        "Failed connecting to %s:%s", host, port)
loop.create_connection((host, port), cb, eb)

# Becomes:
loop.create_connection((host, port), ?.cb, ?.eb) given:
    def cb(sock):
        ... # Do something with socket
    def eb(exc):
        logging.exception(
            "Failed connecting to %s:%s", host, port)
Defining “one-off” classes which typically only have a single instance:
# Current Python (instantiation after definition)
class public_name():
    ... # However many lines
public_name = public_name(*params)

# Current Python (custom decorator)
def singleton(*args, **kwds):
    def decorator(cls):
        return cls(*args, **kwds)
    return decorator

@singleton(*params)
class public_name():
    ... # However many lines

# Becomes:
public_name = ?.MeaningfulClassName(*params) given:
    class MeaningfulClassName():
        ... # Should trawl the stdlib for an example of doing this
Calculating attributes without polluting the local namespace (from os.py):
# Current Python (manual namespace cleanup)
def _createenviron():
    ... # 27 line function
environ = _createenviron()
del _createenviron

# Becomes:
environ = ?._createenviron() given:
    def _createenviron():
        ... # 27 line function
Replacing default argument hack (from functools.lru_cache):
# Current Python (default argument hack)
def decorating_function(user_function,
                        tuple=tuple, sorted=sorted, len=len, KeyError=KeyError):
    ... # 60 line function
return decorating_function

# Becomes:
return ?.decorating_function given:
    # Cell variables rather than locals, but should give similar speedup
    tuple, sorted, len, KeyError = tuple, sorted, len, KeyError
    def decorating_function(user_function):
        ... # 60 line function
    # This example also nicely makes it clear that there is nothing in the
    # function after the nested function definition. Due to additional
    # nested functions, that isn't entirely clear in the current code.
Possible Additions
The current proposal allows the addition of a given clause only
for simple statements. Extending the idea to allow the use of
compound statements would be quite possible (by appending the given
clause as an independent suite at the end), but doing so raises
serious readability concerns (as values defined in the given
clause may be used well before they are defined, exactly the kind
of readability trap that other features like decorators and with
statements are designed to eliminate)
The “explicit early binding” variant may be applicable to the discussions
on python-ideas on how to eliminate the default argument hack. A given
clause in the header line for functions (after the return type annotation)
may be the answer to that question.
Rejected Alternatives
An earlier version of this PEP allowed implicit forward references to the
names in the trailing suite, and also used implicit early binding
semantics. Both of these ideas substantially complicated the proposal
without providing a sufficient increase in expressive power. The current
proposal, with explicit forward references and early binding, brings the
new construct into line with existing scoping semantics, greatly
improving the chances the idea can actually be implemented.
In addition to the proposals made here, there have also been suggestions
of two suite “in-order” variants which provide the limited scoping of
names without supporting out-of-order execution. I believe these
suggestions largely miss the point of what people are complaining about
when they ask for multi-line lambda support - it isn’t that coming up
with a name for the subexpression is especially difficult, it’s that
naming the function before the statement that uses it means the code
no longer matches the way the developer thinks about the problem at hand.
I’ve made some unpublished attempts to allow direct references to the
closure implicitly created by the given clause, while still retaining
the general structure of the syntax as defined in this PEP (For example,
allowing a subexpression like ?given or :given to be used in
expressions to indicate a direct reference to the implied closure, thus
preventing it from being called automatically to create the local namespace).
All such attempts have appeared unattractive and confusing compared to
the simpler decorator-inspired proposal in PEP 403.
Reference Implementation
None as yet. If you want a crash course in Python namespace
semantics and code compilation, feel free to try ;)
TO-DO
Mention PEP 359 and possible uses for locals() in the given clause
Figure out if this can be used internally to make the implementation of
zero-argument super() calls less awful
References
[1]
Explicitation lines in Python
[2]
‘where’ statement in Python
[3]
Where-statement (Proposal for function expressions)
[4]
Name conflict with NumPy for ‘where’ keyword choice
[6]
Assignments in list/generator expressions
[7]
Possible PEP 3150 style guidelines (#1)
[8]
Discussion of PEP 403 (statement local function definition)
[9]
Possible PEP 3150 style guidelines (#2)
The “Status quo wins a stalemate” design principle
Multi-line lambdas (again!)
Copyright
This document has been placed in the public domain.
| Deferred | PEP 3150 – Statement local namespaces (aka “given” clause) | Standards Track | This PEP proposes the addition of an optional given clause to several
Python statements that do not currently have an associated code suite. This
clause will create a statement local namespace for additional names that are
accessible in the associated statement, but do not become part of the
containing namespace. |
PEP 3152 – Cofunctions
Author:
Gregory Ewing <greg.ewing at canterbury.ac.nz>
Status:
Rejected
Type:
Standards Track
Created:
13-Feb-2009
Python-Version:
3.3
Post-History:
Table of Contents
Abstract
Rejection
Specification
Cofunction definitions
Cocalls
New builtins, attributes and C API functions
Motivation and Rationale
Prototype Implementation
Copyright
Abstract
A syntax is proposed for defining and calling a special type of
generator called a ‘cofunction’. It is designed to provide a
streamlined way of writing generator-based coroutines, and allow the
early detection of certain kinds of error that are easily made when
writing such code, which otherwise tend to cause hard-to-diagnose
symptoms.
This proposal builds on the ‘yield from’ mechanism described in PEP
380, and describes some of the semantics of cofunctions in terms of
it. However, it would be possible to define and implement cofunctions
independently of PEP 380 if so desired.
Rejection
See https://mail.python.org/pipermail/python-dev/2015-April/139503.html
Specification
Cofunction definitions
A new keyword codef is introduced which is used in place of
def to define a cofunction. A cofunction is a special kind of
generator having the following characteristics:
A cofunction is always a generator, even if it does not contain any
yield or yield from expressions.
A cofunction cannot be called the same way as an ordinary function.
An exception is raised if an ordinary call to a cofunction is
attempted.
Cocalls
Calls from one cofunction to another are made by marking the call with
a new keyword cocall. The expression
cocall f(*args, **kwds)
is semantically equivalent to
yield from f.__cocall__(*args, **kwds)
except that the object returned by __cocall__ is expected to be an
iterator, so the step of calling iter() on it is skipped.
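As a rough illustration of what this expansion means in practice, the following
sketch uses today's generator syntax rather than the proposed codef and cocall
keywords; the function names step and task are invented for the example and are
not part of this PEP:
def step(n):
    yield "working on %r" % (n,)        # a real cofunction would suspend here
    return n * 2

def task(items):
    results = []
    for n in items:
        # Under this PEP the next line would read: cocall step(n)
        results.append((yield from step(n)))
    return results

# Driving the outermost coroutine by hand, much as a simple scheduler would:
gen = task([1, 2, 3])
try:
    while True:
        print(next(gen))
except StopIteration as exc:
    print(exc.value)                    # -> [2, 4, 6]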
The full syntax of a cocall expression is described by the following
grammar lines:
atom: cocall | <existing alternatives for atom>
cocall: 'cocall' atom cotrailer* '(' [arglist] ')'
cotrailer: '[' subscriptlist ']' | '.' NAME
The cocall keyword is syntactically valid only inside a
cofunction. A SyntaxError will result if it is used in any other
context.
Objects which implement __cocall__ are expected to return an object
obeying the iterator protocol. Cofunctions respond to __cocall__ the
same way as ordinary generator functions respond to __call__, i.e. by
returning a generator-iterator.
Certain objects that wrap other callable objects, notably bound
methods, will be given __cocall__ implementations that delegate to the
underlying object.
New builtins, attributes and C API functions
To facilitate interfacing cofunctions with non-coroutine code, there will
be a built-in function costart whose definition is equivalent to
def costart(obj, *args, **kwds):
    return obj.__cocall__(*args, **kwds)
There will also be a corresponding C API function
PyObject *PyObject_CoCall(PyObject *obj, PyObject *args, PyObject *kwds)
It is left unspecified for now whether a cofunction is a distinct type
of object or, like a generator function, is simply a specially-marked
function instance. If the latter, a read-only boolean attribute
__iscofunction__ should be provided to allow testing whether a
given function object is a cofunction.
Motivation and Rationale
The yield from syntax is reasonably self-explanatory when used for
the purpose of delegating part of the work of a generator to another
function. It can also be used to good effect in the implementation of
generator-based coroutines, but it reads somewhat awkwardly when used
for that purpose, and tends to obscure the true intent of the code.
Furthermore, using generators as coroutines is somewhat error-prone.
If one forgets to use yield from when it should have been used, or
uses it when it shouldn’t have, the symptoms that result can be
obscure and confusing.
Finally, sometimes there is a need for a function to be a coroutine
even though it does not yield anything, and in these cases it is
necessary to resort to kludges such as if 0: yield to force it to
be a generator.
The codef and cocall constructs address the first issue by
making the syntax directly reflect the intent, that is, that the
function forms part of a coroutine.
The second issue is addressed by making it impossible to mix coroutine
and non-coroutine code in ways that don’t make sense. If the rules
are violated, an exception is raised that points out exactly what and
where the problem is.
Lastly, the need for dummy yields is eliminated by making the form of
definition determine whether the function is a coroutine, rather than
what it contains.
Prototype Implementation
An implementation in the form of patches to Python 3.1.2 can be found
here:
http://www.cosc.canterbury.ac.nz/greg.ewing/python/generators/cofunctions.html
Copyright
This document has been placed in the public domain.
| Rejected | PEP 3152 – Cofunctions | Standards Track | A syntax is proposed for defining and calling a special type of
generator called a ‘cofunction’. It is designed to provide a
streamlined way of writing generator-based coroutines, and allow the
early detection of certain kinds of error that are easily made when
writing such code, which otherwise tend to cause hard-to-diagnose
symptoms. |
PEP 3153 – Asynchronous IO support
Author:
Laurens Van Houtven <_ at lvh.cc>
Status:
Superseded
Type:
Standards Track
Created:
29-May-2011
Post-History:
Superseded-By:
3156
Table of Contents
Abstract
Rationale
Communication abstractions
Transports
Protocols
Why separate protocols and transports?
Flow control
Consumers
Producers
Considered API alternatives
Generators as producers
References
Copyright
Abstract
This PEP describes an abstraction of asynchronous IO for the Python
standard library.
The goal is to reach an abstraction that can be implemented by many
different asynchronous IO backends and provides a target for library
developers to write code portable between those different backends.
Rationale
People who want to write asynchronous code in Python right now have a
few options:
asyncore and asynchat
something bespoke, most likely based on the select module
using a third party library, such as Twisted or gevent
Unfortunately, each of these options has its downsides, which this PEP
tries to address.
Despite having been part of the Python standard library for a long
time, the asyncore module suffers from fundamental flaws following
from an inflexible API that does not stand up to the expectations of a
modern asynchronous networking module.
Moreover, its approach is too simplistic to provide developers with
all the tools they need in order to fully exploit the potential of
asynchronous networking.
The most popular solution right now used in production involves the
use of third party libraries. These often provide satisfactory
solutions, but there is a lack of compatibility between these
libraries, which tends to make codebases very tightly coupled to the
library they use.
This current lack of portability between different asynchronous IO
libraries causes a lot of duplicated effort for third party library
developers. A sufficiently powerful abstraction could mean that
asynchronous code gets written once, but used everywhere.
An eventual added goal would be for standard library implementations
of wire and network protocols to evolve towards being real protocol
implementations, as opposed to standalone libraries that do everything
themselves, including making blocking calls to recv(). This means they could be
easily reused for both synchronous and asynchronous code.
Communication abstractions
Transports
Transports provide a uniform API for reading bytes from and writing
bytes to different kinds of connections. Transports in this PEP are
always ordered, reliable, bidirectional, stream-oriented two-endpoint
connections. This might be a TCP socket, an SSL connection, a pipe
(named or otherwise), a serial port… It may abstract a file
descriptor on POSIX platforms or a Handle on Windows or some other
data structure appropriate to a particular platform. It encapsulates
all of the particular implementation details of using that platform
data structure and presents a uniform interface for application
developers.
Transports talk to two things: the other side of the connection on one
hand, and a protocol on the other. It’s a bridge between the specific
underlying transfer mechanism and the protocol. Its job can be
described as allowing the protocol to just send and receive bytes,
taking care of all of the magic that needs to happen to those bytes to
be eventually sent across the wire.
The primary feature of a transport is sending bytes to a protocol and
receiving bytes from the underlying protocol. Writing to the
transport is done using the write and write_sequence methods.
The latter method is a performance optimization, to allow software to
take advantage of specific capabilities in some transport mechanisms.
Specifically, this allows transports to use writev instead of write
or send, also known as scatter/gather IO.
A transport can be paused and resumed. This will cause it to buffer
data coming from protocols and stop sending received data to the
protocol.
A transport can also be closed, half-closed and aborted. A closed
transport will finish writing all of the data queued in it to the
underlying mechanism, and will then stop reading or writing data.
Aborting a transport stops it, closing the connection without sending
any data that is still queued.
Further writes will result in exceptions being thrown. A half-closed
transport may not be written to anymore, but will still accept
incoming data.
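To make the shape of this API concrete, here is a minimal sketch of a transport
interface matching the description above; the class name, the exact method
signatures and the half_close method name are editorial assumptions rather than
part of this PEP:
from abc import ABCMeta, abstractmethod

class Transport(metaclass=ABCMeta):
    # Illustrative only; the PEP deliberately leaves exact signatures open.

    @abstractmethod
    def write(self, data):
        """Queue bytes for delivery to the other side of the connection."""

    def write_sequence(self, chunks):
        # Optimization hook: implementations backed by writev() may override
        # this to use scatter/gather IO instead of repeated write() calls.
        for chunk in chunks:
            self.write(chunk)

    @abstractmethod
    def pause(self):
        """Stop delivering received data to the protocol, buffering it instead."""

    @abstractmethod
    def resume(self):
        """Resume delivering buffered and newly received data to the protocol."""

    @abstractmethod
    def half_close(self):
        """Disallow further writes while still accepting incoming data."""

    @abstractmethod
    def close(self):
        """Flush queued data to the underlying mechanism, then stop reading and writing."""

    @abstractmethod
    def abort(self):
        """Stop immediately, discarding any outgoing data still queued."""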
Protocols
Protocols are probably more familiar to new users. The terminology is
consistent with what you would expect from something called a
protocol: the protocols most people think of first, like HTTP, IRC,
SMTP… are all examples of something that would be implemented in a
protocol.
The shortest useful definition of a protocol is a (usually two-way)
bridge between the transport and the rest of the application logic. A
protocol receives bytes from a transport and translates that
information into some behavior, typically resulting in some method
calls on an object. Similarly, application logic calls some methods
on the protocol, which the protocol translates into bytes and
communicates to the transport.
One of the simplest protocols is a line-based protocol, where data is
delimited by \r\n. The protocol will receive bytes from the
transport and buffer them until there is at least one complete line.
Once that’s done, it will pass this line along to some object.
Ideally that would be accomplished using a callable or even a
completely separate object composed by the protocol, but it could also
be implemented by subclassing (as is the case with Twisted’s
LineReceiver). For the other direction, the protocol could have a
write_line method, which adds the required \r\n and passes the
new bytes buffer on to the transport.
This PEP suggests a generalized LineReceiver called
ChunkProtocol, where a “chunk” is a message in a stream, delimited
by the specified delimiter. Instances take a delimiter and a callable
that will be called with a chunk of data once it’s received (as
opposed to Twisted’s subclassing behavior). ChunkProtocol also
has a write_chunk method analogous to the write_line method
described above.
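A minimal sketch of such a ChunkProtocol is given below; the data_received
method name and the way the transport is attached are editorial assumptions,
as the PEP itself only specifies the delimiter, the per-chunk callable and
write_chunk:
class ChunkProtocol:
    def __init__(self, delimiter, callback):
        self.delimiter = delimiter
        self.callback = callback     # called with each complete chunk
        self.transport = None        # assumed to be set when attached to a transport
        self._buffer = b""

    def data_received(self, data):
        # Buffer incoming bytes and hand complete chunks to the callback.
        self._buffer += data
        while True:
            chunk, sep, rest = self._buffer.partition(self.delimiter)
            if not sep:
                break
            self._buffer = rest
            self.callback(chunk)

    def write_chunk(self, chunk):
        # Append the delimiter and pass the resulting bytes on to the transport.
        self.transport.write(chunk + self.delimiter)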
Why separate protocols and transports?
This separation between protocol and transport often confuses people
who first come across it. In fact, the standard library itself does
not make this distinction in many cases, particularly not in the API
it provides to users.
It is nonetheless a very useful distinction. In the worst case, it
simplifies the implementation by clear separation of concerns.
However, it often serves the far more useful purpose of being able to
reuse protocols across different transports.
Consider a simple RPC protocol. The same bytes may be transferred
across many different transports, for example pipes or sockets. To
help with this, we separate the protocol out from the transport. The
protocol just reads and writes bytes, and doesn’t really care what
mechanism is used to eventually transfer those bytes.
This also allows for protocols to be stacked or nested easily,
allowing for even more code reuse. A common example of this is
JSON-RPC: according to the specification, it can be used across both
sockets and HTTP [1]. In practice, it tends to be primarily
encapsulated in HTTP. The protocol-transport abstraction allows us to
build a stack of protocols and transports that allow you to use HTTP
as if it were a transport. For JSON-RPC, that might get you a stack
somewhat like this:
TCP socket transport
HTTP protocol
HTTP-based transport
JSON-RPC protocol
Application code
Flow control
Consumers
Consumers consume bytes produced by producers. Together with
producers, they make flow control possible.
Consumers primarily play a passive role in flow control. They get
called whenever a producer has some data available. They then process
that data, and typically yield control back to the producer.
Consumers typically implement buffers of some sort. They make flow
control possible by telling their producer about the current status of
those buffers. A consumer can instruct a producer to stop producing
entirely, stop producing temporarily, or resume producing if it has
been told to pause previously.
Producers are registered to the consumer using the register
method.
Producers
Where consumers consume bytes, producers produce them.
Producers are modeled after the IPushProducer interface found in
Twisted. Although there is an IPullProducer as well, it is on the
whole far less interesting and therefore probably out of the scope of
this PEP.
Although producers can be told to stop producing entirely, the two
most interesting methods they have are pause and resume.
These are usually called by the consumer, to signify whether it is
ready to process (“consume”) more data or not. Consumers and
producers cooperate to make flow control possible.
In addition to the Twisted IPushProducer interface, producers have a
half_register method which is called with the consumer when the
consumer tries to register that producer. In most cases, this will
just be a case of setting self.consumer = consumer, but some
producers may require more complex preconditions or behavior when a
consumer is registered. End-users are not supposed to call this
method directly.
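Putting the registration and flow-control methods described above together, a
minimal sketch looks as follows; the buffer threshold and everything not named
in the prose (including the Producer and Consumer class names) are editorial
assumptions:
class Producer:
    def __init__(self):
        self.consumer = None

    def half_register(self, consumer):
        # Called by the consumer during registration, not by end users.
        self.consumer = consumer

    def pause(self):
        """Temporarily stop producing data (called by the consumer)."""

    def resume(self):
        """Resume producing data after a pause (called by the consumer)."""

class Consumer:
    def __init__(self):
        self.producer = None
        self._buffer = bytearray()

    def register(self, producer):
        producer.half_register(self)
        self.producer = producer

    def write(self, data):
        # Called by the producer whenever data is available; apply
        # backpressure when the internal buffer grows too large.
        self._buffer.extend(data)
        if self.producer is not None and len(self._buffer) > 64 * 1024:
            self.producer.pause()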
Considered API alternatives
Generators as producers
Generators have been suggested as a way to implement producers.
However, there appear to be a few problems with this.
First of all, there is a conceptual problem. A generator, in a sense,
is “passive”. It needs to be told, through a method call, to take
action. A producer is “active”: it initiates those method calls. A
real producer has a symmetric relationship with its consumer. In the
case of a generator-turned-producer, only the consumer would have a
reference, and the producer is blissfully unaware of the consumer’s
existence.
This conceptual problem translates into a few technical issues as
well. After a successful write method call on its consumer, a
(push) producer is free to take action once more. In the case of a
generator, it would need to be told, either by asking for the next
object through the iteration protocol (a process which could block
indefinitely), or perhaps by throwing some kind of signal exception
into it.
This signaling setup may provide a technically feasible solution, but
it is still unsatisfactory. For one, this introduces unwarranted
complexity in the consumer, which now not only needs to understand how
to receive and process data, but also how to ask for new data and deal
with the case of no new data being available.
This latter edge case is particularly problematic. It needs to be
taken care of, since the entire operation is not allowed to block.
However, generators can not raise an exception on iteration without
terminating, thereby losing the state of the generator. As a result,
signaling a lack of available data would have to be done using a
sentinel value, instead of being done using the exception mechanism.
Last but not least, nobody produced actually working code
demonstrating how they could be used.
References
[1]
Sections 2.1 and
2.2 .
Copyright
This document has been placed in the public domain.
| Superseded | PEP 3153 – Asynchronous IO support | Standards Track | This PEP describes an abstraction of asynchronous IO for the Python
standard library. |
PEP 3154 – Pickle protocol version 4
Author:
Antoine Pitrou <solipsis at pitrou.net>
Status:
Final
Type:
Standards Track
Created:
11-Aug-2011
Python-Version:
3.4
Post-History:
12-Aug-2011
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
Proposed changes
Framing
Binary encoding for all opcodes
Serializing more “lookupable” objects
64-bit opcodes for large objects
Native opcodes for sets and frozensets
Calling __new__ with keyword arguments
Better string encoding
Smaller memoization
Summary of new opcodes
Alternative ideas
Prefetching
Acknowledgments
References
Copyright
Abstract
Data serialized using the pickle module must be portable across Python
versions. It should also support the latest language features as well
as implementation-specific features. For this reason, the pickle
module knows about several protocols (currently numbered from 0 to 3),
each of which appeared in a different Python version. Using a
low-numbered protocol version allows exchanging data with old Python
versions, while using a high-numbered protocol allows access to newer
features and sometimes more efficient resource use (both CPU time
required for (de)serializing, and disk size / network bandwidth
required for data transfer).
Rationale
The latest current protocol, coincidentally named protocol 3, appeared
with Python 3.0 and supports the new incompatible features in the
language (mainly, unicode strings by default and the new bytes
object). The opportunity was not taken at the time to improve the
protocol in other ways.
This PEP is an attempt to foster a number of incremental improvements
in a new pickle protocol version. The PEP process is used in order to
gather as many improvements as possible, because the introduction of a
new pickle protocol should be a rare occurrence.
Proposed changes
Framing
Traditionally, when unpickling an object from a stream (by calling
load() rather than loads()), many small read()
calls can be issued on the file-like object, with a potentially huge
performance impact.
Protocol 4, by contrast, features binary framing. The general structure
of a pickle is thus the following:
+------+------+
| 0x80 | 0x04 | protocol header (2 bytes)
+------+------+
| OP | FRAME opcode (1 byte)
+------+------+-----------+
| MM MM MM MM MM MM MM MM | frame size (8 bytes, little-endian)
+------+------------------+
| .... | first frame contents (M bytes)
+------+
| OP | FRAME opcode (1 byte)
+------+------+-----------+
| NN NN NN NN NN NN NN NN | frame size (8 bytes, little-endian)
+------+------------------+
| .... | second frame contents (N bytes)
+------+
etc.
To keep the implementation simple, it is forbidden for a pickle opcode
to straddle frame boundaries. The pickler takes care not to produce such
pickles, and the unpickler refuses them. Also, there is no “last frame”
marker. The last frame is simply the one which ends with a STOP opcode.
A well-written C implementation doesn’t need additional memory copies
for the framing layer, preserving general (un)pickling efficiency.
Note
How the pickler decides to partition the pickle stream into frames is an
implementation detail. For example, “closing” a frame as soon as it
reaches ~64 KiB is a reasonable choice for both performance and pickle
size overhead.
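For illustration, the frame headers of a protocol 4 pickle can be walked with a
few lines of code; this sketch is not part of the PEP, and the concrete FRAME
opcode value (0x95) is taken from the eventual CPython implementation rather
than mandated by this document:
import struct

FRAME = 0x95   # opcode value used by CPython's protocol 4 pickler

def iter_frame_sizes(payload):
    # Yield the declared size of each frame in a protocol 4 pickle,
    # without interpreting the opcodes inside the frames.
    pos = 2                        # skip the 2-byte protocol header (0x80 0x04)
    while pos < len(payload) and payload[pos] == FRAME:
        size, = struct.unpack_from("<Q", payload, pos + 1)   # 8 bytes, little-endian
        yield size
        pos += 1 + 8 + size        # opcode + length field + frame contents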
Binary encoding for all opcodes
The GLOBAL opcode, which is still used in protocol 3, uses the
so-called “text” mode of the pickle protocol, which involves looking
for newlines in the pickle stream. It also complicates the implementation
of binary framing.
Protocol 4 forbids use of the GLOBAL opcode and replaces it with
STACK_GLOBAL, a new opcode which takes its operand from the stack.
Serializing more “lookupable” objects
By default, pickle is only able to serialize module-global functions and
classes. Supporting other kinds of objects, such as unbound methods [4],
is a common request. Actually, third-party support for some of them, such
as bound methods, is implemented in the multiprocessing module [5].
The __qualname__ attribute from PEP 3155 makes it possible to
lookup many more objects by name. Making the STACK_GLOBAL opcode accept
dot-separated names would allow the standard pickle implementation to
support all those kinds of objects.
64-bit opcodes for large objects
Current protocol versions export object sizes for various built-in
types (str, bytes) as 32-bit ints. This forbids serialization of
large data [1]. New opcodes are required to support very large bytes
and str objects.
Native opcodes for sets and frozensets
Many common built-in types (such as str, bytes, dict, list, tuple)
have dedicated opcodes to improve resource consumption when
serializing and deserializing them; however, sets and frozensets
don’t. Adding such opcodes would be an obvious improvement. Also,
dedicated set support could help remove the current impossibility of
pickling self-referential sets [2].
Calling __new__ with keyword arguments
Currently, classes whose __new__ mandates the use of keyword-only
arguments can not be pickled (or, rather, unpickled) [3]. Both a new
special method (__getnewargs_ex__) and a new opcode (NEWOBJ_EX)
are needed. The __getnewargs_ex__ method, if it exists, must
return a two-tuple (args, kwargs) where the first item is the
tuple of positional arguments and the second item is the dict of
keyword arguments for the class’s __new__ method.
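As an illustrative example (the Point class below is invented for this purpose,
not taken from the PEP), a class whose __new__ accepts only keyword arguments
becomes picklable under protocol 4 by defining __getnewargs_ex__:
import pickle

class Point:
    def __new__(cls, *, x, y):                   # keyword-only arguments
        self = super().__new__(cls)
        self.x, self.y = x, y
        return self

    def __getnewargs_ex__(self):
        return (), {"x": self.x, "y": self.y}    # (args, kwargs) for __new__

p = pickle.loads(pickle.dumps(Point(x=1, y=2), protocol=4))
assert (p.x, p.y) == (1, 2)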
Better string encoding
Short str objects currently have their length coded as a 4-bytes
integer, which is wasteful. A specific opcode with a 1-byte length
would make many pickles smaller.
Smaller memoization
The PUT opcodes all require an explicit index to select in which entry
of the memo dictionary the top-of-stack is memoized. However, in practice
those numbers are allocated in sequential order. A new opcode, MEMOIZE,
will instead store the top-of-stack at the index equal to the current
size of the memo dictionary. This allows for shorter pickles, since PUT
opcodes are emitted for all non-atomic datatypes.
Summary of new opcodes
These reflect the state of the proposed implementation (thanks mostly
to Alexandre Vassalotti’s work):
FRAME: introduce a new frame (followed by the 8-byte frame size
and the frame contents).
SHORT_BINUNICODE: push a utf8-encoded str object with a one-byte
size prefix (therefore less than 256 bytes long).
BINUNICODE8: push a utf8-encoded str object with an eight-byte
size prefix (for strings longer than 2**32 bytes, which therefore cannot
be serialized using BINUNICODE).
BINBYTES8: push a bytes object with an eight-byte size prefix
(for bytes objects longer than 2**32 bytes, which therefore cannot be
serialized using BINBYTES).
EMPTY_SET: push a new empty set object on the stack.
ADDITEMS: add the topmost stack items to the set (to be used with
EMPTY_SET).
FROZENSET: create a frozenset object from the topmost stack items,
and push it on the stack.
NEWOBJ_EX: take the three topmost stack items cls, args
and kwargs, and push the result of calling
cls.__new__(*args, **kwargs).
STACK_GLOBAL: take the two topmost stack items module_name and
qualname, and push the result of looking up the dotted qualname
in the module named module_name.
MEMOIZE: store the top-of-stack object in the memo dictionary with
an index equal to the current size of the memo dictionary.
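The combined effect of these opcodes can be inspected with the standard
pickletools module; the snippet below is purely illustrative (the sample data
is arbitrary), and the exact opcode stream depends on the CPython version:
import pickle, pickletools

sample = {"name": "naïve", "tags": {"a", "b"}, "frozen": frozenset({1, 2})}
payload = pickle.dumps(sample, protocol=4)
pickletools.dis(payload)   # shows FRAME, SHORT_BINUNICODE, MEMOIZE, EMPTY_SET,
                           # ADDITEMS and FROZENSET at work for this data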
Alternative ideas
Prefetching
Serhiy Storchaka suggested to replace framing with a special PREFETCH
opcode (with a 2- or 4-bytes argument) to declare known pickle chunks
explicitly. Large data may be pickled outside such chunks. A naïve
unpickler should be able to skip the PREFETCH opcode and still decode
pickles properly, but good error handling would require checking that
the PREFETCH length falls on an opcode boundary.
Acknowledgments
In alphabetic order:
Alexandre Vassalotti, for starting the second PEP 3154 implementation [6]
Serhiy Storchaka, for discussing the framing proposal [6]
Stefan Mihaila, for starting the first PEP 3154 implementation as a
Google Summer of Code project mentored by Alexandre Vassalotti [7].
References
[1]
“pickle not 64-bit ready”:
http://bugs.python.org/issue11564
[2]
“Cannot pickle self-referencing sets”:
http://bugs.python.org/issue9269
[3]
“pickle/copyreg doesn’t support keyword only arguments in __new__”:
http://bugs.python.org/issue4727
[4]
“pickle should support methods”:
http://bugs.python.org/issue9276
[5]
Lib/multiprocessing/forking.py:
http://hg.python.org/cpython/file/baea9f5f973c/Lib/multiprocessing/forking.py#l54
[6] (1, 2)
Implement PEP 3154, by Alexandre Vassalotti
http://bugs.python.org/issue17810
[7]
Implement PEP 3154, by Stefan Mihaila
http://bugs.python.org/issue15642
Copyright
This document has been placed in the public domain.
| Final | PEP 3154 – Pickle protocol version 4 | Standards Track | Data serialized using the pickle module must be portable across Python
versions. It should also support the latest language features as well
as implementation-specific features. For this reason, the pickle
module knows about several protocols (currently numbered from 0 to 3),
each of which appeared in a different Python version. Using a
low-numbered protocol version allows to exchange data with old Python
versions, while using a high-numbered protocol allows access to newer
features and sometimes more efficient resource use (both CPU time
required for (de)serializing, and disk size / network bandwidth
required for data transfer). |
PEP 8000 – Python Language Governance Proposal Overview
Author:
Barry Warsaw <barry at python.org>
Status:
Final
Type:
Informational
Topic:
Governance
Created:
24-Aug-2018
Table of Contents
Abstract
Copyright
Abstract
This PEP provides an overview of the selection process for a new model of
Python language governance in the wake of Guido’s retirement.
Once the governance model is selected, it will be codified in PEP 13.
Here is a list of PEPs related to the governance model selection process.
PEPs in the lower 8000s describe the general process for selecting a
governance model.
PEP 8001 - Python Governance Voting Process
This PEP describes how the vote for the new governance model will be
conducted. It outlines the voting method, timeline, criteria for
participation, and explicit list of eligible voters.
PEP 8002 - Open Source Governance Survey
Surveys will be conducted of governance models for similar open source and
free software projects, and summaries of these models will be outlined in
this PEP. These surveys will serve as useful barometers for how such
projects can be successfully governed, and may serve as inspiration for
Python’s own governance model. Python is unique, so it’s expected that it
will have its own spin on governance, rather than directly adopting any of
those surveyed.
PEPs in the 801Xs describe the actual proposals for Python governance. It is
expected that these PEPs will cover the broad scope of governance, and that
differences in details (such as the size of a governing council) will be
covered in the same PEP, rather than in potentially vote-splitting individual
PEPs.
PEP 8010 - The Technical Leader Governance Model
This PEP proposes a continuation of the singular technical project
leader model. Also within scope is whether an advisory council aids
or supports the BDFL. This PEP names neither the next BDFL nor the
members of such an advisory council. For that, see PEP
13.
PEP 8011 - Python Governance Model Lead by Trio of Pythonistas
This PEP describes a new model of Python governance led by a Trio of Pythonistas
(TOP). It describes the role and responsibilities of the Trio.
This PEP does not name members of the Trio. For that, see PEP 13.
PEP 8012 - The Community Governance Model
This is a placeholder PEP for a new model of Python governance based on
consensus and voting, without the role of a centralized singular leader or a
governing council. It describes how, when, and why votes are conducted for
decisions affecting the Python language. It also describes the criteria for
voting eligibility.
PEP 8013 - The External Governance Model
This PEP describes a new model of Python governance based on an external
council who are responsible for ensuring good process. Elected by the core
development team, this council may reject proposals that are not
sufficiently detailed, do not consider all affected users, or are not
appropriate for the upcoming release. This PEP does not name members of
such a council. For that, see PEP 13.
PEP 8014 - The Commons Governance Model
This PEP describes a new model of Python governance based on a council of
elders who are responsible for ensuring a PEP is supported by a sufficient
majority of the Python community before being accepted. Unlike some of the
other governance PEPs it explicitly does not specify who has voting
rights and what a majority vote consists of. Instead, this is determined
by the council of elders on a case-by-case basis.
PEP 8015 - Organization of the Python community
This PEP formalizes the current organization of the Python community
and proposes 3 main changes: formalize the existing concept of
“Python teams”; give more autonomy to Python teams; replace the BDFL
(Guido van Rossum) with a new “Python board” of 3 members which has
limited roles, mostly decide how a PEP is approved (or rejected).
PEP 8016 - The Steering Council Model
This PEP proposes a model of Python governance based around a
steering council. The council has broad authority, which they seek
to exercise as rarely as possible; instead, they use this power to
establish standard processes, like those proposed in the other
801x-series PEPs. This follows the general philosophy that it’s
better to split up large changes into a series of small changes that
can be reviewed independently: instead of trying to do everything in
one PEP, we focus on providing a minimal-but-solid foundation for
further governance decisions.
Additional governance models may be added before the final selection.
Copyright
This document has been placed in the public domain.
| Final | PEP 8000 – Python Language Governance Proposal Overview | Informational | This PEP provides an overview of the selection process for a new model of
Python language governance in the wake of Guido’s retirement.
Once the governance model is selected, it will be codified in PEP 13. |
PEP 8001 – Python Governance Voting Process
Author:
Brett Cannon <brett at python.org>,
Christian Heimes <christian at python.org>,
Donald Stufft <donald at stufft.io>,
Eric Snow <ericsnowcurrently at gmail.com>,
Gregory P. Smith <greg at krypto.org>,
Łukasz Langa <lukasz at python.org>,
Mariatta <mariatta at python.org>,
Nathaniel J. Smith <njs at pobox.com>,
Pablo Galindo Salgado <pablogsal at python.org>,
Raymond Hettinger <python at rcn.com>,
Tal Einat <tal at python.org>,
Tim Peters <tim.peters at gmail.com>,
Zachary Ware <zach at python.org>
Status:
Final
Type:
Process
Topic:
Governance
Created:
24-Aug-2018
Table of Contents
Abstract
Motivation and Rationale
Implementation
What are we voting for?
Who gets to vote?
When is the vote?
Where is the vote?
Voting mechanics
Questions and Answers
Why the Condorcet method?
Is omitting any candidate PEPs in the ranking allowed?
Why recommend for dormant core developers to not vote?
Why should the vote be private?
Why the use of CIVS?
Why cannot voters change their vote?
Are there any deficiencies in the Condorcet method?
References
Copyright
Abstract
This PEP outlines the process for how the new model of Python governance is
selected, in the wake of Guido’s retirement.
Once the model is chosen by the procedures outlined here, it will be codified
in PEP 13.
Motivation and Rationale
Guido’s stepping down from the BDFL role left us with a meta-problem of
having to choose how we will choose how the Python project should be
governed from now on.
This document presents a concrete proposal how this choice can be made.
It summarizes discussion and conclusions of the proceedings of a working
group at the core sprint in Redmond in September 2018 (names of all
attendees are listed as authors). This PEP also summarizes a
subsequent thread
that took place on discuss.python.org .
The governance situation should be resolved in a timely fashion.
Ideally that should happen by the end of the 2018 which unblocks
substantial improvements to be merged in time for Python 3.8. At the
latest, the governance situation needs to be resolved by PyCon US 2019 to
avoid a PR crisis.
Implementation
What are we voting for?
We are voting to choose which governance PEP should be implemented by
the Python project. The list of candidate PEPs is listed in PEP 8000
and consists of all PEPs numbered in the 801X range.
To ensure the vote is legitimate, the aforementioned PEPs must not be
modified during the voting period.
Who gets to vote?
Every CPython core developer is invited to vote. In the interest of
transparency and fairness, we are asking core developers to self-select
based on whether the governance situation will affect them directly.
In other words, we are recommending for inactive core developers who
intend to remain inactive to abstain from voting.
When is the vote?
November 16th, 2018 to November 30th, 2018 is the official governance
PEP review period. We discourage the PEP authors from making major
substantive changes during this period, although it is expected that
minor tweaks may occur, as the result of this discussion period.
The vote will happen in a 2-week-long window from December 1st, 2018
to December 16th, 2018
(Anywhere on Earth).
Where is the vote?
The vote will happen using a “private” poll on the
Condorcet Internet Voting Service. Every committer
will receive an email with a link allowing them to rank the PEPs in their order of
preference.
The election will be supervised by Ee Durbin, The PSF Director of Infrastructure.
The results of the election, including anonymized ballots, will be made public on
December 17th, after the election has closed.
The following settings will be used for the vote in the CIVS system:
Name of the poll: Python governance vote (December 2018)
Description of the poll:
This is the vote to choose how the CPython project will govern
itself, now that Guido has announced his retirement as BDFL. For
full details, see <a
href="https://peps.python.org/pep-8001/">PEP
8001</a>. Many discussions have occurred under <a
href="https://discuss.python.org/tags/governance">the "governance"
tag</a> on discuss.python.org.
<p>
All votes must be received <b>by the end of December 16th, 2018, <a
href="https://en.wikipedia.org/wiki/Anywhere_on_Earth">Anywhere on
Earth</a></b>. All CPython core developers are <a
href="https://github.com/python/voters">eligible to vote</a>.
It is asked that inactive core developers <i>who intend to remain
inactive</i> abstain from voting.
<p>
<b>Note: You can only vote once, and all votes are final.</b> Once
you click "Submit ranking", it's too late to change your mind.
<p>
All ballots will be published at the end of voting, but <b>without
any names attached</b>. No-one associated with the Python project or
the PSF will know how you voted, or even whether you voted.
<p>
If you have any questions, you can post in <a
href="https://discuss.python.org/c/committers">the Committers
topic</a>, on <a href="mailto:[email protected]">the
python-committers list</a>, or <a
href="mailto:[email protected]">contact the vote administrator
directly</a>.
<p>
<h1>Options</h1>
<p>
We're selecting between seven PEPs, each proposing a different
governance model.
<p>
The options below include links to the text of each PEP, as well
as their complete change history. The text of these PEPs was
frozen on December 1, when the vote started. But if you looked at
the PEPs before that, they might have changed. Please take the
time to check the current text of the PEPs if you read an older
draft.
<p>
A "Further discussion" option is also included. It represents the
option of not making a choice at all at this time, and continuing
the discussion instead. Including this option lets us demonstrate
the core team's readiness to move forward.
<p>
If you think a proposal is a particularly bad idea, you can
express that by ranking it below "Further discussion". If you
think all of the proposals are better than further discussion,
then you should rank "Further discussion" last.
Candidates (note: linebreaks are significant here):
<a href="https://peps.python.org/pep-8010/">PEP 8010: The Technical Leader Governance Model</a> (Warsaw) (<a href="https://github.com/python/peps/commits/main/pep-8010.rst">changelog</a>)
<a href="https://peps.python.org/pep-8011/">PEP 8011: Python Governance Model Lead by Trio of Pythonistas</a> (Mariatta, Warsaw) (<a href="https://github.com/python/peps/commits/main/pep-8011.rst">changelog</a>)
<a href="https://peps.python.org/pep-8012/">PEP 8012: The Community Governance Model</a> (Langa) (<a href="https://github.com/python/peps/commits/main/pep-8012.rst">changelog</a>)
<a href="https://peps.python.org/pep-8013/">PEP 8013: The External Council Governance Model</a> (Dower) (<a href="https://github.com/python/peps/commits/main/pep-8013.rst">changelog</a>)
<a href="https://peps.python.org/pep-8014/">PEP 8014: The Commons Governance Model</a> (Jansen) (<a href="https://github.com/python/peps/commits/main/pep-8014.rst">changelog</a>)
<a href="https://peps.python.org/pep-8015/">PEP 8015: Organization of the Python community</a> (Stinner) (<a href="https://github.com/python/peps/commits/main/pep-8015.rst">changelog</a>)
<a href="https://peps.python.org/pep-8016/">PEP 8016: The Steering Council Model</a> (Smith, Stufft) (<a href="https://github.com/python/peps/commits/main/pep-8016.rst">changelog</a>)
Further discussion
Options:
[x] Private
[ ] Make this a test poll: read all votes from a file.
[ ] Do not release results to all voters.
[x] Enable detailed ballot reporting.
[ ] In detailed ballot report, also reveal the identity of the voter with each ballot.
[ ] Allow voters to write in new choices.
[ ] Present choices on voting page in exactly the given order.
[ ] Allow voters to select “no opinion” for some choices.
[ ] Enforce proportional representation
These options will have the effect of:
Making the election “private”, or in other words, invite only.
The results of the election will be released to all voters.
The contents of every ballot will be released to the public, along
with a detailed report going over how the winner was elected.
The detailed ballots will not include any identifying information
and the email addresses of the voters will be thrown away by the CIVS
system as soon as the email with their voting link has been sent.
Voters will not be able to write in new choices, meaning they will
be limited only to the options specified in the election.
Voters will not have the ability to change their vote after casting
a ballot. [no-changes]
The default ordering for each ballot will be randomized to remove
any influence that the order of the ballot may have on the election.
Voters will have to rank all choices somehow, but may rank multiple
choices as equal.
Voting mechanics
The vote will be by ranked ballot. Every voter
orders all candidate PEPs from the most preferred to the least
preferred. The vote will be tallied and a winner chosen using the
Condorcet method.
Note: each voter can only cast a single vote with no ability to
revise their vote later. [no-changes] If you are not absolutely
sure of your choices, hold off casting your ballot until later in
the voting period. Votes cast on the last day of the election are
just as valid as the ones cast on the first day.
While the CIVS system does not provide an option for a “Pure”
Condorcet election, any Condorcet method will select the “Pure”
Condorcet winner if one exists; the methods only vary when no such
winner exists. The CIVS system differentiates between a Condorcet
winner and a non-Condorcet winner by stating whether the winner was a
Condorcet winner, or whether it merely wasn’t defeated versus any other
option. So a winner in the CIVS system will only be accepted if
it states it was a Condorcet winner.
In the unlikely case of a tie (or cycle as is possible under the
Condorcet method), a new election will be opened, limited to the
options involved in the tie or cycle, to select a new winner from
amongst the tied options. This new election will be open for a
week, and will be repeated until a single winner is determined.
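To make the tallying concrete, here is a minimal sketch in plain Python,
written for this document. It is not the CIVS implementation; the option
names and ballots below are hypothetical, and ranking ties (which CIVS
permits) are not modeled. It shows how ranked ballots reduce to pairwise
contests and how a Condorcet winner, if one exists, is identified:
def condorcet_winner(options, ballots):
    # Each ballot lists options from most to least preferred.
    def prefers(ballot, a, b):
        return ballot.index(a) < ballot.index(b)
    for candidate in options:
        if all(
            sum(prefers(ballot, candidate, other) for ballot in ballots)
            > sum(prefers(ballot, other, candidate) for ballot in ballots)
            for other in options if other != candidate
        ):
            return candidate          # beats every other option head-to-head
    return None                       # tie or cycle: no Condorcet winner

options = ["PEP A", "PEP B", "PEP C", "Further discussion"]
ballots = [
    ["PEP B", "PEP A", "Further discussion", "PEP C"],
    ["PEP B", "PEP C", "PEP A", "Further discussion"],
    ["PEP A", "PEP B", "PEP C", "Further discussion"],
]
print(condorcet_winner(options, ballots))   # -> "PEP B" for this toy data
A result of None corresponds to the tie or cycle case handled by the
run-off procedure described above.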
Questions and Answers
Why the Condorcet method?
It allows voters to express preference by ranking PEPs
It is consensus decision-making
In a poll
open to only core developers and run using Approval voting, it was
the clear preference
Is omitting any candidate PEPs in the ranking allowed?
A vote which omits candidates in the ranking is invalid. This is
because such votes are incompatible with the desired properties listed
above, namely:
Making voters consider alternatives, as well as
Doing everything possible to reach a conclusion in a single election.
Why recommend that dormant core developers not vote?
The choice of the governance model will have far reaching and long-term
consequences for Python and its community. We are inviting core
developers to assess their skin in the game.
Note: this is not an edict and will not be policed. We trust all
members of the core team to act in the best interest of Python.
Why should the vote be private?
When discussing the election system, a number of core developers expressed
concerns with the idea of having public ballots, with at least one core
developer stating that they were planning on abstaining from voting
altogether due to the use of a public ballot. A poll run on Discourse
identified that the overwhelming majority of voters prefer private ballots.
[private-vote]
A secret ballot is considered by many to be a requirement for a free and
fair election, allowing members to vote their true preferences without
worry about social pressure or possible fallout for how they may have
voted.
Why the use of CIVS?
In the resulting discussion of this PEP, it was determined that core
developers wished to have a secret ballot. [private-vote] Unfortunately
a secret ballot requires either novel cryptography or a trusted party to
anonymize the ballots. Since no suitable cryptographic system for
Condorcet ballots is known to exist, the CIVS system was chosen to
act as a trusted party.
More information about the security and privacy afforded by CIVS, including
how a malicious voter, election supervisor, or CIVS administrator can
influence the election, can be found
here.
Why cannot voters change their vote?
CIVS does not allow voters to update their vote, as part of its goal
to prevent the election supervisor from being able to influence the
votes.
Are there any deficiencies in the Condorcet method?
There is no perfect voting method. It has been shown by the
Gibbard-Satterthwaite theorem
that any single-winner ranked voting method which is not dictatorial
must be susceptible to so-called “tactical voting”. This can lead to
people not voting as they truly believe in order to influence the
outcome.
The Condorcet method also has the possibility of having cycles (known as
the Condorcet paradox).
Because the Condorcet method chooses a winner based on whether
they would win against the other options in a 1-on-1 race, there is a
possibility that PEP A > PEP B > PEP C > PEP A (or, in terms of the game
rock-paper-scissors, imagine a three-player game where someone played rock,
another played paper, and the last person played scissors; no one wins that
game as everyone is defeated by someone). For one analyzed set of real-world
elections with 21 voters or more, a cycle occurred
less than 1.5% of the time.
References
[no-changes] (1, 2)
https://discuss.python.org/t/pep-8001-public-or-private-ballots/374/20
[private-vote] (1, 2)
https://discuss.python.org/t/pep-8001-public-or-private-ballots/374/4
Copyright
This document has been placed in the public domain.
| Final | PEP 8001 – Python Governance Voting Process | Process | This PEP outlines the process for how the new model of Python governance is
selected, in the wake of Guido’s retirement.
Once the model is chosen by the procedures outlined here, it will be codified
in PEP 13. |
PEP 8002 – Open Source Governance Survey
Author:
Barry Warsaw <barry at python.org>, Łukasz Langa <lukasz at python.org>,
Antoine Pitrou <solipsis at pitrou.net>, Doug Hellmann <doug at doughellmann.com>,
Carol Willing <willingc at gmail.com>
Status:
Final
Type:
Informational
Topic:
Governance
Created:
24-Aug-2018
Table of Contents
Abstract
Rationale
Project choice
Rust
Key people and their functions
Regular decision process
Controversial decision process
Planning a new release
Changes in the process over time
OpenStack
Key people and their functions
Regular decision process
Controversial decision process
Planning a new release
Changes in the process over time
Jupyter
Key people and their functions
Regular decision process
Controversial decision process
Voting
Planning releases
Changes in the process over time
Django
Key people and their functions
Regular decision process
Controversial decision process
Differences between DEPs and PEPs
Planning a new release
Changes in the process over time
TypeScript
Key people and their functions
Regular decision process
Controversial decision process
Planning a new release
Changes in the process over time
Astropy
Key people and their functions
Regular decision process
Code-level decisions
Non-code decisions
Voting
Controversial decision process
Ethical issues
Planning a new release
Changes in the process over time
Self-appreciation
References
Bonus: Microsoft
Key people and their functions
Regular decision process
Controversial decision process
Planning a new release
Acknowledgements
Annex 1: Template questions
Copyright
Abstract
This PEP surveys existing and similar open source and free software projects
for their governance models, providing summaries that will serve as useful
references for Python’s own selection of a new governance model in the wake of
Guido’s retirement.
Rather than an individual PEP for each of these community surveys, they will
all be collected here in this PEP.
Rationale
CPython is not the first open source project to undergo a governance crisis.
Other projects have experimented with various governance options, sometimes several
times during their existence. There are useful lessons to take away from their
experience, which will help inform our own decision.
Project choice
There are many open source projects out there, but it will be most fruitful
to survey those which are similar enough to CPython on a couple of key metrics:
the number of contributors and their activity (there are scaling issues that
don’t make the governance models of very small projects very enlightening
for our purposes);
being mostly or partly community-driven (company-driven projects can afford
different governance options, since the company hierarchy has power over
the main participants);
being faced with important design decisions that require a somewhat formal
decision process.
Rust
The governance structure is documented in Rust RFC #1068.
The effective governance process grows organically over time without being entirely
codified as RFCs, especially in the case of day-to-day operational details. One example is
the formation of Domain Working Groups in
February 2018.
Key people and their functions
In the Rust project there are teams responsible for certain areas. For language features
there is a “lang team”, for tooling there’s “dev tools” and “Cargo”, and so on.
Contentious issues have facilitators to drive discussion who often aren’t the decision
makers. Typically the facilitators are authors of the proposed changes (see
“Controversial decision process” below). They ensure all key decision makers are
involved along with interested community members. They push towards an agreeable
outcome via iteration.
In practice this means decisions are rarely escalated to the core team.
The most common role of a contributor is team membership. Issue triage/code review
privileges without team membership are rare. Contributors have full commit access;
code ownership separation is based on trust. Writing directly to the compiler repository is
frowned upon; all changes go through pull requests and get merged by an integration
bot after they have been reviewed and approved.
New team members are added by nomination by an existing team member.
Regular decision process
Primary work happens via GitHub issues and pull requests. Approving a pull request
by any team member allows it to be merged without further process. All merged pull
requests end up in the next stable version of Rust.
Notifying relevant people by mentions is important. Listening to the firehose of
e-mails for all GitHub activity is not popular.
There are planning and triage meetings open to the public happening on IRC and Discord.
They are not very popular because most of the work happens through GitHub. Discussions also
happen on official Rust forums (https://users.rust-lang.org/ and
https://internals.rust-lang.org/). There is a dedicated moderation team responsible for
taking notes and enforcing code of conduct.
Controversial decision process
Larger or controversial work goes through an RFC process. It allows everyone to express their thoughts and
iterate towards a resolution. At some point, when all blocking concerns of relevant
team members are addressed, they sign off on the RFC and it reaches a “final comment
period”. That does not require consensus amongst all participants, rather there should
not be a strong consensus against the proposal.
After 10 days the RFC is merged unless any new blocking concerns are raised by team
members. A “merge” signifies that work towards implementing the feature and integrating
it can now happen without interruption. An RFC doesn’t have to have a reference
implementation for it to be accepted.
The other possible results of the “final comment period” are to:
postpone the RFC (similar to the Deferred status in PEPs),
get it back into discussion if blocking concerns can be addressed, or
close it if blocking concerns are not solvable. When an RFC is marked as
closed, there is a 7-day grace period to debate whether it should be closed.
In practice registering concerns with an RFC happens very often initially but rarely
causes the RFC to be entirely killed.
This process scales well for small-contention changes and/or smaller changes. For the
largest controversial changes the discussion gets unwieldy. This is a topic currently
(as of August 2018) on the minds of the Rust team (see:
“Listening and Trust, part 1”,
“Listening and Trust, part 2”,
“Listening and Trust, part 3”,
“Proposal for a staged RFC process”).
Planning a new release
Every six weeks the Rust compiler is released with whatever it contained at the time.
There are no LTS channels or releases yet, but this concept is planned to help
redistributors keep up with development better.
Every few years a so-called “Edition” is released.
Those are milestone releases with full sets of updated documentation and tooling. They
can be backwards incompatible with previous editions. External packages opt into
breaking changes in their crate metadata. The Rust compiler supports all editions that
existed prior to its release. Linking between crates of any supported edition is
possible.
Changes in the process over time
The Rust programming language was started by Graydon Hoare who developed it as
a personal project for a few years. When Mozilla started sponsoring the project,
the team slowly grew with Graydon as a BDFL-style figure. He left the project
in 2013. Rust has functioned without a BDFL since. The RFC process was put in place later.
Initially some design discussions happened during closed-door weekly video meetings,
a practice that was shut down
in May 2015 (before the 1.0 release of Rust) and organically replaced with open discussion
and direct influence of teams.
The number of teams is growing over time. The number of technical decisions made by the
core team is decreasing; instead, those get delegated to the respective teams.
The concept of a “final comment period” was introduced to encourage more public
discussion and enable reacting to a change about to be made, instead of having to
revert a rushed decision that was already made.
OpenStack
The OpenStack Foundation Bylaws lay out the basic structure for
project governance, with Article IV
delegating day-to-day management of the open source project to the
OpenStack Technical Committee (TC), and The TC member policy
defining broadly how the Technical Committee shall be elected. The TC
publishes a set of more detailed governance documents, including the TC charter, which
describes the team structure, precise rules for establishing
eligibility to run for office, and criteria for establishing the
various electorates.
Key people and their functions
The OpenStack community is made up of many distinct project teams,
responsible for producing different components of the software (block
storage management, compute management, etc.) or managing different
parts of the processes the community follows (such as tracking the
release schedule). Each team is led by a Project Team Lead (PTL),
elected by the Active Project Contributors for that project.
Active Project Contributors (APCs) are recent contributors to a given
project team. APC status formally requires two things: becoming an
individual member of the OpenStack Foundation (membership is free) and
having a change merged within the last year (two development cycles)
in a repository managed by a project team.
The elected PTL serves a term equal to one development cycle (roughly
6 months). There is no restriction on the number of consecutive terms
a person may serve as PTL, and it is common for someone to serve for
several terms in a row. It is also not unusual for a team to have only
one candidate volunteer to serve as PTL for a given cycle, in which
case there is no need for an election.
The PTL represents the team in all cases except where they have
explicitly delegated some responsibility. For example, many teams
designate a separate release liaison to manage the release process
for a development cycle. The PTL also serves as a final decision
maker in cases where consensus cannot be reached between the team
members.
While the APCs all vote for the PTL of a team, in many other cases
only the core reviewer team will be consulted on policy decisions
for the team. Anyone may review any patch for any OpenStack
project. After someone demonstrates that they have a good grasp of the
technical issues of a project, that they provide useful feedback on
reviews, and that they understand the direction the project is going,
they may be invited to become a member of the core review team. Unlike
in many other communities, this status does not grant them the right
to submit code without having it reviewed. Rather, it asks them to
commit to reviewing code written by other contributors, and to
participate in team decision-making discussions. Asking someone to
become a member of the core review team is a strong indication of
trust.
The Technical Committee (TC) is responsible for managing the
development of OpenStack as a whole. The 13 members of the Technical
Committee are directly elected by APCs from all project teams. Each
member serves a term of two development cycles (roughly 1 year), with
the elections split so that only about half of the members’ terms
expire at any time, to ensure continuity. The TC establishes overall
policies, such as the criteria for adding new project teams, the
deprecation policy for Python 2, testing requirements, etc.
Regular decision process
All elections for PTL or TC members use https://civs.cs.cornell.edu to
run a Condorcet election. This system was selected because it
emphasizes consensus candidates over strict popularity.
The OpenStack contributor community relies on 3 primary tools for
discussion: the openstack-dev mailing list,
a gerrit code review instance at https://review.openstack.org, and a
set of OpenStack-specific IRC channels on Freenode. There are a few teams
whose contributors are based primarily in China, and they have trouble
accessing IRC. Those teams tend to use alternative platforms such as
WeChat, instead.
The tool used for discussing any given decision will vary based on its
weight and impact. Everyone is encouraged to use either the mailing
list or gerrit to support asynchronous discussion across a wider range
of timezones and firewalls, especially for publicizing final
decisions for the rest of the community.
Policy decisions limited to a single team are usually made by the core
review team for a project, and the policies and decision processes may
vary between teams. Some groups write down their team policies in
their documentation repository, and use the code review tool (gerrit)
to vote on them. Some teams discuss policies on IRC, either ad hoc or
during a regularly scheduled meeting, and make decisions there. Some
teams use the mailing list for those discussions. The PTL for the team
is responsible for ensuring the discussion is managed and the outcome
is communicated (either by doing so directly or ensuring that the task
is delegated to someone else).
All team policy decisions need to be compatible with the overall
policies set by the Technical Committee. Because the TC tends to make
broader governance decisions that apply to the entire contributor
community, the process for discussing and voting on those decisions is
described more formally, including specifying the number of votes
needed to pass and the minimum length of time required for
discussion. For example, most motions require 1/3 of the members (5)
to pass and must stay open at least 3 days after receiving sufficient
votes to pass, ensuring that there is time for dissent to be
registered. See the Technical Committee Charter
and house rules
for more details.
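As a rough illustration of the arithmetic behind the house rules quoted
above (a sketch only, assuming that “1/3 of the members” rounds up, which
matches the figure of 5 for a 13-member TC; the authoritative rules are in
the charter and house rules documents linked above):
import math

TC_SIZE = 13
votes_needed = math.ceil(TC_SIZE / 3)   # 1/3 of 13 members -> 5 votes
MIN_DAYS_OPEN = 3                       # after sufficient votes are received

def motion_passes(votes_in_favor, days_open_after_threshold):
    # A typical motion needs enough votes and must then stay open long
    # enough for dissent to be registered.
    return (votes_in_favor >= votes_needed
            and days_open_after_threshold >= MIN_DAYS_OPEN)

print(votes_needed)                  # 5
print(motion_passes(5, 3))           # True
print(motion_passes(6, 1))           # False: not open long enough yet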
Significant design decisions are usually discussed by reviewing a
specification document, somewhat
similar to a PEP, that covers the requirements, alternatives, and
implementation details. Feedback is solicited from all contributors,
and then specifications are eventually approved or rejected by members
of the core review team for a project. Some teams require only 2
reviewers to approve a design, while other teams require a stronger
indication of consensus before a design is approved. Each team sets a
deadline for approving specifications within each development cycle, to encourage
contributors to work out designs for significant new features early
and avoid risk from changes late in the cycle.
Smaller technical decisions are typically made by reviewing the
patch(es) needed to implement the change. Anyone may review any patch
and provide technical feedback, but ultimately two core reviewers for
a team are needed to approve most changes (exceptions are often made
for trivial changes such as typos or for fixes that unblock the CI
gating system).
Controversial decision process
Controversial, or merely complicated, decisions frequently expand
outside of specification reviews to mailing list discussions. They
often also result in discussions at one of the regularly scheduled
in-person community gatherings. Because many members of the community
cannot attend these events, the discussions are summarized and final
decisions are made using on-line tools as much as possible.
The PTL is responsible for deciding when consensus has been reached
for decisions that affect a single team, and to make a final call in
rare cases where consensus has not been reached and a decision
absolutely needs to be made. The TC acts as a similar decision-making
group of last resort for cases where issues between teams cannot be
resolved in another way. Such escalation of decision-making ends up
being rarely necessary, because the contributors directly involved
generally prefer to come to a consensual agreement rather than
escalate the decision to others.
Planning a new release
OpenStack has a major release about every 6 months. These are
coordinated date-based releases, which include the work finished up to
that point in time in all of the member projects. Some project teams
release more often than every 6 months (this is especially true for
teams building libraries consumed by other teams). Those smaller
releases tend to be produced when there is content (new features or
bug fixes) to justify them.
The schedule for each development cycle, with deadlines and a final
release date, is proposed by the release management team, in
coordination with the Foundation staff (releases are generally aligned
with the calendar of in-person events), and then the community has an
opportunity to provide feedback before the final dates are set.
Decisions about priorities for each development cycle are made at the
team level and the TC level. Core review teams prioritize internal
work, such as fixing bugs and implementing new features. The TC
selects community goals, which
usually require some amount of work from all teams. Agreeing to these
priorities at the start of each cycle helps the teams coordinate their
work, which is especially important because the implementation will
require reviews from multiple team members.
Changes in the process over time
Over the last 8 years the number of OpenStack project teams has grown
from 2 to 63. The makeup of the Technical Committee has changed to
accommodate that growth. Originally the TC was made up of PTLs, but as
the membership grew it became impractical for the group to function
effectively.
The community also used to be organized around “program areas” rather
than project teams. A program area covered a feature set, such as
gathering telemetry or managing block storage. This organization
failed when multiple teams of people wanted to work on the same
feature set using different solutions. Organizing teams around the
code they deliver allows different teams to have different
interpretations of the same requirements. For example, there are now
several teams working on different deployment tools.
Jupyter
The governance structure is documented in the Main Governance Document
within the Jupyter Governance repo.
The effective governance process grows organically over time as the needs of
the project evolve. Formal changes to the Governance Document are submitted via
Pull Request, with an open period for comments. After the open period, a
Steering Council may call for a vote to ratify the PR changes. Acceptance
requires a minimum of 80% of the Steering Council to vote and at least 2/3 of
the vote must be positive. The BDFL can act alone to accept or reject changes
or override the Steering Council decision, though this would be an extremely
rare event.
Key people and their functions
The key people in Jupyter’s Governance are the BDFL, Fernando Perez, and the
Steering Council. Contributors can be given a special status of core contributor.
Some may also be Institutional Contributors, who are individuals who contribute
to the project as part of their official duties at an Institutional Partner.
Fernando Perez, the project founder, is the current and first BDFL. The BDFL
may serve as long as desired. The BDFL succession plan
is described in the Main Governance Document. In summary, the BDFL may appoint
the next BDFL. As a courtesy, it is expected that the BDFL will consult with the
Steering Council. In the event that the BDFL cannot appoint a successor, the
Steering Council will recommend one.
Core contributors are individuals who are given rights, such as commit privileges,
to act in the best interest of the project within their area of expertise or
subproject.
An existing core contributor typically recommends someone be given
core contributor rights by gathering consensus from project leads, who are
experienced core contributors as listed in the README of the project repo.
To be recommended and invited as a Steering Council member, an individual must
be a Project Contributor who has produced contributions that are substantial in
quality and quantity, and sustained over at least one year. Potential Council
Members are nominated by existing Council members and voted upon by the
existing Council after asking if the potential Member is interested and willing
to serve in that capacity.
Regular decision process
Project Jupyter is made up of a number of GitHub organizations and subprojects
within those organizations. Primary work happens via GitHub issues and pull
requests. Approving a pull request by any team member allows it to be merged
without further process. All merged pull requests end up in the next stable
release of a subproject.
There is a weekly, public Project-wide meeting that is recorded and posted on
YouTube. Some larger GitHub organizations, which are subprojects of
Project Jupyter, e.g. JupyterLab and JupyterHub, may
have additional public team meetings on a weekly or monthly schedule.
Discussions occur on Gitter, the Jupyter mailing list, and most frequently an
open issue and/or pull request on GitHub.
Controversial decision process
The foundations of Project Jupyter’s governance are:
Openness & Transparency
Active Contribution
Institutional Neutrality
During the everyday project activities, Steering Council members participate in
all discussions, code review and other project activities as peers with all
other Contributors and the Community. In these everyday activities,
Council Members do not have any special power or privilege through their
membership on the Council. However, it is expected that, because of the quality
and quantity of their contributions and their expert knowledge of the
Project Software and Services, Council Members will provide useful guidance,
both technical and in terms of project direction, to potentially less
experienced contributors.
For controversial issues, the contributor community works together to refine
potential solutions, iterate as necessary, and build consensus by sharing
information and views constructively and openly. The Steering Council may
make decisions when regular community discussion doesn’t produce consensus
on an issue in a reasonable time frame.
Voting
Rarely, if ever, is voting done for technical decisions.
For other Project issues, the Steering Council may call for a vote for a
decision via a Governance PR or email proposal. Acceptance
requires a minimum of 80% of the Steering Council to vote and at least 2/3 of
the vote must be positive.
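A small sketch of the acceptance thresholds described above (illustrative
only, not Jupyter project code; the council size used below is hypothetical,
and “2/3 of the vote” is read here as two thirds of the votes cast):
def proposal_accepted(council_size, votes_cast, positive_votes):
    # At least 80% of the Steering Council must vote, and at least
    # two thirds of the votes cast must be positive.
    quorum_met = votes_cast >= 0.8 * council_size
    supermajority = positive_votes >= (2 / 3) * votes_cast
    return quorum_met and supermajority

print(proposal_accepted(council_size=15, votes_cast=12, positive_votes=8))    # True
print(proposal_accepted(council_size=15, votes_cast=11, positive_votes=11))   # False: quorum not met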
The BDFL can act alone to accept or reject changes or override the Steering
Council decision, though this would be an extremely rare event. Being Benevolent,
the BDFL in practice chooses to defer that authority to the consensus of the
community discussion channels and the Steering Council.
Planning releases
Since Project Jupyter has a number of projects, not just a single project, the
release planning is largely driven by the core contributors of a project.
Changes in the process over time
The process has remained consistent over time, and the approach has served us
well. Moving forward, the Project leadership will consist of a BDFL and
Steering Council. This governance model was a formalization of what
the Project was doing (prior to 2015 when the Main Governance Document was
adopted by the Steering Council), rather than a change in direction.
Django
The governance structure is documented in Organization of the Django Project.
Key people and their functions
The project recognizes three kinds of contributors: members of the
core team, the Technical Board, and Fellows. Regular core committers
no longer exercise their “commit bit”; instead they rely on pull
requests being reviewed and accepted. The Technical Board steers
technical choices. Fellows are hired contractors who triage new
tickets, review and merge patches from the committers and community,
including non-trivial ones.
Core team members are added by nomination and vote within the core
team, with Technical Board veto (so far not exercised). The Technical
Board is elected by and from the core team membership every 18 months
(every major Django release). Sub-teams within the core team are
self-selected by interest.
Regular decision process
Most day-to-day decisions are made by Fellows and sometimes other active
core team members.
The core team votes on new members, which requires a 4/5 majority of
votes cast, with no quorum requirement. The Technical Board has veto power.
This power has never been exercised.
Controversial decision process
The Technical Board occasionally approves Django
Enhancement Proposals (DEPs) but those are rare. The DEP process is
roughly modeled after PEPs and documented in DEP 1.
DEPs are mostly used to design major new features, but also for
information on general guidelines and process.
An idea for a DEP should be first publicly vetted on the
django-developers mailing list. After it has been roughly validated, the
author forms a team with three roles:
authors who write the DEP and steer the discussion;
implementers who prepare the implementation of the DEP;
a shepherd who is a core developer and will be the primary reviewer
of the DEP.
The DEP’s draft is submitted, assigned a number, and discussed. Authors
collect feedback and steer discussion as they see fit. Suggested venues
to avoid endless open-ended discussions are: separate mailing lists,
Wiki pages, working off of pull requests on the DEP.
Once the feedback round is over, the shepherd asks the Technical Board
for review and pronouncement. The Board can rule on a DEP as a team or
designate one member to review and decide.
In any case where consensus can’t be reached, the Technical Board has
final say. This has never been exercised.
Differences between DEPs and PEPs
The main difference is that the entire workflow is based on pull
requests rather than e-mail. They are pronounced upon by the Technical
Board. They need to have the key roles identified before submission
and throughout the process. The shepherd role exists to guide a DEP
to completion without engaging the Technical Board.
Those changes to the process make it more distributed and workable in
a governance model without a BDFL.
Planning a new release
Releases are done on a fixed time-based schedule, with a major version
every 18 months. With paid Fellows to ensure the necessary work gets
done, on-time releases are routine.
Changes in the process over time
Django originally had two BDFLs: Jacob Kaplan-Moss and Adrian Holovaty.
They retired (Adrian’s post, Jacob’s post)
9 years into the project’s history. Following their stepping down,
the DEP process was defined.
TypeScript
The governance structure is not externally documented besides the
CONTRIBUTING.md
document in the main TypeScript repository.
Key people and their functions
There is a formal design team and a release management team working at
Microsoft. The main person behind the project is currently Anders
Hejlsberg as some of the original members of the team have left the
company.
Regular decision process
Microsoft, where the project is developed, has a strong planning culture,
so development roadmaps are released long in advance, notes from
design discussions held at Microsoft get published quickly, and meetings
are sometimes broadcast using Skype.
External contributions are encouraged through pull requests on GitHub.
Suggestions for new use cases or features are given by issues on GitHub.
This serves as an ad-hoc, PEP-like process. There is some discussion
over social media (Twitter) as well.
Controversial decision process
Hejlsberg is the central figure of the project in terms of language
design, synthesizing community needs into a cohesive whole. There is
no formal process to externally contribute to the design of the
language.
The TypeScript team filters through and integrates community
suggestions. The main advantages of this setup are that there is
strong and consistent design with dependable scheduling and
execution. While there is transparency of intentions and plans, the
disadvantage of this model is that community involvement is limited
to pull requests and suggestions.
Planning a new release
Microsoft determines the release schedule and communicates dates and
features well in advance. Nightly builds are usually stable (with
a significant portion of users on this release form).
Versioned releases are done every 1 - 3 months, with a roadmap available
on GitHub.
Changes in the process over time
TypeScript is likely the first notable project by Microsoft developed
fully in the open (versus source-available).
Open-sourcing of TypeScript by Microsoft was a planned feature from the
inception of the project. Before the first open release was made, the
language was driven fully by needs identified by the original teams and
the early in-house users. The initial open-sourcing happened via
the now-defunct Microsoft CodePlex platform. It didn’t have
a well-defined routine of accepting external contributions. Community
engagement rose significantly after the project got moved.
Astropy
Key people and their functions
The Astropy Project team’s responsibilities are spread over many different
roles [1], though frequently a person will have several roles.
The main body overseeing the Astropy Project is the Astropy
Coordination Committee (CoCo). Its key roles are dealing with any
financial issues, approving new packages wanting to join the Astropy
Project, approving or rejecting Astropy Proposals for Enhancement
(APEs) [2], and generally anything that’s “leadership”-oriented
or time-sensitive. As of this writing, the committee has four members,
and might grow or shrink as the demands on the committee change.
Regular decision process
Code-level decisions
The Astropy Project includes the core Astropy package and other
affiliated packages. For the sake of simplicity, we will avoid
discussing affiliated packages, which can have their own rules.
Therefore, everything below will concern the core Astropy package.
The core Astropy package is organized as sub-packages. Each sub-package
has an official maintainer as well as one or more deputies, who are
responsible for ensuring code is reviewed and generally architecting the
subpackage. Code-level decisions are therefore made in GitHub issues or
pull requests (PRs), usually on the basis of consensus, moderated by the
maintainer and deputies of that sub-package.
When there is specific disagreement, majority vote of those who are involved
in the discussion (e.g. PR) determines the winner, with the CoCo called on
to break ties or mediate disagreements.
Non-code decisions
Non-code decisions (like sprint scheduling, bugfix release timing, etc)
are usually announced on the astropy-dev mailing list [3] with
a vote-by-message format, or a “if there are no objections”-style message
for highly uncontroversial items. In general, on astropy-dev the expectation
is a concrete proposal which other members are welcome to comment or vote on.
Voting
Voting usually involves either using the +1/-1 format on GitHub or the
astropy-dev mailing list. There, any interested person can vote regardless
of their official role in the project, or lack thereof. Furthermore, there
is no veto mechanism for the CoCo to override decisions of the majority.
Controversial decision process
Simpler controversial decisions are generally discussed on the astropy-dev
mailing list [3], and after a reasonable time either there is
a clear consensus/compromise (this happens most of the time), or the CoCo
makes a decision to avoid stalling.
More complicated decisions follow the APE process, which is modeled after the
PEP process. Here, the CoCo makes the final decision after a discussion
period, open to everyone, has passed. Generally the CoCo would follow the
consensus or majority will.
Ethical issues
The Project has an Ombudsperson who ensures there is an alternate contact
for sensitive issues, such as Code of Conduct violations, independent
from the Coordination Committee. In practice, the CoCo, the Community
engagement coordinators and the Ombudsperson would work together privately
to try and communicate with the violator to address the situation.
Planning a new release
The major release timing is on a fixed schedule (every 6 months); whatever
is in at that time goes in.
Changes in the process over time
The CoCo and the “Open Development” ethos came from the inception of the
Project after a series of votes by interested Python-oriented astronomers
and allied software engineers. The core results of that initial discussion
were embodied in the Vision for Astropy document [4].
The existence of the formal roles and most of the rest of the above
came as evolutionary steps as the community grew larger, each following
either the APE process, or the regular process of a proposal being brought
up for discussion and vote in astropy-dev [3]. In general, all
evolved as a sort of ratification of already-existing practices, only after
they were first tested in the wild.
Self-appreciation
The fact that anyone who has the time can step in and suggest something
(usually via PR) or vote on their preference, leads to a sense that
“we are all in this together”, leading to better-coordinated effort.
Additionally, the function of the CoCo as mostly a tie-breaking body means
that there’s no sense of a dictator who enforces their will, while still
giving clear points of contact for external organizations that are
leery of fully-democratic governance models.
References
[1]
Astropy roles and responsibilities
https://www.astropy.org/team.html
[2]
Astropy Proposals for Enhancement
https://github.com/astropy/astropy-APEs
[3] (1, 2, 3)
Astropy-dev mailing list
https://groups.google.com/forum/#!forum/astropy-dev
[4]
Vision for a Common Astronomy Python Package
https://docs.astropy.org/en/stable/development/vision.html
Bonus: Microsoft
Despite the selection process for “relevant projects” described above,
it is worthwhile considering how companies that are held financially
accountable for their decisions go about making them. This is not
intended as a readily-usable model for Python, but as additional insight
that may influence the final design or selection.
This section is not taken from any official documentation, but has been
abstracted by Steve Dower, a current Microsoft employee, to reflect the
processes that are most applicable to individual projects in the
engineering departments. Role titles are used (and defined) rather than
identifying specific individuals, and all names are examples and should
not be taken as a precise description of the company at any particular
time in history.
This is also highly simplified and idealised. There are plenty of
unhealthy teams that do not look like this description, and those
typically have high attrition (people leave the team more frequently
than other teams). Teams that retain their people are usually closer to
the model described here, but ultimately everything involving humans is
imperfect and Microsoft is no exception.
Key people and their functions
Microsoft has a hierarchy that ultimately reports to the CEO. Below the
CEO are a number of organisations, some of which are focused on
engineering projects (as opposed to sales, marketing or other functions).
These engineering organisations roughly break down into significant
product families - for example, there has been a “Windows group”, an
“Xbox group”, and a “server and tools group”. These are typically led by
Executive Vice Presidents (EVPs), who report to the CEO.
Below each EVP are many Corporate Vice Presidents (CVPs), each of which
is responsible for one or more products. This level is where the hierarchy
becomes relevant for the purposes of this PEP - the CEO and EVPs are
rarely involved in most decision processes, but set the direction under
which CVPs make decisions.
Each product under a CVP has a team consisting of Program Managers
(PMs) and Engineering Managers. Engineering Managers have teams of
engineers who are largely uninvolved in decision making, though may be
used as specialists in some cases. For the rest of this section,
Engineering refers to anyone from the engineering team who is
contributing with a technical-focus, and PM refers to anyone from the
program management team contributing with a customer-focus. After
decisions are made, Engineering does the implementation and testing work,
and PM validates with users that their problem has been solved.
(This is actually a huge simplification, to the point where some people
in these roles are offended by this characterisation. In reality, most
people in PM or Engineering do work that crosses the boundary between
the two roles, and so they should be treated as a term describing the
work that somebody is doing in the moment, rather than an identifier or
restriction for a person.)
Teams generally represent a feature, while the CVP represents a product.
For example, Visual Studio Code has a CVP who is ultimately responsible
for decisions about that product and its overall direction (in the context
set by their EVP). But many teams contribute features into Visual Studio
Code.
For complete clarity, the CEO, EVPs, and CVPs do not ever directly
modify source code. Their entire role is to provide direction for
whoever is immediately below them and to adjudicate on controversial
decisions.
Regular decision process
Changes to product code that are not visible to external users are made
solely by Engineering. Individual engineers will be assigned tasks by a
designated engineering manager, or may self-assign. Promotion to
increasingly senior positions generally reflects trust in the
individual’s decision-making ability, and more senior engineers are
trusted to make decisions with less validation from the rest of the team.
Most bugs are covered by this process (that is, fixing a user-visible
problem without changing the intended experience is an Engineering
decision).
Decisions affecting users of a particular feature are made by the PM
team for that feature. They will use whatever data sources are available to
identify an issue, experiment with alternatives, and ultimately prepare
a design document. Senior members from PM and Engineering will review
designs to clarify the details, and ultimately an artifact is created
that the feature team agrees on. Engineering will use this artifact to
implement the work, and PM will later use this artifact to validate that
the original issue has been resolved.
Senior members of Engineering and PM teams for a feature are expected to
make decisions in the spirit of the direction set by their CVP. Teams
have regular meetings with their CVP to discuss recent decisions and
ensure consistency. Decisions that are not obviously in line with CVP
expectations are escalated to the controversial process.
Controversial decision process
When decisions require cross-team coordination, or do not obviously
align with previous CVP guidance, teams will escalate decision making.
These often include decisions that involve changing direction,
attempting to reach a new or different group of users, deprecating and
removing significant features (or on a short timeframe), or changes that
require quick releases.
In general, CVPs are not intimately familiar with all aspects of the
feature team’s work. As a result, the feature team must provide both a
recommendation and sufficient context for the decision that the CVP can
decide without additional knowledge. Most of the time, the first
attempt results in a series of questions from the CVP, which the team
will research and answer before attempting the decision again at a later date.
Common questions asked by CVPs are:
how many users are affected by this decision?
what is the plan for minimizing impact on current users?
how will the change be “sold”/described to potential users?
CVPs are expected to have a strong understanding of the entire field, so
that they can evaluate some questions for themselves, such as:
what similar decisions have been made by other projects within Microsoft?
what other projects have plans that may impact this decision?
what similar decisions have been made by projects outside Microsoft?
do users need it?
is it in line with the direction set by their EVP?
Decisions made by CVPs are generally arbitrary and final, though they
typically will provide their rationale.
Planning a new release
Releases involve coordinating a number of feature teams, and so rarely
attempt to include input from all teams. A schedule will be determined
based on broader ecosystem needs, such as planned events/conferences or
opportunities to take advantage of media attention.
Teams are informed of the release date and the theme of the release, and
make their own plans around it following the above decision making
process. Changing the release date is considered a controversial
decision.
Acknowledgements
Thank you to Alex Crichton from the Rust team for an extensive explanation of how the
core team governs the project.
Jeremy Stanley, Chris Dent, Julia Kreger, Sean McGinnis, Emmet Hikory,
and Thierry Carrez contributed to the OpenStack section.
The Project Jupyter Steering Council created the Main Governance Document for
Project Jupyter, and Carol Willing summarized the key points of that document
for the Jupyter section.
Thank you to Carl Meyer from the Django team for explaining how their
project’s governance is set up.
The TypeScript and Swift sections were created after conversations with
Joe Pamer and Vlad Matveev. Thanks!
Answers about the Astropy project were kindly contributed, in significant
detail, by Erik Tollerud and reviewed by other members of the project.
Annex 1: Template questions
The following set of questions was used as a template to guide evaluation and
interaction with the surveyed projects:
Do you have any open documentation on how the governance model is set up?
What does the process look like in practice?
Who are the key people?
What “special statuses” can contributors have?
How are they elected/how are the statuses assigned?
How are regular decisions made?
How are controversial decisions made?
Is there a voting mechanism? how does it work? how often do votes actually happen?
Is there a veto mechanism? how often was it actually used?
How do you like the process?
Which parts work well?
Which parts could work better?
When it doesn’t work well, what does it look like?
What would you change if it were only up to you?
Related project work:
How do you decide when a release happens and what goes into it?
How do you decide who gets commit access?
Where do you hold discussions? (GitHub, mailing lists, face-to-face meetings, and so on)
Do you have a RFC/PEP-like process?
Who has access to those discussion channels?
How is this access granted/revoked?
Who moderates those discussions?
Do you censure participants, and if so, how?
Process evolution
How did this process evolve historically?
How can it be changed in the future?
Copyright
This document has been placed in the public domain.
| Final | PEP 8002 – Open Source Governance Survey | Informational | This PEP surveys existing and similar open source and free software projects
for their governance models, providing summaries that will serve as useful
references for Python’s own selection of a new governance model in the wake of
Guido’s retirement. |
PEP 8010 – The Technical Leader Governance Model
Author:
Barry Warsaw <barry at python.org>
Status:
Rejected
Type:
Informational
Topic:
Governance
Created:
24-Aug-2018
Table of Contents
Abstract
PEP Rejection
Open discussion points
Why a singular technical leader?
Flexibility
The role of the GUIDO
Authority comes from the community
Length of service and term limits
Choosing a GUIDO
The Council of Pythonistas (CoP)
No confidence votes
Day-to-day operations
PEP considerations
Version History
Copyright
Abstract
This PEP proposes a continuation of the singular technical project
leader model, euphemistically called the Benevolent Dictator For Life (BDFL)
model of Python governance, to be henceforth called in this PEP the
Gracious Umpire Influencing Decisions Officer (GUIDO). This change in
name reflects both the expanded view of the GUIDO as final arbiter for
the Python language decision making process in consultation with the
wider development community, and the recognition that “for life”, while
perhaps aspirational, is not necessarily in the best interest of the
well-being of either the language or the GUIDO themselves.
This PEP describes:
The rationale for maintaining the singular technical leader model
The process for how the GUIDO will be selected, elected, retained,
recalled, and succeeded;
The roles of the GUIDO in the Python language evolution process;
The term length of service;
The relationship of the GUIDO with a Council of Pythonistas (CoP)
that advise the GUIDO on technical matters;
The size, election, and roles of the CoP;
The decision delegation process;
Any changes to the PEP process to fit the new governance model;
This PEP does not name a new BDFL. Should this model be adopted, it
will be codified in PEP 13 along with the names of all officeholders
described in this PEP.
PEP Rejection
PEP 8010 was rejected by a core developer vote
described in PEP 8001 on Monday, December 17, 2018.
PEP 8016 and the governance model it describes were chosen instead.
Open discussion points
Various tweaks to the parameters of this PEP are allowed during the
governance discussion process, such as the exact size of the CoP, term
lengths of service, and voting procedures. These will be codified by
the time the PEP is ready to be voted on.
The voting procedures and events described in this PEP will default to
the voting method specified in PEP 8001, although as that PEP is still
in discussion at the time of this writing, this is subject to change.
It is allowed, and perhaps even expected, that as experience is gained
with this model, these parameters may be tweaked as future GUIDOs are
named, in order to provide for a smoother governing process.
Why a singular technical leader?
Why this model rather than any other? It comes down to “vision”.
Design by committee has many known downsides, leading to a language
that accretes new features based on the varied interests of the
contributors at the time. A famous aphorism is “a camel is a horse
designed by committee”. Can a language that is designed by committee
“hang together”? Does it feel like a coherent, self-consistent
language where the rules make sense and are easily remembered?
A singular technical leader can promote that vision more than a
committee can, whether that committee is small (e.g. 3 or 5 persons)
or spans the entire Python community. Every participant will have
their own vision of what “Python” is, and this can lead to indecision
or illogical choices when those individual visions are in conflict.
Should CPython be 3x faster or should we preserve the C API? That’s a
very difficult question to get consensus on, since neither choice is
right or wrong. But worse than making the wrong decision might be
accepting the status quo because no consensus could be found.
Flexibility
Degrees of flexibility are given to both the GUIDO and CoP by way of
underspecification. This PEP describes how conflicts will be
resolved, but expects all participants, including core developers,
community members, and office holders, to always have the best
interest of Python and its users at heart. The PEP assumes that
mutual respect and the best intentions will always lead to consensus,
and that the Code of Conduct governs all interactions and discussions.
The role of the GUIDO
One of the most important roles of the GUIDO is to provide an
overarching, broad, coherent vision for the evolution of the Python
language, spanning multiple releases. This is especially important
when decisions have lasting impact and competing benefits. For
example, if backward incompatible changes to the C API lead to a 2x
improvement in Python performance, different community members will
likely advocate convincingly on both sides of the debate, and a clear
consensus may not emerge. Either choice is equally valid. In
consultation with the CoP, it will be the GUIDO’s vision that guides
the ultimate decision.
The GUIDO is the ultimate authority for decisions on PEPs and other
issues, including whether any particular change is PEP-worthy. As is
the case today, many (in fact perhaps most) decisions are handled by
discussion and resolution on the issue tracker, merge requests, and
discussion forums, usually with input from or led by experts in the
particular field. Where this operating procedure works perfectly
well, it can continue unchanged. This also helps reduce the workload
on the CoP and GUIDO, leaving only the most important decisions and
broadest view of the landscape to the central authority.
Similarly, should a particular change be deemed to require a PEP, but
the GUIDO, in consultation with the CoP, identifies experts that have
the full confidence to make the final decision, the GUIDO can name a
Delegate for the PEP. While the GUIDO remains the ultimate authority,
it is expected that the GUIDO will not undermine, and in fact will
support the authority of the Delegate as the final arbiter of the PEP.
The GUIDO has full authority to shut down unproductive discussions,
ideas, and proposals, when it is clear that the proposal runs counter
to the long-term vision for Python. This is done with compassion for
the advocates of the change, but with the health and well-being of all
community members in mind. A toxic discussion on a dead-end proposal
does no one any good, and they can be terminated by fiat.
To sum up: the GUIDO has the authority to make a final pronouncement
on any topic, technical or non-technical, except for changes to the
governance PEP itself.
Authority comes from the community
The GUIDO’s authority ultimately resides with the community. A rogue
GUIDO that loses the confidence of the majority of the community can
be recalled and a new vote conducted. This is an exceedingly rare and
unlikely event.  While it is a sufficient stopgap for the worst-case
scenario, it should not be undertaken lightly.  The GUIDO should
not fear being deposed because of one decision, even if that decision
isn’t favored by the majority of Python developers. Recall should be
reserved for actions severely detrimental to the Python language or
community.
The Council of Pythonistas (see below) has the responsibility to
initiate a vote of no-confidence.
Length of service and term limits
The GUIDO shall serve for three Python releases, approximately 4.5
years given the current release cadence. If Python’s release cadence
changes, the length of GUIDO’s term should change to 4.5 years rounded
to whole releases. How the rounding is done is left to the potential
release cadence PEP. After this time, a new election is held
according to the procedures outlined below. There are no term limits,
so the GUIDO may run for re-election for as long as they like.
We expect GUIDOs to serve out their entire term of office, but of
course, Life Happens. Should the GUIDO need to step down before their
term ends, the vacancy will be filled by the process outlined below as
per choosing a new GUIDO. However, the new GUIDO will only serve for
the remainder of the original GUIDO’s term, at which time a new
election is conducted. The GUIDO stepping down may continue to serve
until their replacement is selected.
During the transition period, the CoP (see below) may carry out the
GUIDO’s duties; however, they may also prefer to leave substantive
decisions (such as technical PEP approvals) to the incoming GUIDO.
Choosing a GUIDO
The selection process is triggered whenever a vacancy exists for a new
GUIDO, or when the GUIDO is up for re-election in the normal course of
events. When the selection process is triggered, either by the GUIDO
stepping down, or two months before the end of the GUIDO’s regular
term, a new election process begins.
For three weeks prior to the vote, nominations are open. Candidates
must be chosen from the current list of core Python developers.
Non-core developers are ineligible to serve as the GUIDO. Candidates
may self-nominate, but all nominations must be seconded. Nominations
and seconds are conducted as merge requests on a private repository.
Once they accept their nomination, nominees may post short position
statements using the same private repository, and may also post them
to the committers discussion forum. Maybe we’ll even have debates!
This phase of the election runs for two weeks.
Core developers then have three weeks to vote, using the process
described in PEP 8001.
The Council of Pythonistas (CoP)
Assisting the GUIDO is a small team of elected Python experts who
serve as a technical committee.  They provide insight
and offer discussion of the choices before the GUIDO.  Consultation
can be triggered from either side. For example, if the GUIDO is still
undecided about any particular choice, discussions with the CoP can
help clarify the remaining issues, identify the right questions to
ask, and provide insight into the impact on other users of Python that
the GUIDO may not be as familiar with. The CoP are the GUIDO’s
trusted advisers, and a close working relationship is expected.
The CoP shall consist of 3 members, elected from among the core
developers. Their term runs for 3 years and members may run for
re-election as many times as they want. To ensure continuity, CoP
members are elected on a rotating basis; every year, one CoP member is
up for re-election.
In order to bootstrap the stagger for the initial election, the CoP
member with the most votes shall serve for 3 years, the second most
popular vote getter shall serve for 2 years, and the CoP member with the
fewest votes shall serve initially for 1 year.
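As a non-normative illustration of this bootstrap stagger, the following
minimal Python sketch assigns the initial term lengths from hypothetical
vote tallies; the function name and the example data are assumptions made
for this sketch, and ties are ignored here:
def initial_cop_terms(vote_counts):
    # Rank candidates by votes; most votes serves 3 years, then 2, then 1.
    ranked = sorted(vote_counts, key=vote_counts.get, reverse=True)
    return dict(zip(ranked, (3, 2, 1)))
print(initial_cop_terms({'alice': 42, 'bob': 37, 'carol': 21}))
# -> {'alice': 3, 'bob': 2, 'carol': 1}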
All ties in voting will be broken with a procedure to be determined in
PEP 8001.
The nomination and voting process is similar to that for the GUIDO.  There
is a three-week nomination period, where self-nominations are allowed
and must be seconded, followed by a period of time for posting
position statements, followed by a vote.
By unanimous decision, the CoP may begin a no-confidence vote on the
GUIDO, triggering the procedure in that section.
No confidence votes
As mentioned above, the CoP may, by unanimous decision, initiate a
vote of no-confidence in the GUIDO. This process should not be
undertaken lightly, but once begun, it triggers up to two votes. In
both cases, voting is done by the same procedure as in PEP 8001, and
all core developers may participate in no confidence votes.
The first vote is whether to recall the current GUIDO or not. Should
a super majority of Python developers vote “no confidence”, the GUIDO
is recalled. A second vote is then conducted to select the new GUIDO,
in accordance with the procedures for initial selection of this office
holder. During the time in which there is no GUIDO, major decisions
are put on hold, but normal Python operations may of course continue.
Day-to-day operations
The GUIDO is not needed for all – or even most – decisions. Python
developers already have plenty of opportunity for delegation,
responsibility, and self-direction. The issue tracker and pull
requests serve exactly the same function as they did before this
governance model was chosen. Most discussions of bug fixes and minor
improvements can just happen on these forums, as they always have.
PEP considerations
The GUIDO, members of the CoP, and anyone else in the Python community
may propose a PEP.  The prospective PEP is treated the same regardless
of the author of the PEP.
However, in the case of the GUIDO authoring a PEP, an impartial PEP
Delegate should be selected, and given the authority to accept or
reject the PEP. The GUIDO should recuse themselves from the decision
making process. In the case of controversial PEPs where a clear
consensus does not emerge, ultimate authority on PEPs authored by the
GUIDO rests with the CoP.
The PEP process is further enhanced such that a core developer must
always be chosen as the PEP Shepherd.  This person ensures that proper
procedure is maintained.  The Shepherd must be chosen from among the
core developers. This means that while anyone can author a PEP, all
PEPs must have some level of sponsorship from at least one core
developer.
Version History
Version 2
Renamed to “The Technical Leader Governance Model”
“singular leader” -> “singular technical leader”
The adoption of PEP 8001 voting procedures is tentative until that
PEP is approved
Describe what happens if the GUIDO steps down
Recall votes require a super majority of core devs to succeed
Copyright
This document has been placed in the public domain.
| Rejected | PEP 8010 – The Technical Leader Governance Model | Informational | This PEP proposes a continuation of the singular technical project
leader model, euphemistically called the Benevolent Dictator For Life (BDFL)
model of Python governance, to be henceforth called in this PEP the
Gracious Umpire Influencing Decisions Officer (GUIDO). This change in
name reflects both the expanded view of the GUIDO as final arbiter for
the Python language decision making process in consultation with the
wider development community, and the recognition that “for life” while
perhaps aspirational, is not necessarily in the best interest of the
well-being of either the language or the GUIDO themselves. |
PEP 8011 – Python Governance Model Lead by Trio of Pythonistas
Author:
Mariatta <mariatta at python.org>, Barry Warsaw <barry at python.org>
Status:
Rejected
Type:
Informational
Topic:
Governance
Created:
24-Aug-2018
Table of Contents
Abstract
PEP Rejection
Open discussion points
Roles and responsibilities of the leadership trio
Authority of the trio
What are NOT considered as the role responsibilities of the trio
Guidelines for the formation of the trio
Diversity and inclusivity
Sustainability
Additional guidelines
Why not other governance model
Why not more than three
Roles and responsibilities of Python Core Developers to the trio
Term Limit
Succession planning of the trio (open for discussion)
Scenario if one member of the trio needs to quit
Formation of working groups/area of expertise/ownership (previously BDFL delegate)
Why these workgroups are necessary
Affirmation as being a member of the PSF
Reasoning for choosing the name trio
References
Copyright
Abstract
This PEP proposes a governance model for the Core Python development community,
led by a trio of equally authoritative leaders. The Trio of Pythonistas
(ToP, or simply Trio) is tasked with making final decisions for the language.
It differs from PEP 8010 by specifically not proposing a central singular leader,
but instead a group of three people as the leaders.
This PEP also proposes a formation of specialized workgroups to assist the leadership
trio in making decisions.
This PEP does not name the members of the Trio. Should this model be adopted,
it will be codified in PEP 13 along with the names of all officeholders
described in this PEP.
This PEP describes:
The role and responsibilities of the Trio
Guidelines of how trio members should be formed
Reasoning of the group of three, instead of a singular leader
Role and responsibilities of Python core developers to the trio
Sustainability considerations
Diversity and inclusivity considerations
PEP Rejection
PEP 8011 was rejected by a core developer vote
described in PEP 8001 on Monday, December 17, 2018.
PEP 8016 and the governance model it describes were chosen instead.
Open discussion points
Various tweaks to the parameters of this PEP are allowed during the governance
discussion process, such as the exact responsibilities of the Trio, term lengths
of service, voting procedures, and trio disbandment.
These will be codified by the time the PEP is ready to be voted on.
It is allowed, and perhaps even expected, that as experience is gained with this
model, these parameters may be tweaked in order to provide for a smoother
governing process. The process for tweaking these parameters will generally
be the same voting process as described in PEP 8001.
Roles and responsibilities of the leadership trio
Be open, considerate, respectful. In other words, adhering to The PSF’s code of conduct.
Pronounce on PEPs, either as a team, or individually if the other trio members agree.
Provide vision and leadership for Python, the programming language and the community.
Understand their own limitations, and seek advice whenever necessary.
Provide mentorship to the next generation of leaders.
Be a Python core developer
Be a voting member of The PSF (one of Contributing / Manager / Fellow / Supporter). [2]
Understand that Python is not just a language but also a community. They need
to be aware not just of technical issues in Python, but also of
other issues in the community.
Facilitate the formation of specialized working groups within Core Python.
See “formation of specialized working groups” section below.
Set a good example of behavior, culture, and tone for the Python community.
Just as Python looks at and learns from other communities for inspiration, other
communities will look at Python and learn from us.
Authority of the trio
To be clear, in case any dispute arises: the trio has the final
authority to pronounce on PEPs (except for the governance PEP), to
decide whether a particular decision requires a PEP, and to resolve
technical disputes in general. The trio’s authority does not include
changing the governance itself, or other non-technical disputes that
may arise; these should be handled through the process described in
PEP 8001.
What are NOT considered as the role responsibilities of the trio
The following are not expected of the trio; however, they can do these things if they wish.
They are not always the ones coming up with all the ideas, vision, problems to
solve, and so on. The trio will be open to and accepting of suggestions from core developers
and the community.
Day-to-day bug reports do not require the trio to intervene. Any core dev is able
to make decisions, but will defer to the respective focused workgroups, and
will eventually defer to the trio when there are major disagreements among core developers.
Does not run / decide on Python language summit and its logistics.
Does not run / decide on Python core sprint and its logistics.
Does not handle CoC cases. Those are the responsibility of the PSF CoC workgroup,
though the trio will speak out if they witness such cases.
Does not make decisions about other Python implementations (Cython, IronPython, etc).
Does not run / decide on Python conferences and its logistics.
Not an evangelist of Python. The trio is not expected to preach/advertise for
Python. They can if they want to, but it is not expected.
Not an educator of Python. The trio is not expected to be the ones teaching/writing
about Python. They can if they want to, but it is not expected.
The trio is not expected to be available 24/7, 365 days a year. They are free
to decide for themselves their availability for Python.
Not a PEP editor.
Guidelines for the formation of the trio
The success of this governance model relies on the members of the trio, and the
ability of the trio members to collaborate and work well together.
The three people need to have a similar vision for Python, and each can have
different skills that complement one another.
With such a team, disagreements and conflict should be rare, but can still happen.
We will need to trust that the people we select are able to resolve such conflicts among
themselves.
When it comes to selecting the members of the trio, instead of nominating various
individuals and choosing the top three, core developers will nominate trios
and vote for groups of three whom they believe can form this united trio. There
is no restriction that an individual can only be nominated in one slate.
This PEP will not name or nominate anyone into the trio.
Only once this PEP is accepted may any active core developer (who is eligible to vote)
submit nominations of groups of three.
Once this PEP is accepted and core devs have submitted their nominations, voting
can begin, and the voting mechanism described in PEP 8001 will be used.
Qualities desired out of the trio:
Be a Python core developer.
Be a voting PSF member (one of Contributing / Manager / Fellow / Supporter). [2]
Be a member of the community with good standing.
Adhere to The PSF’s code of conduct (Be open, considerate, and respectful). [1]
Be willing to accept the said roles and responsibilities.
Able to communicate and articulate their thoughts effectively.
The following are not requirements when considering someone into the trio:
“Experience being a BDFL of something” is not a requirement.
“Be a genius” is not a requirement.
Diversity and inclusivity
The core Python development team fully supports the Python Software Foundation’s
diversity statement, and welcomes participation and contribution from people
from diverse backgrounds. When nominating people to be part of the trio,
Python core developers will make every effort to take members from
underrepresented groups into consideration.
Ideally, nominations should include and reflect the diversity of core Python
contributors.
Sustainability
Lack of employer support or lack of free time should not be a factor
when identifying who should be in a trio. If there are individuals whom the core
devs have identified as having the necessary skills for being a member of the
trio, but who are unable to do it because of lack of time or financial
support, then we should open discussions with The PSF or other parties about
providing the needed support.
Additional guidelines
When nominating someone other than yourself, please first ask privately if
they are ok with being nominated, and if they are ok with being nominated in that
group of three. This is so people don’t feel pressured to accept a nomination
just because it happens publicly.
Why not other governance model
The core Python community has been familiar with the singular BDFL model for over
two decades; it is a model that has “worked” for Python. Shifting to a completely
different model all of a sudden could be disruptive to the stability of
the community. However, the community can continue to evolve
in the future.
If this PEP is chosen, it is not meant to be the only governance model for Python
going forward.
This PEP proposes a transition to a community led by a group of people (albeit a small one),
while also introducing the concept of additional specialized workgroups.
Why not more than three
Too many chefs spoil the soup.
The goal of having a leadership team is for the Python core developers to be
able to come to consensus and decisions. The larger the leadership team is,
the more difficult it will be to come to a decision.
This is also for the benefit of the members of the trio. Learning to
collaborate with other people in a team is not something that happens organically
and takes a lot of effort. It is expected that members of the trio will be part
of the team for a long period. Having to deal with two other people is
probably difficult enough. We want the trio to be able to carry out their duties and
responsibilities as efficiently as possible.
The more people in the group, the more difficult it is to find
time to meet, discuss, and come to a decision.
Roles and responsibilities of Python Core Developers to the trio
Be open, considerate, and respectful. In other words, adhere to The PSF’s Code of Conduct
Decisions and pronouncements made by individual members of the trio are to
be seen as authoritative and coming from the trio.
Once the trio has pronounced a decision, core devs will be supportive, even if
they were not supportive in the beginning (before the trio made such a decision)
Continue with day-to-day decision making in the bug tracker, and defer to the
trio if there is major disagreement
Python core developers do not handle CoC cases, those are responsibilities of
the CoC workgroup, but will speak out if they witness those cases
Be aware that they are part of the larger Python community, not just the technical
aspect of it.
Be a voting PSF member (one of Contributing / Manager / Fellow / Supporter).
Set a good example of behavior, culture, and tone for the Python community.
Term Limit
The trio is not expected to serve for life; however, a longer term is
desired. The purpose of longer-term service is to avoid the unnecessary churn of
needing to “elect”, and to provide stability and consistency in the language and
the community.
Currently, Python release managers hold their position for 5 years (one release
cycle), and that seems to work so far. Therefore, this PEP proposes that the
trio hold their position for 5 years.
Succession planning of the trio (open for discussion)
The trio should notify core devs of their intention to disband/retire/quit
from their roles at least one year in advance, to allow them to actively
mentor and train the next generation of successors, and to avoid a power vacuum.
The trio do not necessarily have to be the ones choosing who the next leaders will
be.
This PEP does not enforce that the same governance model be chosen for
the next generation. Python as language and community can continue to evolve.
By giving one year advance notice to disband, the trio is giving the core
Python community an opportunity to reflect on the success/failure of
this governance model, and choose a different governance model if needed.
However, the next governance model and leaders should be chosen/elected within
one year after the trio announced their desire to disband.
If it is decided to continue with this model of governance, the next
generation of the trio will be nominated and elected similarly to how the first
trio was nominated/chosen.
The trio should act as advisor/mentor to the next generation chosen
leaders for at least X months.
Since future trios will be chosen from among Python core developers,
it will make sense for future Python core developers to possess some, but
not necessarily all, of the qualities of the trio as laid out in this PEP.
Therefore, the guidelines for selecting trio members can also be used
as guidelines when identifying future Python core developers.
Scenario if one member of the trio needs to quit
Effective governance models provide off-ramps or temporary breaks for leaders
who need to step down or pause their leadership service.
What if one member of the chosen trio has to quit, for unforeseen reasons?
There are several possible options:
The remaining duo can select another member to fill in the role
The trio can choose to disband, core developers can nominate other trios
Core developers can choose a different governance model
Since the trio was elected as a slate, the loss of one member breaks the unit
that was elected. Therefore, a new election should be held.
Formation of working groups/area of expertise/ownership (previously BDFL delegate)
(Open for discussion).
Certain areas and topics of Core Python and the Python community require leaders
with specific specialized skills. It is recommended that there be several
working groups with more authority in their specific areas to assist the trio
in making decisions.
The role of these “specialized work groups/council” is to be the final decision
maker for controversial discussions that arise in their respective areas.
These working groups should be small (3-5 people), for similar reasons that the
leadership trio is a small group.
These working groups should consist of both Python core developers and external
experts. This is to ensure that decisions made do not favor only Python core
developers.
Python core developers will defer decisions to these working groups on their
respective topics. However, these groups will answer/defer to the trio.
These working groups can be selected and their members voted on only after this PEP is
accepted.
If this PEP is accepted, the working group can be decided within a year or two
after the PEP’s acceptance.
When selecting members of these special work groups, the trio will make
every effort to take members from underrepresented groups into consideration.
Ideally, the workgroup members should include and reflect the diversity of
the wider Python community.
Members of these workgroups do not need to be Python core developers, but they
need to be at least basic members of the PSF [2].
These workgroups are active as long as the trio is active.
Several suggested working groups to start:
Documentation of CPython
Security of CPython
Performance of CPython
The workgroups can be seen as having a similar role to the previously known role
of “BDFL-delegate” or PEP czars. The difference is that, instead of appointing a
single person as decision maker, there will be a small team of decision makers.
Another difference from the previous “BDFL-delegate” role is that the group can be
active as long as the trio is active, as opposed to only when there is a PEP
that requires their expertise.
When the trio disbands, these workgroups are disbanded too.
Why these workgroups are necessary
This is an effort to ‘refactor the large role’ of the previous Python BDFL.
Affirmation as being a member of the PSF
This PEP proposes that core developers and the trio members self-certify
themselves as being members of The PSF.
Being part of the PSF means being part of the Python community, and supporting
The PSF’s mission and diversity statement.
By being a member of The PSF, Python core developers declare their support for
Python and agree to the community Code of Conduct.
For more details of The PSF membership, see: PSF Membership FAQ [2].
Reasoning for choosing the name trio
Not to be confused with Python trio (an async library).
The “trio” is short and easy to pronounce, unlike other words that are
long and can have negative interpretations, like triad, trinity, triumvirate,
threesome, etc.
References
[1]
The PSF’s Code of Conduct (https://www.python.org/psf/codeofconduct/)
[2]
PSF Membership FAQ (https://www.python.org/psf/membership/)
Copyright
This document has been placed in the public domain.
| Rejected | PEP 8011 – Python Governance Model Lead by Trio of Pythonistas | Informational | This PEP proposes a governance model for the Core Python development community,
led by a trio of equally authoritative leaders. The Trio of Pythonistas
(ToP, or simply Trio) is tasked with making final decisions for the language.
It differs from PEP 8010 by specifically not proposing a central singular leader,
but instead a group of three people as the leaders. |
PEP 8012 – The Community Governance Model
Author:
Łukasz Langa <lukasz at python.org>
Status:
Rejected
Type:
Informational
Topic:
Governance
Created:
03-Oct-2018
Table of Contents
PEP Rejection
Abstract
Rejected Models
Let’s have another BDFL
Challenge: There is no other Guido
Risk: Malevolent Dictator For Life
Observation: We don’t actually need a Dictator
Risk: The warm and fuzzy feeling of a vague proposal
Let’s have a Council
Risk: Dilution and confusion
Risk: Internal Conflict
Motivation
Rationale
Specification
Key people and their functions
The core team
Experts
Moderators
Regular decision process
Controversial decision process
PEP, Enhanced
Very controversial PEPs
Revisiting deferred and rejected PEPs
Other Voting Situations
Nominating a new core developer
Votes of no confidence
Voting Mechanics
Omissions
Acknowledgements
Copyright
PEP Rejection
PEP 8012 was rejected by a core developer vote
described in PEP 8001 on Monday, December 17, 2018.
PEP 8016 and the governance model it describes were chosen instead.
Abstract
This PEP proposes a new model of Python governance based on consensus
and voting by the Python community. This model relies on workgroups to carry
out the governance of the Python language. This governance model works without
the role of a centralized singular leader or a governing council.
It describes how, when, and why votes are conducted for decisions affecting
the Python language. It also describes the criteria for voting eligibility.
Should this model be adopted, it will be codified in PEP 13.
This model can be affectionately called “The Least Worst Governance
Model” for its property that, while far from ideal, it is still the most
robust one compared to the others. Since avoiding issues inherent to
the other models is a paramount feature of the Community Governance
Model, we start the discussion a bit unusually: by rejecting the
other models.
Rejected Models
Let’s have another BDFL
This seems like a very attractive idea because it’s a model we know.
One Dictator to rule us all.
Challenge: There is no other Guido
There is no other single person with the unique skillset of Guido van
Rossum. Such a person would need to have the technical, communication, and
organizational experience to lead the project successfully. Specifically, the
person would need to:
set and articulate a cohesive long-term vision for the project;
possess deep technical understanding of the runtime, the standard library,
and the wider third-party library context;
negotiate and resolve contentious issues in ways acceptable to all
parties involved;
have free time and possess the energy to sustain continuous involvement
over periods of years.
Risk: Malevolent Dictator For Life
What if we got somebody who is not as well suited for the position as
our first Dictator? There are possible scenarios in which this could
lead to severe consequences.
The Dictator could gather insufficient trust due to missing technical
depth, a “close” election, inconsistent vision, poor ability to deal
with conflict or burnout, and so on. Given a controversial decision
decided by the Dictator in a specific way, a Dictator with
insufficient trust may cause a split within the project.
The Dictator setup invites lobbying concentrated on a single person.
Unless that person is immune to leverage due to wealth, health, and
a stable life situation, this poses risk of malicious actors steering
the project from behind the curtain.
Finally, the Dictator coming from a particular part of the community
may put more weight on the needs and interests of that particular part
of the user base, alienating others.
Observation: We don’t actually need a Dictator
The irony of the Dictator model is that it requires an election. Better
yet, we need an election to even decide on which governance model to
use.
If we are already able to solve two problems of this gravity via the
community process, why not keep using it for all subsequent decisions?
Risk: The warm and fuzzy feeling of a vague proposal
One last thing worth mentioning is that when a BDFL model is suggested,
it’s easy to bypass the criticism above by not mentioning who the BDFL
should be. That way the hopeful reader can project their best
expectations and wants onto the abstract BDFL, making the idea appear
more attractive. This is a mistake.
Without naming the BDFL in the model proposal we are not talking about
a concrete model. We can avoid asking and answering the hard questions.
We can imagine our best-case scenario, a candidate we’d like to serve
the role.
Omitting a name for the BDFL also puts the Community Model at an unfair disadvantage.
We already know the good, the bad, and the ugly of our core developer
group. It’s no platonic ideal, no perfect sphere with no friction. In
fact, we expect there to be a fair amount of friction and imperfections.
Thus, to fairly assess the BDFL model proposal, dear reader, you
should imagine the worst possible person within our team as that
BDFL. A concrete human being. Imagine it’s me.
Conclusion While this has been our history, without Guido, this model
does not serve the best interests of the language into the future.
Let’s have a Council
This group of people roughly shares the responsibilities of a Dictator. The
group can also be called a Triumvirate, a Quorum, Elders, Steering Committee,
and so on.
Risk: Dilution and confusion
This model favors a small group, between three and five people.
That way it shares most of the criticism with the Dictator model,
amplified. Having not one but, say, three people in position of power
dilutes responsibility while still providing high risk of lobbying,
insufficient trust, or alienating parts of the community.
Risk: Internal Conflict
Additionally, having multiple people share the responsibility of
governance creates ample opportunity for internal conflict,
inconsistent long-term vision of the project, and multiplies the
required continuous time involvement by its members (it’s no Quorum
if they can’t “reach quorum” due to other time commitments).
Just like with a frictionless spherical BDFL, reject ideas of
Councils without considering how it would work for you if that
Council consisted of three people you find inadequate for the role.
Imagine if I had two friends.
Most importantly, just like with a Dictator, we don’t need a Council.
By the time we had one, we would have already had two successful
elections. Why not keep voting?
Conclusion This model has similar risks like a Dictator, only worse.
Motivation
Now that we rejected the basics of other governance models, let’s talk why we
even need a governance model on top of a loosely defined group of committers.
Stability and Reliability We want to prevent single committers from
making wide-reaching changes that impact the future of the language or its
usability. Coherent vision and backwards compatibility are important in any
programming language, but they are doubly important for Python which is very
dynamic (e.g. has very complex backwards compatibility implications).
Diverse Uses of Python Moreover, Python is used by a
diverse group of users, from school children through scientists to
corporations with multi-million line codebases. We want to include
all our varied audiences.
Vitality We want to avoid stagnation. Python is a mature project but it
needs to keep evolving to stay relevant, both the runtime and the programming
language. To do that, people interested in improving a particular part
of the project should be able to do so without needless friction.
But for substantial changes, we want some discourse and reflection to ensure
the changes are wise.
Rationale
Inclusive The Community Model is the most inclusive model. No single person
or a small group of people is in a distinguished position of power over
others. Contributors and any workgroups in this model are self-selecting.
Pragmatic This model ensures no user group is put at a disadvantage due to
the interests of a single person or a small group of people.
Proven This model works. There are a number of large open-source projects
run this way (two of which, Rust and Django, are described in PEP 8002).
ECMAScript and C++ are similarly developed.
Specification
Key people and their functions
The core team
The Python project is developed by a team of core developers.
While membership is determined by presence in the “Python core” team
in the “python” organization on GitHub, contribution takes many forms:
committing changes to the repository;
reviewing pull requests by others;
triaging bug reports on the issue tracker;
discussing topics on official Python communication channels.
Some contributors may be considered dormant; in other words, they did not
contribute to the last two releases of CPython. Any dormant contributor can at
any time resume contribution.
Experts
The Python Developer’s Guide lists a number of interest areas along with
names of core developers who are recognized as experts in the given
area. An expert or a sub-team of experts has the following
responsibilities:
responding to issues on the bug tracker triaged to the given interest
area on a timely basis;
reviewing pull requests identified as belonging to the given interest
area on a timely basis;
overseeing cohesive design in the evolution of the given interest
area.
A core developer can assign and unassign themselves at will to
a given interest area. Existing experts listed for the given interest
area must be made aware of this change and have to unanimously agree to
it.
If a given interest area lists multiple experts, they form a sub-team
within the core team. They are responsible for the given interest area
together.
A core developer should avoid membership as an expert in too many
interest areas at the same time. This document deliberately doesn’t
specify a maximum number; it simply signals that overexertion leads to
burnout and is a risk to the project’s ability to function without
a given contributor.
Moderators
There is a group of people, some of which are not core developers,
responsible for ensuring that discussions on official communication
channels adhere to the Code of Conduct. They take action in view of
violations.
Regular decision process
Primary work happens through bug tracker issues and pull requests.
Core developers should avoid pushing their changes directly to the cpython
repository, instead relying on pull requests. Approving a pull
request by a core developer allows it to be merged without further
process.
Notifying relevant experts about a bug tracker issue or a pull request
is important. Reviews from experts in the given interest area are
strongly preferred, especially on pull request approvals. Failure to
do so might end up with the change being reverted by the relevant
expert.
Experts are not required to listen to the firehose of GitHub and bug
tracker activity at all times. Notifying an expert explicitly during
triage or bug/pull request creation may be necessary to get their
attention.
Controversial decision process
Substantial changes in a given interest area require a PEP. This
includes:
Any semantic or syntactic change to the language.
Backwards-incompatible changes to the standard library or the C API.
Additions to the standard library, including substantial new
functionality within an existing library.
Removing language, standard library, or C API features.
Failure to get a substantial change through the PEP process might result
in the change being reverted.
Changes that are bug fixes can be exempt from the PEP requirement. Use
your best judgement.
PEP, Enhanced
The PEP process is augmented with the following changes and clarifications
over information already present in PEP 1:
PEPs are not merged until the final decision is made on them; they are
open pull requests on GitHub until that moment;
to make review easier, all changes to the PEP under review should be
made as separate commits, allowing for granular comparison;
a submitted PEP needs to identify the area of interest and relevant
experts as the body that makes the final decision on it;
if the PEP author is one of the experts of the relevant area of
interest, they must name another person from outside of that interest
area to contribute to the final decision in their place;
the PEP author is responsible for gathering and integrating feedback
on the PEP using the official communication channels, with the goal of
building consensus;
all community members must be enabled to give feedback;
at some point, one of the named experts posts a “summary comment” that
lays out the current state of discussion, especially major points of
disagreement and tradeoffs; at the same time the expert proposes
a “motion for final comment period” (FCP), along with a proposed
disposition to either:
accept;
accept provisionally;
reject; or
defer the PEP.
to enter the FCP, the PEP must be signed off by all experts of the
relevant area of interest;
the FCP lasts for fourteen calendar days to allow stakeholders to file
any final objections before a decision is reached.
Very controversial PEPs
If a core contributor feels strongly against a particular PEP, during
its FCP they may raise a motion to reject it by vote. Voting details
are described below in “Voting Mechanics”.
This should be a last resort and thus a rare occurrence. It splits the
core team and is a stressful event for all involved. However, the
experts filing for a FCP for a PEP should have a good sense whether
a motion to reject it by vote is likely. In such a case, care should be
taken to avoid prematurely filing for a FCP.
There is no recourse for the opposite situation, i.e. when the
experts want to reject a PEP but others would like it accepted. This
ensures that the relevant experts have the last say on what goes in.
If you really want that change, find a way to convince them.
Moderators on official communication channels enforce the Code of
Conduct first and foremost, to ensure healthy interaction between all
interested parties. Enforcement can result in a given participant
being excluded from further discussion and thus the decision process.
Revisiting deferred and rejected PEPs
If a PEP is deferred or rejected, the relevant experts should be
contacted first before another attempt at the same idea is made.
If the experts agree there is substantial evidence to justify
revisiting the idea, a pull request editing the deferred or rejected
PEP can be opened.
Failure to get proper expert buy-in beforehand will likely result in
immediate rejection of a pull request on a deferred or rejected PEP.
Other Voting Situations
Nominating a new core developer
A champion nominates a person to become a new core developer by posting
on official communication channels. A vote is opened.
If any existing core developer does not feel comfortable with the nominee
receiving the commit bit, they should preferably address this concern in
the nomination thread. If there is no satisfactory resolution, they can
cast a negative vote.
In practice, nominating a person as a core developer should often be met
with surprise by others that this person is not a core developer yet.
In other words, it should be done when the candidate is already known
and trusted well enough by others. We should avoid nominations based on
potential.
Votes of no confidence
Removing a core developer from the core team;
Disbanding the experts team for a given area of interest.
Those describe a situation where a core developer is forcefully
removed from the core team or an experts team is forcefully disbanded.
Hopefully those will never have to be exercised but they are explicitly
mentioned to demonstrate how a dysfunctional area of interest can be
healed.
If a core developer is removed by vote from the core team, they lose
the ability to interact with the project. It’s up to the Moderators’
discretion to remove their ability to post on the bug tracker and GitHub
or just moderate their future behavior on a case-by-case basis.
If the experts team for an area of interest is disbanded, other core
developers can step up to fill the void at will. Members of the
disbanded experts team cannot self-nominate to return.
Voting Mechanics
All votes described in this document are +1/-1/0 (“Yea”/”Nay”/”Present”)
recorded votes. There are no other vote values, in particular values
out of range or fractions (like +0.5) are invalid.
Votes take fourteen calendar days. The starting date is determined by
the timezone of the person who filed the motion to vote. The end
date is fourteen days later Anywhere-On-Earth.
Dormant core developers as defined in “Key people and their functions”
above are not counted towards the totals if they abstain. However, they
can vote if they choose to do so and that way they count as active.
Voting is a form of contribution.
Voting is done by a commit to a private repository in the “python”
organization on GitHub. The repository is archived and publicized after
the voting period is over. The repository’s name should start with
“vote-“.
Changes to one’s vote during the voting period are allowed. Peeking
at other developers’ cast votes during the time of the vote is possible.
Every situation requires a different vote percentage (a sketch
illustrating these thresholds follows this list):
PEP rejection by vote requires over 1/3rd of the non-dormant core
developer population to explicitly vote to reject. Note that if
more than 1/3rd of core developers decide against a PEP, this means
there exists no super-majority of core developers who are in favor
of the change. This strongly suggests the change should not be made
in the shape described by the PEP.
New core developer nomination requires there to be no votes cast
against it.
Votes of no confidence require a super-majority of at least 2/3rds of
the non-dormant core developer population to explicitly vote in favor
of the motion.
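As a non-normative illustration only, the thresholds above could be
checked with a short Python sketch; the function names and arguments are
assumptions made for this example and are not part of the model, and
dormancy tracking and tie handling are left to the procedures in PEP 8001:
def pep_rejected_by_vote(reject_votes, non_dormant_total):
    # Rejection needs strictly more than one third of non-dormant core devs.
    return reject_votes > non_dormant_total / 3
def nomination_accepted(votes_against):
    # A new core developer nomination passes only with no votes cast against it.
    return votes_against == 0
def no_confidence_passes(votes_in_favor, non_dormant_total):
    # Votes of no confidence need a super-majority of at least two thirds.
    return votes_in_favor >= 2 * non_dormant_total / 3
# Example with 90 non-dormant core developers:
print(pep_rejected_by_vote(31, 90))   # True: 31 > 30
print(no_confidence_passes(59, 90))   # False: 59 < 60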
Omissions
This document deliberately omits listing possible areas of interest
within the project. It also does not address election and management
of Moderators which are done by the Python Software Foundation and its
Code of Conduct Working Group which can be contacted by mailing
[email protected].
Acknowledgements
Thank you to the authors of PEP 8002 which was a helpful resource in
shaping this document.
Thank you to Alex Crichton and the Rust team for a governance model
that was a major inspiration for this document.
Copyright
This document has been placed in the public domain.
| Rejected | PEP 8012 – The Community Governance Model | Informational | This PEP proposes a new model of Python governance based on consensus
and voting by the Python community. This model relies on workgroups to carry
out the governance of the Python language. This governance model works without
the role of a centralized singular leader or a governing council. |
PEP 8013 – The External Council Governance Model
Author:
Steve Dower <steve.dower at python.org>
Status:
Rejected
Type:
Informational
Topic:
Governance
Created:
14-Sep-2018
Table of Contents
Abstract
PEP Rejection
The Importance of the Grey Area
Model Overview
Key people and their functions
Regular decision process
Controversial decision process
Election terms
Election voting process
No-confidence voting process
Examples of intended behaviour
Scenario 1 - The Case of the Vague PEP
Scenario 2 - The Case of the Endless Discussion
Scenario 3 - The Case of the Unconsidered Users
Scenario 4 - The Case of the Delegated Decision
Copyright
Abstract
This PEP proposes a new model of Python governance based on a Council
of Auditors (CoA) tasked with making final decisions for the language.
It differs from PEP 8010 by specifically not proposing a central
singular leader, and from PEP 8011 by disallowing core committers from
being council members. It describes the size and role of the council,
how the initial group of council members will be chosen, any term
limits of the council members, and how successors will be elected.
It also spends significant time discussing the intended behaviour of
this model. By design, many processes are not specified here but are
left to the people involved. In order to select people who will make
the best decisions, it is important for those involved to understand
the expectations of the CoA but it is equally important to allow the
CoA the freedom to adjust process requirements for varying
circumstances. This only works when process is unspecified, but all
participants have similar expectations.
This PEP does not name the members of the CoA. Should this model be
adopted, it will be codified in PEP 13 along with the names of all
officeholders described in this PEP.
PEP Rejection
PEP 8013 was rejected by a core developer vote
described in PEP 8001 on Monday, December 17, 2018.
PEP 8016 and the governance model it describes were chosen instead.
The Importance of the Grey Area
In any actual decision-making process, there is going to be grey area.
This includes unexpected scenarios, and cases where there is no
“correct” answer.
Many process plans attempt to minimise grey area by defining processes
clearly enough that no flexibility is required.
This proposal deliberately goes the other way. The aim is to provide a
robust framework for choosing the best people to handle unexpected
situations, without defining how those people should handle those
situations.
Examples are provided of “good” responses to some situations as an
illustration. The hope is that the “best” people are the best because
they would live up to those examples. The process that is proposed has
been designed to minimise the damage that may be caused when those
people turn out not to be the best.
Grey area is guaranteed to exist. This proposal deliberately embraces
and works within that, rather than attempting to prevent it.
Model Overview
Key people and their functions
The Council of Auditors (CoA) is a council of varying size, typically
two to four people, who are elected for the duration of a Python
release. One member of the CoA is considered the President, who has
some minor points of authority over the other members.
The CoA has responsibility for reviewing controversial decisions in
the form of PEPs written by members of the core development team. The
CoA may choose to accept a PEP exactly as presented, or may request
clarification or changes. These changes may be of any form and for any
reason. This flexibility is intentional, and allows the process to
change over time as different members are elected to the CoA. See the
later sections of this document for examples of the kinds of requests
that are expected.
The CoA only pronounces on PEPs submitted to python-committers. There
is no expectation that the CoA follows or participates on any other
mailing lists. (Note that this implies that only core developers may
submit PEPs. Non-core developers may write and discuss proposals on
other mailing lists, but without a core developer willing to support
the proposal by requesting pronouncement, it cannot proceed to
acceptance. This is essentially the same as the current system, but is
made explicit here to ensure that members of the CoA are not expected
to deal with proposals that are not supported by at least one core
developer.)
The CoA may not delegate authority to individuals who have not been
elected by the core developer team. (One relevant case here is that
this changes the implementation of the existing BDFL-Delegate system,
though without necessarily changing the spirit of that system. See the
later sections, particularly example scenario four, for more
discussion on this point.)
The Release Manager (RM) is also permitted the same ability to request
changes on any PEPs that specify the release they are responsible for.
After feature freeze, the RM retains this responsibility for their
release, while the CoA rotates and begins to focus on the subsequent
release. This is no different from the current process. The process
for selection of a RM is not changed in this proposal.
Core developers are responsible for electing members of the CoA, and
have the ability to call a “vote of no confidence” against a member of
the CoA. The details of these votes are discussed in a later section.
Where discussions between core developers and members of the CoA
appear to be ongoing but unfruitful, the President may step in to
overrule either party. Where the discussion involves the President, it
should be handled using a vote of no confidence.
Members of the CoA may choose to resign at any point. If at least two
members of the CoA remain, they may request a new election to refill
the group. If only one member remains, the election is triggered
automatically. (The scenario when the President resigns is described
in a later section.)
The intended balance of power is that the core developers will elect
members of the CoA who reflect the direction and have the trust of the
development team, and also have the ability to remove members who do
not honour commitments made prior to election.
Regular decision process
Regular decisions continue to be made as at present.
For the sake of clarity, controversial decisions require a PEP, and
any decisions requiring a PEP are considered as controversial.
The CoA may be asked to advise on whether a decision would be better
made using the controversial decision process, or individual members
of the CoA may volunteer such a suggestion, but the core development
team is not bound by this advice.
Controversial decision process
Controversial decisions are always written up as PEPs, following the
existing process. The approver (formerly “BDFL-Delegate”) is always
the CoA, and can no longer be delegated. Note that this does not
prevent the CoA from deciding to nominate a core developer to assess
the proposal and provide the CoA with a recommendation, which is
essentially the same as the current delegation process.
The CoA will pronounce on PEPs submitted to python-committers with a
request for pronouncement. Any member of the CoA, or the current RM,
may request changes to a PEP for any reason, provided they include
some indication of what additional work is required to meet their
expectations. See later sections for examples of expected reasons.
When all members of the CoA and the RM indicate that they have no
concerns with a PEP, it is formally accepted. When one or more members
of the CoA fail to respond in a reasonable time, the President of the
CoA may choose to interpret that as implied approval. Failure of the
President to respond should be handled using a vote of no confidence.
Election terms
Members of the CoA are elected for the duration of a release. The
members are elected prior to feature freeze for the previous release,
and hold their position until feature freeze for their release.
Members may seek re-election as many times as they like. There are no
term limits. It is up to the core developers to prevent re-election of
the CoA members where there is consensus that the individual should
not serve again.
Election voting process
The election process for each member of the CoA proceeds as follows
(a sketch of how the resulting tally is read follows this list):
a nomination email is sent to python-committers
a seconding email is sent
the nominee is temporarily added to python-committers for the
purpose of introducing themselves and presenting their position
voting opens two weeks prior to the scheduled feature freeze of the
previous release
votes are contributed by modifying a document in a private github
repository
each core developer may add +1 votes for as many candidates as they
like
after seven days, voting closes
the nominee with the most votes is elected as President of the CoA
the next three nominees with the most votes and also at least 50% of
the number of votes received by the President are elected as the
other members of the CoA
where ties need to be resolved, the RM may apply one extra vote for
their preferred candidates
accepted nominees remain on python-committers; others are removed
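A minimal, non-normative Python sketch of how the approval tallies above
might be turned into a result; the function name and the example data are
assumptions for this illustration, and the RM's tie-breaking vote is omitted:
def coa_election_result(plus_one_votes):
    # Candidates ranked by +1 votes; the top vote getter is President.
    ranked = sorted(plus_one_votes, key=plus_one_votes.get, reverse=True)
    president = ranked[0]
    # The next three nominees who also reach 50% of the President's votes
    # fill the remaining seats on the CoA.
    cutoff = plus_one_votes[president] / 2
    members = [c for c in ranked[1:] if plus_one_votes[c] >= cutoff][:3]
    return president, members
print(coa_election_result({'dana': 60, 'erin': 45, 'frank': 31, 'gus': 20}))
# -> ('dana', ['erin', 'frank'])  ('gus' falls below the 50% threshold)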
No-confidence voting process
A vote of no confidence proceeds as follows:
a vote of no confidence email is sent to python-committers, naming
the affected member of the CoA, justifying the nomination, and
optionally listing accepted PEPs that the nominator believes should
be reverted
a seconding email is sent within seven days
the nominated member of the CoA is allowed seven days to respond,
after which the nominator or the seconder may withdraw
if no nominator or seconder is available, no further action is
taken
voting opens immediately
each core developer may add a +1 vote (remove the CoA member) or
a -1 vote (keep the CoA member) by modifying a document in a
private github repository
after seven days, voting closes
if +1 votes exceed -1 votes, the CoA member is removed from
python-committers and any nominated PEPs are reverted
if requested by the remaining members of the CoA, or if only one
member of the CoA remains, a new election to replace the removed
member may be held following the usual process.
in the case of removing the President of the CoA, the candidate
who originally received the second-most votes becomes President
Examples of intended behaviour
This section describes some examples of the kind of interactions that
we hope to see between the CoA and the core developers. None of these
are binding descriptions, but are intended to achieve some consensus
on the types of processes we expect. The CoA candidates may campaign
on the basis of whatever process they prefer, and core developers
should allocate votes on this basis.
Scenario 1 - The Case of the Vague PEP
Often in the past, initial proposals have lacked sufficient detail to
be implementable by anyone other than the proposer. To avoid this,
the CoA should read proposals “fresh” when submitted, and without
inferring or using any implied context. Then, when an aspect of a PEP
is not clear, the CoA can reject the proposal and request
clarifications.
Since the proposal is rejected, it must be modified and resubmitted in
order to be reviewed again. The CoA will determine how much guidance
to provide when rejecting the PEP, as that will affect how many times
it will likely be resubmitted (and hence affect the CoA’s own
workload). This ensures that the final PEP text stands alone with all
required information.
Scenario 2 - The Case of the Endless Discussion
From time to time, a discussion between Python contributors may seem
to be no longer providing value. For example, when a large number of
emails are repeating points that have already been dealt with, or are
actively hostile towards others, there is no point continuing the
“discussion”.
When such a discussion is occurring on python-committers as part of a
request for pronouncement, a member of the CoA should simply declare
the thread over by rejecting the proposal. In most known cases,
discussion of this sort indicates that not all concerns have been
sufficiently addressed in the proposal and the author may need to
enhance some sections.
Alternatively, and in the absence of any rejection from the other
members of the CoA, the President may declare the thread over by
accepting the proposal. Ideally this would occur after directly
confirming with the rest of the CoA and the RM that there are no
concerns among them.
When such a discussion is occurring on another list, members of the
CoA should be viewed as respected voices similar to other core
developers (particularly those core developers who are the named
experts for the subject area). While none have specific authority to
end a thread, preemptively stating an intent to block a proposal is a
useful way to defuse potentially useless discussions. Members of the
CoA who voluntarily follow discussions other than on python-committers
are allowed to suggest the proposer withdraw, but can only actually
approve or reject a proposal that is formally submitted for
pronouncement.
Scenario 3 - The Case of the Unconsidered Users
Some proposals in the past may have been written up and submitted for
pronouncement without considering the impact on particular groups of
users. For example, a proposal that affects the dependencies required
to use Python on various machines may have an adverse impact on some
users, even if many are unaffected due to the dependencies being
typically available by default.
Where a proposal does not appear to consider all users, the CoA might
choose to use their judgement and past experience to determine that
more users are affected by the change than described in the PEP, and
request that the PEP also address these users. They should identify
the group of users clearly enough that the proposer is able to also
identify these users, and either clarify how they were addressed, or
make amendments to the PEP to explicitly address them. (Note that this
does not involve evaluating the usefulness of the feature to various
user groups, but simply whether the PEP indicates that the usefulness
of the feature has been evaluated.)
Where a proposal appears to have used flawed logic or incorrect data
to come to a certain conclusion, the CoA might choose to use other
sources of information (such as the prior discussion or a submission
from other core developers) to request reconsideration of certain
points. The proposer does not necessarily need to use the exact
information obtained by the CoA to update their proposal, provided
that whatever amendments they make are satisfactory to the CoA. For
example, a PEP may indicate that 30% of users would be affected, while
the CoA may argue that 70% of users are affected. A successful
amendment may include a different but more reliable percentage, or may
be rewritten to no longer depend on the number of affected users.
Scenario 4 - The Case of the Delegated Decision
Some proposals may require review and approval from a specialist in
the area. Historically, these would have been handled by appointing a
BDFL-Delegate to make the final decision on the proposal. However, in
this model, the CoA may not delegate the final decision making
process. When the CoA believes that a subject matter expert should
decide on a particular proposal, the CoA may nominate one or more
individuals (or accept their self-nomination) to a similar position to
a BDFL Delegate. The terms of these experts’ roles may be set as the
CoA sees fit, though the CoA always retains the final approval.
As a concrete example, assume a proposal is being discussed about a
new language feature. Proponents claim that it will make the language
easier for new developers to learn. Even before an official proposal
is made, the CoA may indicate that they will not accept the proposal
unless person X approves, since person X has a long history teaching
Python and their judgement is trusted. (Note that person X need not be
a core developer.)
Having been given this role, person X is able to drive the discussion
and quickly focus it on viable alternatives. Eventually, person X
chooses the alternative they are most satisfied with and indicates to
the CoA that they approve. The proposal is submitted as usual, and the
CoA reviews and accepts it, factoring in person X’s opinion.
Copyright
This document has been placed in the public domain.
| Rejected | PEP 8013 – The External Council Governance Model | Informational | This PEP proposes a new model of Python governance based on a Council
of Auditors (CoA) tasked with making final decisions for the language.
It differs from PEP 8010 by specifically not proposing a central
singular leader, and from PEP 8011 by disallowing core committers from
being council members. It describes the size and role of the council,
how the initial group of council members will be chosen, any term
limits of the council members, and how successors will be elected. |
PEP 8014 – The Commons Governance Model
Author:
Jack Jansen
Status:
Rejected
Type:
Informational
Topic:
Governance
Created:
16-Sep-2018
Table of Contents
Abstract
PEP Rejection
Introduction
Rationale
Decision Process
Council of Elders
Council operation
Limitation of freedom
Council composition
Council membership
Discussion
Copyright
Abstract
This PEP proposes a governance model with as few procedures, defined terms and
percentages as possible. It may also be called The Anarchist Governance Model
but uses Commons for now because of possible negative connotations of the
term Anarchist to some audiences.
The basic idea is that all decisions are in principle voted on by the whole
community, but in practice voted on by only a subset of the
community: although the whole community is entitled to vote, in practice
it will always be only a small subset that votes
on a specific decision. The vote is overseen by an impartial council that
judges whether the decision has passed or not. The intention is that this
council bases its decision not only on the ratio of yes and no votes but
also on the total number of votes, on the gravity of the proposal being
voted on and possibly the individual voters and how they voted. Thereby this
council becomes responsible for ensuring that each individual decision is
carried by a sufficient majority.
PEP Rejection
PEP 8014 was rejected by a core developer vote
described in PEP 8001 on Monday, December 17, 2018.
PEP 8016 and the governance model it describes were chosen instead.
Introduction
The Commons Governance Model tries to ensure that all decisions are endorsed
by, or at least acceptable to, a sufficient majority of the Python
community.
Unfortunately the previous paragraph has two terms that are very hard to
quantify in the general case: sufficient majority and Python community.
This is because both terms in reality depend on the specific case that is
being decided. To give an example of this difficulty: for a PEP that
proposes a backward-compatible change to some API, a simple majority of the
core developers who were interested in voting on the PEP in the first place
is probably sufficient. But for a change that has more far-reaching
consequences, such as a Python 3 to Python 4 transition, a real majority may be
wanted, along with a demonstration that there seems to be sufficient
support in the user base. And for a change that transcends
Python-the-language, such as decisions on abolishing non-inclusive language,
it becomes very vague.
The Commons Governance Model attempts to sidestep this issue by not
defining what the terms sufficient majority and Python community mean in
the general case, by proposing a body that will decide so in specific
cases.
The model proposes creating a Council of Elders that oversees the decision
process, determining whether a specific proposal has enough support on a
case-by-case basis. There will be a vote on every individual PEP,
and the Council of Elders will declare whether the
outcome of the vote is sufficient to carry the decision in this specific case.
The model addresses only the roles traditionally held by the BDFL in the
decision process, not other roles.
The term Commons in the model name is loosely based on its historic use as
a shared resource to be used by all and cared for by all. The picture you
should have in mind with this model is a sizeable group of peasants
discussing some plan for the future on the village green on a warm summer
evening, after which the vote is taken and the village elders pronounce
the outcome. Then the banquet begins.
The Commons Governance Model is different from most of the other governance
proposals (with the possible exception of 8012), because it explicitly places
supreme power with the whole community.
Rationale
The rationale for the model is that a model that casts everything in concrete will
have unintended negative side effects. For example, a governance model that
assigns voting rights to Python committers may cause an individual not
to be accepted as a committer because there are already a lot of committers
from the company the new candidate works for.
As another example, setting a fixed percentage for PEP acceptance may lead
to party-formation amongst the voters, with individual PEPs no longer being
judged on individual merit but along party lines (if you support my PEP I
will support yours).
There is also the issue that one-person-one-vote is not the best model for
something like Python. Again an example: in case of a split vote (or a vote
sufficiently close to being split) the opinion of core developer Guido
van Rossum should probably outweigh the opinion of core developer Jack
Jansen. Trying to formalize this in a voting model is going to lead to a
very complex model, that is going to be wrong on boundary cases anyway. The
model presented here leaves deciding on such issues to the (hopefully
sensible) council of elders.
Decision Process
All important decisions go through a PEP process. Each PEP has someone
responsible for it, called the author here, but that does not have to be a
single person, and it does not have to be the person that actually wrote the
text. So for author you could also read champion or shepherd or
something like that.
The PEP author is responsible for organizing a vote on the PEP. This vote is
public, i.e. the voters are identified and the results are known to all.
Voting may be simple +1/0/-1, but might also be extended with +2/-2 with a
very terse explanation of why the voter feels very strongly about the issue. Such
an annotation would serve as an explanation to the Council of Elders. Voters
are annotated with their community status (core developer, etc).
The vote is clearly separated from the discussion, by using a well-defined Discourse
category or tag, a special mailing list or a similar technical method
(such as a website vote.python.org where people have to log in so their
community status can be automatically added, and their identity can be somewhat
confirmed).
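As a purely illustrative sketch (not part of the proposal, and using hypothetical names), such an annotated public ballot could be represented as follows:
from dataclasses import dataclass

@dataclass
class Ballot:
    voter: str        # voters are identified, since the vote is public
    status: str       # community status, e.g. "core developer"
    vote: int         # +1, 0 or -1 (optionally +2 or -2)
    note: str = ""    # terse explanation, expected for +2/-2 votes

ballots = [
    Ballot("Voter A", "core developer", +1),
    Ballot("Voter B", "user", -2, "breaks my deployment workflow"),
]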
The PEP author presents the PEP and the vote results to the Council of Elders.
The council ponders two things:
the gravity of the PEP and its implications,
the measurable vote results (how many people voted, which individuals voted, and what they voted).
They pronounce a tentative decision on whether the vote passed, and this decision is published.
If the decision is that the vote results do not demonstrate enough support
from the community for the decision, the burden is on the author to try to
gather more support and resubmit the vote at a later date. Alternatively, the
author can retract the proposal. The period for gathering more support is
time-limited (a month seems a reasonable time); if no vote has been resubmitted
after that period, the proposal is rejected.
If the tentative decision is that the results do demonstrate enough support
a fairly short waiting period starts (in the order of weeks). During this
period anyone can appeal to the Council of Elders, but only on the grounds
that the vote does not reflect a sufficient majority of the community.
After the waiting period the council pronounces a final decision. The PEP
is either accepted or, if the council is swayed by an appeal, goes back to
the state where more support has to be demonstrated.
Council of Elders
The intention of the Council of Elders is that they, together, are capable
of judging whether the will of the Python community is upheld in a specific
vote.
The Council of Elders is not a replacement of the BDFL by a group of
people with the same power as the BDFL: it will not provide guidance on the
direction of Python, it only attempts to ensure the outcome of a vote
represents the will of the community.
The Council of Elders is not like the US Supreme Court, which has actual
decision power, the council only oversees the voting process to ensure that
the community is represented in the vote. And the Council of Elders is most
definitely not like the Spanish Inquisition, because fear, surprise and
ruthless efficiency are things we can do without (but there is some merit in
using the cute scarlet regalia).
The council is somewhat like the Dutch
Hoge Raad (which is unfortunately often translated as Supreme Court in
English) in that they judge the process and the procedures followed and can
only send cases back for a renewed judgement.
It is also somewhat like the election commission that many countries have
(under different names) in that it oversees elections.
Council operation
The council members are volunteers, and most likely have other roles within
the Python community as well (not to mention a life outside Python). This
means that the workload on the members should be kept to a minimum. It also
means that it should be clear when an individual council member speaks as a
council member and when they speak as themselves. And we should care about
the emotional load: council members should not be held accountable for
decisions by random flamers on the Python mailing list.
The proposal attempts to minimize the workload through two methods:
Most of the actual work is to be done by the PEP author and the community:
the Council of Elders does not organize the vote or tally the results.
The idea behind the first tentative decision is that mistakes by the Council
of Elders (most likely misjudging how far-reaching a PEP is) are not fatal, because
the community has a chance to point out these mistakes. Practically speaking,
this means that the tentative decision can be taken by
a subset of the council, depending on the community to correct them.
Getting seven hard-working professionals together every two weeks, even by
email, may be a bit much to ask.
Clarifying when an individual Elder speaks on behalf of the Council is
probably best done by using a special email address, or some Discourse topic
into which only Elders can post. There is an analogy here with the Pope
speaking Ex Cathedra or just as himself (in which case he is not
infallible). The elders are most likely respected members of the community
and it would be a bad idea if they feel they cannot voice their personal opinion on
a PEP because they are on the council.
Discussion between community members and the Council of Elders, i.e. when appealing a
decision, should be done in a different forum (Discourse topic, mailing list).
The decisions of the Council of Elders should be seen as decisions of the
council as a whole, not as decisions of the individual members. In a first implementation
Elders should post under their own name (with the fact that they speak as a
council member conferred by the topic they post to, or possibly a special badge).
If it turns out that Elders become individual targets for ad-hominem attacks
we should revisit this and come up with some method of anonymity.
Limitation of freedom
If a specific vote has a true majority (for or against) of core team members
(more than 50% + 1 of all core team members) that outcome passes. If a specific
vote has a true majority (for or against) of PSF voting members
(more than 50% + 1) that outcome passes. And, for completeness, if both of the
previous statements are true but with opposite outcomes the core team members
win.
The main reason for having this limitation is that it allows decisions to be
made (albeit with effort) if there is no functioning Council of Elders at
any particular moment.
Council composition
The council should not be too big nor too small, probably somewhere between
5 and 10 members. There is no reason to fix this number.
The members should be knowledgeable about Python and the
Python community, and willing to be impartial while operating as part of
the council. Council members may be core developers but this is not a requirement.
Everyone in the community should feel represented by the council so it would
be good if the council is diverse:
scientists and technologists,
progressives and conservatives (with respect to the Python language),
people with different cultural backgrounds, genders, age,
etc
But: this should hold for the council as a whole. Individual council members
should not be seen as representing a specific interest group.
Council membership
Because the powers of the council are purely procedural it is probably good
if members serve for a fairly long time. However, it would still be good if
the council were reinstated regularly. Therefore, the suggestion is to have the council
operate under the PSF umbrella and be subject to a yearly vote of confidence. This
vote is for the council as a whole: people who vote against the council should be
aware that they are basically saying “Python is better off without a Council of Elders
than with you lot”.
The council normally co-opts new Elders, probably because an individual is seen
to have knowledge about a specific part of the Python community (or language) in which
the council is lacking. Everyone is free to suggest new Elders to the council
(including themselves) but the council is free to ignore the suggestion.
Council members should be free to retire at any time. An individual council
member can be retired by a unanimous vote by the rest of the council.
There is an emergency brake procedure to get rid of a non-functioning council.
A single Elder or a group of 10 core developers or PSF voting members can ask for
an immediate reinstating vote of the council as a whole (presumably with the
intention that the council lose their mandate). If this vote has been requested by an
Elder, that individual immediately loses their council position, independent of
the outcome of the vote. If the vote has been requested by community members and
the council is reinstated, this procedure cannot be invoked again for a year.
If there is no functioning council (the current initial situation, or after the
council has lost its mandate after a vote of no confidence), an initial
council must be selected. Through the normal communication channels (Discourse,
mailing lists) members can be suggested by anyone (including themselves). After
discussion amongst the nominees and in the whole community, a group of at least
three individuals should emerge that asks for an initial vote to instate them
as Council of Elders. The intention of this procedure is that by the time such
a group of individuals emerges and asks for a vote of confidence they expect an
overwhelming mandate.
Discussion
This PEP does not handle other roles of the BDFL, only the voting process.
Most importantly, the direction of Python in the long term is not expected
to be handled by the Council of Elders. This falls to the community as a whole
(or to individual members of the community, most likely).
There is also the role of figurehead or spokesperson to represent Python and
the Python community to the outside world. Again, this is not a role that
should be handled by the Council of Elders, in my opinion, but by some
other person or body.
Note that this proposal most likely favors conservatism over progression. Or, at least, the
danger of it leading to stagnation is bigger than the danger of it leading
to reckless blazing ahead into unknown territories. So: we should realise
that it is unlikely that a PEP like PEP 572 will pass if this model is in
place.
Copyright
This document has been placed in the public domain.
| Rejected | PEP 8014 – The Commons Governance Model | Informational | This PEP proposes a governance model with as few procedures, defined terms and
percentages as possible. It may also be called The Anarchist Governance Model
but uses Commons for now because of possible negative connotations of the
term Anarchist to some audiences. |
PEP 8015 – Organization of the Python community
Author:
Victor Stinner
Status:
Rejected
Type:
Informational
Topic:
Governance
Created:
04-Oct-2018
Table of Contents
Abstract
PEP Rejection
Rationale
Common Guidelines
Community Organization
Python Users
Python Contributors
Python Teams
Python Core Developers
Promote a contributor as core developer
Python Steering Committee
Python Steering Committee Roles
Election of Python Steering Committee Members
Election Creating the Python Steering Committee Members
Special Case: Steering Committee Members And PEPs
PSF Code of Conduct Workgroup
Charter
Special Case: Ban a core developer
PEP process
Vote on a PEP
Lack of Decision
Change this PEP
Annex: Summary on votes
Annex: Examples of Python Teams
Packaging Team
IDLE Team
Mentorship Team
Documentation Team
Security Team
Performance Team
Asynchronous Programming Team
Type Hints Team
Version History
Copyright
Abstract
This PEP formalizes the current organization of the Python community and
proposes 3 main changes:
Formalize the existing concept of “Python teams”;
Give more autonomy to Python teams;
Replace the BDFL (Guido van Rossum) with a new “Python Steering
Committee” of 5 members which has limited roles: basically decide how
decisions are taken, but don’t take decisions.
PEPs are approved by a PEP delegate or by a vote (reserved to core
developers, requires a >= 2/3 majority).
PEP Rejection
PEP 8015 was rejected by a core developer vote
described in PEP 8001 on Monday, December 17, 2018.
PEP 8016 and the governance model it describes were chosen instead.
Rationale
This PEP describes the organization of the whole Python development
community, from Python users to the Python Steering Committee.
Describing all groups and all roles in the same document helps to make
the organization more consistent.
The number of governance changes is minimized to get a smooth transition
from the old BDFL organization to the new Steering Committee
organization.
One key design of the organization is to avoid decision bottlenecks.
Discussions and decisions are distributed into Python teams where
experts in each topic can be found. The expectation is smoother
discussions on PEPs: fewer people with better knowledge of the topic.
Previously, most decisions have been taken by the Benevolent
Dictator For Life (BDFL), Guido van Rossum. The growing popularity of
Python increased the pressure on a single person. The proposed
organization distributes decisions and responsibilities to reduce the
pressure and avoid wearing any individual down.
To keep most of the decision power within the hands of the community,
the Python Steering Committee has very limited roles. The idea is to reduce the risk
that a group of people or companies “takes over” the Python project
through just a couple individuals. The project must remain autonomous
and open to everybody.
The most sensitive PEPs are decided by democracy: a vote reserved to
core developers; see the PEP process section below for the voting
method.
Common Guidelines
The Python community is open to everyone.
Members must respect the Python Community Code of Conduct which ensures that
discussions remain constructive and that everybody feels welcomed.
Python is and will remain an autonomous project.
People with decision power should reflect the diversity of Python’s users
and contributors.
Community Organization
Right now, there are different groups of people involved in the Python
project. The more involved you are, the more decision power you get. It
is important that the people admitted to the innermost group are the most
trusted ones.
This PEP formalizes the following groups:
Python Users
Python Contributors
Python Teams Members
Python Core Developers
Python Steering Committee Members
PSF Code of Conduct Workgroup
Python Users
This is the largest group: anyone who uses Python.
Python Contributors
Once a Python user sends an email to a Python mailing list, comments on
the Python bug tracker, proposes or reviews a Python change, they become
a Python contributor.
Python Teams
Python became too big to work as a single team anymore, so people have
naturally grouped themselves into teams to work more closely on
specific topics, sometimes called “Special Interest Groups” (SIGs).
When enough developers are interested in a specific topic, they can
create a new team. Usually, the main action is to ask the Python
postmaster to create a new “SIG” mailing list, but the team can choose
to use a different communication channel.
Team members are Python contributors and Python core developers. The
team is self-organized and is responsible for selecting who can join the
team and how.
Team members can get the bug triage permission on the team bug tracker
component. The more involved in a team you are, the more decisions power
and responsibilities you get.
A team might become allowed to decide on their own PEPs, but only the
Python Steering Committee can allow that (and it has the power to revoke
it as well). Such a case is exceptional; currently a single team has
such permission: the Packaging Team.
See Annex: Examples of Python Teams.
Python Core Developers
One restricted definition of a core developer is the ability to merge a
change (anywhere in the code) and have the bug triage permission
(on all bug tracker components).
Core developers are developers who have proven to have the required skills to
decide if a change can be approved or must be rejected, but also (and
this is more important) what changes should not be made. Python has a
long history, big constraints on backward compatibility, and high quality
standards (ex: changes require new tests). For these reasons, becoming
a core developer can take several months or longer.
Becoming a core developer means more responsibilities. For example, if a
developer merges a change, they become responsible for regressions and
for the maintenance of that modified code.
Core developers are expected to be exemplary when it comes to the Code
of Conduct. They are encouraged to mentor contributors.
Promote a contributor as core developer
Once an existing core developer considers that a contributor is ready to
join the core group, to become a core developer, that core developer
asks the contributor if they would like to become a core developer. If
the contributor is interested in such new responsibilities, a vote is
organized.
The vote is reserved to core developers, is public, and is open for 1
week. Usually the core developer who proposes the promotion has to
describe the work and skills of the candidate in the description of the
vote. A contributor is only promoted if two thirds (>= 2/3) of
votes approve (“+1”) the promotion. Only “+1” and “-1” votes are
counted; other votes (ex: null, “-0”, “+0.5”) are ignored.
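For illustration only, a minimal sketch of this tally rule (hypothetical helper name, not part of the PEP):
def vote_passes(ballots, threshold=2 / 3):
    # Only "+1" and "-1" ballots are counted; "0", "+0.5", etc. are ignored.
    plus = sum(1 for b in ballots if b == "+1")
    minus = sum(1 for b in ballots if b == "-1")
    counted = plus + minus
    return counted > 0 and plus / counted >= threshold

# vote_passes(["+1", "+1", "-1", "0"]) -> True (2 of 3 counted ballots)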
If the candidate is promoted, usually they get a mentor for 1 month to
help them to handle new responsibilities.
If the candidate is not promoted, a new vote can be organized later,
when the candidate gets the missing skills, for example 6 months later.
Python Steering Committee
The Python Steering Committee is made of the most trusted core
developers since it has the most decision power. The roles of this group
are strictly limited to ensure that Python keeps its autonomy and
remains open.
The Python Steering Committee is composed of 5 members. They are elected
for 3 years and 1/3 is replaced every year (first year: 1, second year:
2, third year: 2). This way, a member will stay for one full Python
release and the committee composition will be updated frequently. A
committee member can be a candidate for the seat they are leaving.
There are no term limits.
Committee members must be Python core developers. It is important that
the members of the committee reflect the diversity of Python’s users and
contributors. A small step to ensure that is to enforce that only 2
members (strictly less than 50% of the 5 members) can work for the same
employer (same company or subsidiaries of the same company).
The size of 5 members has been chosen for member diversity and to
ensure that the committee can continue to work even if a member becomes
unavailable for an unknown duration.
Python Steering Committee Roles
Python Steering Committee roles:
Decide how a PEP is approved (or rejected or deferred).
Grant or revoke permissions to a Python team. For example, allow
a team to give the bug triage permission (on the team component) to a
contributor.
To decide how a PEP is approved (or rejected or deferred), there are two
options:
The committee elects a PEP delegate (previously known as “BDFL-delegate”):
a core developer who will take the final decision for the specific
PEP. The committee selects the PEP delegate, who can be proposed by the
Python team where the PEP is discussed.
The committee can organize a vote on the PEP, see PEP process
for the vote organization. The committee decides when the vote is
organized. A vote is preferred for changes affecting all Python users,
like language changes.
The committee keeps the “vision” and consistency of Python. It also makes
sure that important features reach completion. Their ability to pick PEP
delegates is meant to help them to achieve that goal.
Election of Python Steering Committee Members
The vote is organized by the Steering Committee. It is announced 3 weeks
in advance: candidates have to apply during this period. The vote is
reserved to core developers and is open for 1 week. To avoid
self-censorship, the vote uses secret ballots: this avoids the risk of
hostility from someone who may get more power (if they get elected).
The vote uses the Schulze/Beatpath/CSSD variant of the Condorcet
method using an
online service like Condorcet Internet Voting Service (CIVS). This voting method reduces the risk of
a tie. It also produces a ranking of all candidates, needed for the
creation of the committee.
In case of a tie, a new vote is organized immediately between candidates
involved in the tie using the same voting method and also during 1 week.
If the second vote leads to a tie again, the current Steering Committee
is responsible for selecting the elected member(s).
If a committee member steps down, a new vote is organized to replace
them.
If the situation of a committee member changes in a way that no longer
satisfies the committee constraint (ex: they move to the same company as
two other committee members), they have to resign. If the employer of a
member is acquired by the employer of two other members, the member with
the mandate ending earlier has to resign once the acquisition completes.
Election Creating the Python Steering Committee Members
To bootstrap the process, 5 members are elected at the committee
creation. The vote follows the same rules as regular committee votes,
except that the election needs 5 members, and the vote is organized by
the PSF Board.
In a council election, if 3 of the top 5 vote-getters work for the
same employer, then whichever of them ranked lowest is disqualified
and the 6th-ranking candidate moves up into 5th place; this is
repeated until a valid council is formed.
In case of a tie, a second vote is organized immediately between
candidates involved in the tie and following candidates to fill the
remaining seats. The vote follows the same rules as the regular
committee vote. If the second vote still results in a tie, the PSF Board
is responsible for electing members and deciding their position in the vote
result.
The order in the vote result must be unique for elected members: #1 and
#2 are elected for 3 years, #3 and #4 for 2 years, and #5 for 1 year.
Example of vote result with a tie:
A
B
C
D
E, F
G
…
The first 4 candidates (A, B, C and D) are elected immediately. If E
works for the same employer as two other elected members, F is also
elected. Otherwise, a second vote is organized for the 5th seat between
E and F.
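As a minimal sketch of the term assignment described above (hypothetical helper, assuming the #1/#2 = 3 years, #3/#4 = 2 years, #5 = 1 year split):
def assign_terms(ranked_members):
    # ranked_members: the five elected members, ordered by vote result
    term_years = [3, 3, 2, 2, 1]
    return dict(zip(ranked_members, term_years))

# assign_terms(["#1", "#2", "#3", "#4", "#5"])
# -> {"#1": 3, "#2": 3, "#3": 2, "#4": 2, "#5": 1}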
Special Case: Steering Committee Members And PEPs
A committee member can be a PEP delegate.
A committee member can propose a PEP, but cannot be the PEP delegate of
their own PEP.
When the committee decides that a PEP must be voted on, committee members
can vote since they are also core developers, but they don’t have more
power than any other core developer.
PSF Code of Conduct Workgroup
Charter
The workgroup’s purpose is to foster a diverse and inclusive Python
community by enforcing the PSF code of conduct, along with providing
guidance and recommendations to the Python community on codes of
conduct, that supports the PSF mission of “ongoing development of
Python-related technology and educational resources”.
We work toward this common goal in three ways:
Review, revise, and advise on policies relating to the PSF code of
conduct and other communities that the PSF supports. This includes
any #python chat community & python.org email list under PSF
jurisdiction.
Create a standard set of codes of conduct and supporting documents for
multiple channels of interaction such as, but not limited to,
conferences, mailing lists, slack/IRC, code repositories, and more.
Develop training materials and other processes to support Python
community organizers in implementing and enforcing the code of
conduct.
The organization of this workgroup is defined by the
ConductWG Charter.
Special Case: Ban a core developer
As any other member of the Python community, the PSF Code of Conduct
Workgroup can ban a core developer for a limited amount of time. In this
case, the core developer immediately loses their core developer status.
Core developers are expected to be exemplary when it comes to the Code
of Conduct.
In general, a ban is only the last resort action when all other options
have been exhausted.
At the end of the ban, the developer is allowed to contribute again as a
regular contributor.
If the developer changes their behavior, another core developer can
organize a new vote to propose the developer for promotion to core
developer. The vote follows the same process than for any other Python
contributor.
PEP process
There are 2 main roles on PEPs:
PEP Authors
PEP Delegate
PEP Authors do their best to write high-quality PEPs.
The PEP delegate is responsible for helping the authors enhance their PEP
and is the one who takes the final decision (accept, reject or defer the
PEP). They can also help to guide the discussion.
If no decision is taken, the authors can propose the PEP again later
(ex: one year later), if possible with new data to motivate the change. A
PEP Delegate can also choose to mark a PEP as “Deferred” so as not to reject
the PEP and to encourage reopening the discussion later.
PEPs specific to a Python team are discussed on the team mailing list.
PEPs impacting all Python developers (like language changes) must be
discussed on the python-dev mailing list.
Vote on a PEP
When the Python Steering Committee decides that a PEP needs a wider
approval, a vote is organized.
The vote is reserved to core developers, is public, is announced 1 week
in advance, and is open for 1 week. The PEP can still be updated during
the 1 week notice, but must not be modified during the vote. Such a vote
happens on
the mailing list where the PEP has been discussed. The committee decides
when the vote is organized. The PEP must have been discussed for a
reasonable amount of time before it is put to vote.
A PEP is only approved if two thirds (>= 2/3) of votes approve
(“+1”) the PEP. Only “+1” and “-1” votes are counted; other votes
(ex: null, “-0”, “+0.5”) are ignored.
A PEP can only be approved or rejected by a vote, not be deferred.
Lack of Decision
If a discussion fails to reach a consensus, if the Python Steering
Committee fails to choose a PEP delegate, or if a PEP delegate fails to
take a decision, the obvious risk is that Python fails to evolve.
That’s fine. Sometimes, doing nothing is the wisest choice.
Change this PEP
The first version of this PEP has been written after Guido van Rossum
decided to resign from his role of BDFL in July 2018. Before this PEP,
the roles of Python community members have never been formalized. It is
difficult to design a perfect organization at the first attempt. This
PEP can be updated in the future to adjust the organization, specify how
to handle corner cases and fix mistakes.
Any change to this PEP must be validated by a vote. The vote is
announced 3 weeks in advance, is reserved to core developers, happens in
public on the python-committers mailing list, and is open for 1 week.
The proposed PEP change can still be updated during the 3 weeks notice,
but must not be modified during the vote.
The change is only approved if four fifths (>= 4/5) of votes approve
(“+1”) the change. Only “+1” and “-1” votes are counted; other votes
(ex: null, “-0”, “+0.5”) are ignored.
Annex: Summary on votes
Vote                 Notice   Open    Ballot   Method
Promote contributor  none     1 week  public   >= 2/3 majority
PEP                  1 week   1 week  public   >= 2/3 majority
Change this PEP      3 weeks  1 week  public   >= 4/5 majority
Steering Committee   3 weeks  1 week  private  Condorcet (Schulze/Beatpath/CSSD)
All these votes are reserved to core developers.
Annex: Examples of Python Teams
Below are examples of some Python teams (the list will not be kept up to
date in this PEP).
Packaging Team
The packaging team runs its own PEP category and can approve (or reject)
their own PEPs.
Website: packaging.python.org
Mailing list: distutils-sig
Bug tracker component: Distutils
Example of members: Paul Moore, Alyssa Coghlan, Donald Stufft
Stdlib module: distutils
Current PEP delegate: Paul Moore
IDLE Team
IDLE is a special case in the Python standard library: it’s a whole
application, not just a module. For this reason, it has been decided
that the code will be the same in all Python stable branches (whereas
the stdlib diverges in newer stable branches).
Bug tracker component: IDLE
Example of members: Terry Reedy, Cheryl Sabella, Serhiy Storchaka
Stdlib module: idlelib
Mentorship Team
Becoming a core developer is a long and slow process. Mentorship is an
efficient way to train contributors as future core developers and build
a trust relationship.
Websites:
https://www.python.org/dev/core-mentorship/
https://devguide.python.org/
Repository: https://github.com/python/devguide
Mailing list: core-mentorship (private archives)
Example of members: Guido van Rossum, Carol Willing, Victor Stinner
Note: The group is not responsible for promoting core developers.
Documentation Team
Mailing list: doc-sig
Bug tracker component: Documentation
GitHub tag: type-doc
Example of members: Julien Palard, INADA Naoki, Raymond Hettinger.
The team also manages documentation translations.
See also the Mentorship team which maintains the “Devguide”.
Security Team
Website: https://www.python.org/news/security/
Mailing lists:
[email protected] (to report vulnerabilities)
security-sig
(public list)
Stdlib modules: hashlib, secrets and ssl
Example of members: Christian Heimes, Benjamin Peterson
The [email protected] mailing list is invite-only: only members of
the “Python Security Response Team” (PSRT) can read emails and reply;
whereas security-sig is public.
Note: This team rarely proposes PEPs.
Performance Team
Website: https://speed.python.org/
Mailing list: speed
Repositories:
https://github.com/python/performance
https://github.com/tobami/codespeed
Bug tracker type: Performance
GitHub label: type-performance
Stdlib modules: cProfile, profile, pstats and timeit
Example of members: Victor Stinner, INADA Naoki, Serhiy Storchaka
Usually PEPs involving performance impact everybody and so are discussed
on the python-dev mailing list, rather than the speed mailing list.
Asynchronous Programming Team
Website: https://docs.python.org/dev/library/asyncio.html
Mailing list: async-sig
Bug tracker component: asyncio
GitHub label: expert-asyncio
Stdlib modules: asyncio and contextvars
Example of members: Andrew Svetlov, Yury Selivanov
PEPs only modifying asyncio and contextvars can be discussed on
the async-sig mailing list, whereas changes impacting the Python
language must be discussed on python-dev.
Type Hints Team
Website: http://mypy-lang.org/
Repository: https://github.com/python/typing
GitHub label for mypy project: topic-pep-484
Stdlib modules: typing
Example of members: Guido van Rossum, Ivan Levkivskyi,
Jukka Lehtosalo, Łukasz Langa, Mark Shannon.
Note: There is a backport for Python 3.6 and older, see
typing on PyPI.
Version History
History of this PEP:
Version 7: Adjust the Steering Committee
The Steering Committee is now made of 5 people instead of 3.
There are no term limits (instead of a limit of 2 mandates:
6 years in total).
A committee member can now be a PEP delegate.
Version 6: Adjust votes
Specify the Condorcet method: use Schulze/Beatpath/CSSD variant to
elect Python Steering Committee members. Specify how to deal with
tie and the constraint on the employers.
Vote on promoting a contributor and on PEPs now requires >= 2/3
rather than 50%+1.
Vote on changing this PEP now requires >= 4/5 rather than
50%+1.
Explain how to deal with a company acquisition.
Version 5: Election of Python Steering Committee Members uses secret
ballots
Version 4:
Adjust votes: open for 1 week instead of 1 month, and announced
in advance.
Rename the “Python Core Board” to the “Python Steering Committee”;
Clarify that this committee doesn’t approve PEPs and that committee
members cannot hold more than 2 mandates;
Add the “Type Hints” team to the annex.
Version 3: Add “Special Case: Ban a core developer” and “How to update
this PEP” sections.
Version 2: Rename the “Python board” to the “Python Core Board”,
to avoid confusion with the PSF Board.
Version 1: First version posted to python-committers and
discuss.python.org.
Copyright
This document has been placed in the public domain.
| Rejected | PEP 8015 – Organization of the Python community | Informational | This PEP formalizes the current organization of the Python community and
proposes 3 main changes: |
PEP 8016 – The Steering Council Model
Author:
Nathaniel J. Smith, Donald Stufft
Status:
Accepted
Type:
Informational
Topic:
Governance
Created:
01-Nov-2018
Table of Contents
Note
Abstract
PEP Acceptance
Rationale
Specification
The steering council
Composition
Mandate
Powers
Electing the council
Term
Vacancies
Conflicts of interest
Ejecting core team members
Vote of no confidence
The core team
Role
Prerogatives
Membership
Changing this document
TODO
Acknowledgements
Copyright
Note
This PEP is retained for historical purposes, but the official
governance document is now PEP 13.
Abstract
This PEP proposes a model of Python governance based around a steering
council. The council has broad authority, which they seek to exercise
as rarely as possible; instead, they use this power to establish
standard processes, like those proposed in the other 801x-series PEPs.
This follows the general philosophy that it’s better to split up large
changes into a series of small changes that can be reviewed
independently: instead of trying to do everything in one PEP, we focus
on providing a minimal-but-solid foundation for further governance
decisions.
PEP Acceptance
PEP 8016 was accepted by a core developer vote
described in PEP 8001 on Monday, December 17, 2018.
Rationale
The main goals of this proposal are:
Be boring: We’re not experts in governance, and we don’t think
Python is a good place to experiment with new and untried governance
models. So this proposal sticks to mature, well-known, previously
tested processes as much as possible. The high-level approach of a
mostly-hands-off council is arguably the most common across large
successful F/OSS projects, and low-level details are derived
directly from Django’s governance.
Be simple: We’ve attempted to pare things down to the minimum
needed to make this workable: the council, the core team (who elect
the council), and the process for changing the document. The goal is
Minimum Viable Governance.
Be comprehensive: But for the things we need to define, we’ve
tried to make sure to cover all the bases, because we don’t want to
go through this kind of crisis again. Having a clear and unambiguous
set of rules also helps minimize confusion and resentment.
Be flexible and light-weight: We know that it will take time and
experimentation to find the best processes for working together. By
keeping this document as minimal as possible, we keep maximal
flexibility for adjusting things later, while minimizing the need
for heavy-weight and anxiety-provoking processes like whole-project
votes.
A number of details were discussed in this Discourse thread,
with further discussion in this thread. These
may be useful to anyone trying to understand the rationale for various
minor decisions.
Specification
The steering council
Composition
The steering council is a 5-person committee.
Mandate
The steering council shall work to:
Maintain the quality and stability of the Python language and
CPython interpreter,
Make contributing as accessible, inclusive, and sustainable as
possible,
Formalize and maintain the relationship between the core team and
the PSF,
Establish appropriate decision-making processes for PEPs,
Seek consensus among contributors and the core team before acting in
a formal capacity,
Act as a “court of final appeal” for decisions where all other
methods have failed.
Powers
The council has broad authority to make decisions about the project.
For example, they can:
Accept or reject PEPs
Enforce or update the project’s code of conduct
Work with the PSF to manage any project assets
Delegate parts of their authority to other subcommittees or
processes
However, they cannot modify this PEP, or affect the membership of the
core team, except via the mechanisms specified in this PEP.
The council should look for ways to use these powers as little as
possible. Instead of voting, it’s better to seek consensus. Instead of
ruling on individual PEPs, it’s better to define a standard process
for PEP decision making (for example, by accepting one of the other
801x series of PEPs). It’s better to establish a Code of Conduct
committee than to rule on individual cases. And so on.
To use its powers, the council votes. Every council member must either
vote or explicitly abstain. Members with conflicts of interest on a
particular vote must abstain. Passing requires support from a majority
of non-abstaining council members.
Whenever possible, the council’s deliberations and votes shall be held
in public.
Electing the council
A council election consists of two phases:
Phase 1: Candidates advertise their interest in serving. Candidates
must be nominated by a core team member. Self-nominations are
allowed.
Phase 2: Each core team member can vote for zero to five of the
candidates. Voting is performed anonymously. Candidates are ranked
by the total number of votes they receive. If a tie occurs, it may
be resolved by mutual agreement among the candidates, or else the
winner will be chosen at random.
Each phase lasts one to two weeks, at the outgoing council’s discretion.
For the initial election, both phases will last two weeks.
The election process is managed by a returns officer nominated by the
outgoing steering council. For the initial election, the returns
officer will be nominated by the PSF Executive Director.
The council should ideally reflect the diversity of Python
contributors and users, and core team members are encouraged to vote
accordingly.
Term
A new council is elected after each feature release. Each council’s
term runs from when their election results are finalized until the
next council’s term starts. There are no term limits.
Vacancies
Council members may resign their position at any time.
Whenever there is a vacancy during the regular council term, the
council may vote to appoint a replacement to serve out the rest of the
term.
If a council member drops out of touch and cannot be contacted for a
month or longer, then the rest of the council may vote to replace
them.
Conflicts of interest
While we trust council members to act in the best interests of Python
rather than themselves or their employers, the mere appearance of any
one company dominating Python development could itself be harmful and
erode trust. In order to avoid any appearance of conflict of interest,
at most 2 members of the council can work for any single employer.
In a council election, if 3 of the top 5 vote-getters work for the
same employer, then whichever of them ranked lowest is disqualified
and the 6th-ranking candidate moves up into 5th place; this is
repeated until a valid council is formed.
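A minimal sketch of this disqualification rule (hypothetical function and data shapes, not part of the governance text):
from collections import Counter

def form_council(ranked_candidates, size=5, max_per_employer=2):
    # ranked_candidates: (name, employer) pairs, ordered by votes received.
    # Walk down the ranking; skip anyone whose employer already holds
    # max_per_employer seats, so the next-ranked candidate moves up.
    council, per_employer = [], Counter()
    for name, employer in ranked_candidates:
        if per_employer[employer] >= max_per_employer:
            continue  # disqualified under the conflict-of-interest rule
        council.append(name)
        per_employer[employer] += 1
        if len(council) == size:
            break
    return council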
During a council term, if changing circumstances cause this rule to be
broken (for instance, due to a council member changing employment),
then one or more council members must resign to remedy the issue, and
the resulting vacancies can then be filled as normal.
Ejecting core team members
In exceptional circumstances, it may be necessary to remove someone
from the core team against their will. (For example: egregious and
ongoing code of conduct violations.) This can be accomplished by a
steering council vote, but unlike other steering council votes, this
requires at least a two-thirds majority. With 5 members voting, this
means that a 3:2 vote is insufficient; 4:1 in favor is the minimum
required for such a vote to succeed. In addition, this is the one
power of the steering council which cannot be delegated, and this
power cannot be used while a vote of no confidence is in process.
If the ejected core team member is also on the steering council, then
they are removed from the steering council as well.
Vote of no confidence
In exceptional circumstances, the core team may remove a sitting
council member, or the entire council, via a vote of no confidence.
A no-confidence vote is triggered when a core team member calls for
one publicly on an appropriate project communication channel, and
another core team member seconds the proposal.
The vote lasts for two weeks. Core team members vote for or against.
If at least two thirds of voters express a lack of confidence, then
the vote succeeds.
There are two forms of no-confidence votes: those targeting a single
member, and those targeting the council as a whole. The initial call
for a no-confidence vote must specify which type is intended. If a
single-member vote succeeds, then that member is removed from the
council and the resulting vacancy can be handled in the usual way. If
a whole-council vote succeeds, the council is dissolved and a new
council election is triggered immediately.
The core team
Role
The core team is the group of trusted volunteers who manage Python.
They assume many roles required to achieve the project’s goals,
especially those that require a high level of trust. They make the
decisions that shape the future of the project.
Core team members are expected to act as role models for the community
and custodians of the project, on behalf of the community and all
those who rely on Python.
They will intervene, where necessary, in online discussions or at
official Python events on the rare occasions that a situation arises
that requires intervention.
They have authority over the Python Project infrastructure, including
the Python Project website itself, the Python GitHub organization and
repositories, the bug tracker, the mailing lists, IRC channels, etc.
Prerogatives
Core team members may participate in formal votes, typically to nominate new
team members and to elect the steering council.
Membership
Python core team members demonstrate:
a good grasp of the philosophy of the Python Project
a solid track record of being constructive and helpful
significant contributions to the project’s goals, in any form
willingness to dedicate some time to improving Python
As the project matures, contributions go beyond code. Here’s an
incomplete list of areas where contributions may be considered for
joining the core team, in no particular order:
Working on community management and outreach
Providing support on the mailing lists and on IRC
Triaging tickets
Writing patches (code, docs, or tests)
Reviewing patches (code, docs, or tests)
Participating in design decisions
Providing expertise in a particular domain (security, i18n, etc.)
Managing the continuous integration infrastructure
Managing the servers (website, tracker, documentation, etc.)
Maintaining related projects (alternative interpreters, core
infrastructure like packaging, etc.)
Creating visual designs
Core team membership acknowledges sustained and valuable efforts that
align well with the philosophy and the goals of the Python project.
It is granted by receiving at least two-thirds positive votes in a
core team vote and no veto by the steering council.
Core team members are always looking for promising contributors,
teaching them how the project is managed, and submitting their names
to the core team’s vote when they’re ready.
There’s no time limit on core team membership. However, in order to
provide the general public with a reasonable idea of how many people
maintain Python, core team members who have stopped contributing are
encouraged to declare themselves as “inactive”. Those who haven’t made
any non-trivial contribution in two years may be asked to move
themselves to this category, and moved there if they don’t respond. To
record and honor their contributions, inactive team members will
continue to be listed alongside active core team members; and, if they
later resume contributing, they can switch back to active status at
will. While someone is in inactive status, though, they lose their
active privileges like voting or nominating for the steering council,
and commit access.
The initial active core team members will consist of everyone
currently listed in the “Python core” team on GitHub, and the
initial inactive members will consist of everyone else who has been a
committer in the past.
Changing this document
Changes to this document require at least a two-thirds majority of
votes cast in a core team vote.
TODO
Lots of people contributed helpful suggestions and feedback; we
should check if they’re comfortable being added as co-authors
It looks like Aymeric Augustin wrote the whole Django doc, so
presumably holds copyright; maybe we should ask him if he’s willing
to release it into the public domain so our copyright statement
below can be simpler.
Acknowledgements
Substantial text was copied shamelessly from The Django project’s
governance document.
Copyright
Text copied from Django used under their license. The rest of
this document has been placed in the public domain.
| Accepted | PEP 8016 – The Steering Council Model | Informational | This PEP proposes a model of Python governance based around a steering
council. The council has broad authority, which they seek to exercise
as rarely as possible; instead, they use this power to establish
standard processes, like those proposed in the other 801x-series PEPs.
This follows the general philosophy that it’s better to split up large
changes into a series of small changes that can be reviewed
independently: instead of trying to do everything in one PEP, we focus
on providing a minimal-but-solid foundation for further governance
decisions. |
PEP 8100 – January 2019 Steering Council election
Author:
Nathaniel J. Smith <njs at pobox.com>, Ee Durbin <ee at python.org>
Status:
Final
Type:
Informational
Topic:
Governance
Created:
03-Jan-2019
Table of Contents
Abstract
Returns officer
Schedule
Candidates
Voter Roll
Election Implementation
Configuration
Questions
Question 1
Results
Copyright
Complete Voter Roll
Active Python core developers
Abstract
This document describes the schedule and other details of the January
2019 election for the Python steering council, as specified in
PEP 13. This is the first steering council election.
Returns officer
In future elections, the returns officer will be appointed by the
outgoing steering council. Since this is the first election, we have
no outgoing steering council, and PEP 13 says that the returns officer
is instead appointed by the PSF Executive Director, Ewa Jodlowska.
She appointed Ee Durbin.
Schedule
There will be a two-week nomination period, followed by a two-week
vote.
The nomination period is: January 7, 2019 through January 20, 2019
The voting period is: January 21, 2019 12:00 UTC through February 4, 2019 12:00
UTC (The end of February 3, 2019 Anywhere on Earth)
Candidates
Candidates must be nominated by a core team member. If the candidate
is a core team member, they may nominate themselves.
Once the nomination period opens, candidates will be listed here:
Brett Cannon
Alyssa (Nick) Coghlan
Barry Warsaw
Guido van Rossum
Victor Stinner
Yury Selivanov
David Mertz
Łukasz Langa
Benjamin Peterson
Mariatta
Carol Willing
Emily Morehouse
Peter Wang
Donald Stufft
Travis Oliphant
Kushal Das
Gregory P. Smith
Voter Roll
All active Python core team members are eligible to vote.
Ballots will be distributed based on The Python Voter Roll for this
election
[1].
While this file is not public as it contains private email addresses, the
Complete Voter Roll by name is available.
Election Implementation
The election will be conducted using the Helios Voting Service.
Configuration
Short name: 2019-python-steering-committee
Name: 2019 Python Steering Committee Election
Description: Election for the Python steering council, as specified in PEP 13. This is the first steering council election.
type: Election
Use voter aliases: [X]
Randomize answer order: [X]
Private: [X]
Help Email Address: [email protected]
Voting starts at: January 21, 2019 12:00 UTC
Voting ends at: February 4, 2019 12:00 UTC
This will create an election in which:
Voting is not open to the public, only those on the Voter Roll may
participate. Ballots will be emailed when voting starts.
Candidates are presented in random order, to help avoid bias.
Voter identities and ballots are protected against cryptographic advances.
Questions
Question 1
Select between 0 and 5 answers. Result Type: absolute
Question: Select candidates for the Python Steering Council
Answer #1 - #N: Candidates from Candidates_ Section
Results
Of the 96 eligible voters, 69 cast ballots.
The top five vote-getters are:
Barry Warsaw
Brett Cannon
Carol Willing
Guido van Rossum
Alyssa (Nick) Coghlan
No conflicts of interest as defined in PEP 13 were observed.
The full vote counts are as follows:
Candidate               Votes Received
Guido van Rossum        45
Brett Cannon            44
Carol Willing           33
Barry Warsaw            31
Alyssa (Nick) Coghlan   25
Benjamin Peterson       22
Łukasz Langa            21
Victor Stinner          21
Mariatta                20
Emily Morehouse         18
Yury Selivanov          15
Donald Stufft           11
Peter Wang              10
Travis Oliphant         8
Kushal Das              7
Gregory P. Smith        6
David Mertz             3
Copyright
This document has been placed in the public domain.
Complete Voter Roll
Active Python core developers
Alex Gaynor
Alex Martelli
Alexander Belopolsky
Alexandre Vassalotti
Amaury Forgeot d'Arc
Andrew Kuchling
Andrew Svetlov
Antoine Pitrou
Armin Ronacher
Barry Warsaw
Benjamin Peterson
Berker Peksag
Brett Cannon
Brian Curtin
Carol Willing
Chris Jerdonek
Chris Withers
Christian Heimes
David Malcolm
David Wolever
Davin Potts
Dino Viehland
Donald Stufft
Doug Hellmann
Eli Bendersky
Emily Morehouse
Éric Araujo
Eric Snow
Eric V. Smith
Ethan Furman
Ezio Melotti
Facundo Batista
Fred Drake
Georg Brandl
Giampaolo Rodola'
Gregory P. Smith
Guido van Rossum
Hyeshik Chang
Hynek Schlawack
INADA Naoki
Ivan Levkivskyi
Jack Diederich
Jack Jansen
Jason R. Coombs
Jeff Hardy
Jeremy Hylton
Jesús Cea
Julien Palard
Kurt B. Kaiser
Kushal Das
Larry Hastings
Lars Gustäbel
Lisa Roach
Łukasz Langa
Marc-Andre Lemburg
Mariatta
Mark Dickinson
Mark Hammond
Mark Shannon
Martin Panter
Matthias Klose
Meador Inge
Michael Hudson-Doyle
Nathaniel J. Smith
Ned Deily
Neil Schemenauer
Alyssa Coghlan
Pablo Galindo
Paul Moore
Petr Viktorin
Petri Lehtinen
Philip Jenvey
R. David Murray
Raymond Hettinger
Robert Collins
Ronald Oussoren
Sandro Tosi
Senthil Kumaran
Serhiy Storchaka
Sjoerd Mullender
Stefan Krah
Steve Dower
Steven Daprano
T. Wouters
Tal Einat
Terry Jan Reedy
Thomas Heller
Tim Golden
Tim Peters
Trent Nelson
Victor Stinner
Vinay Sajip
Walter Dörwald
Xiang Zhang
Yury Selivanov
Zachary Ware
[1]
This repository is private and accessible only to Python Core
Developers, administrators, and Python Software Foundation Staff as it
contains personal email addresses.
| Final | PEP 8100 – January 2019 Steering Council election | Informational | This document describes the schedule and other details of the January
2019 election for the Python steering council, as specified in
PEP 13. This is the first steering council election. |