PEP 5 – Guidelines for Language Evolution
Author:
Paul Prescod <paul at prescod.net>
Status:
Superseded
Type:
Process
Created:
26-Oct-2000
Post-History:
Superseded-By:
387
Table of Contents
Abstract
Implementation Details
Scope
Steps For Introducing Backwards-Incompatible Features
Abstract
In the natural evolution of programming languages it is sometimes
necessary to make changes that modify the behavior of older programs.
This PEP proposes a policy for implementing these changes in a manner
respectful of the installed base of Python users.
Implementation Details
Implementation of this PEP requires the addition of a formal warning
and deprecation facility that will be described in another proposal.
Scope
These guidelines apply to future versions of Python that introduce
backward-incompatible behavior. Backward incompatible behavior is a
major deviation in Python interpretation from an earlier behavior
described in the standard Python documentation. Removal of a feature
also constitutes a change of behavior.
This PEP does not replace or preclude other compatibility strategies
such as dynamic loading of backwards-compatible parsers. On the other
hand, if execution of “old code” requires a special switch or pragma
then that is indeed a change of behavior from the point of view of the
user and that change should be implemented according to these
guidelines.
In general, common sense must prevail in the implementation of these
guidelines. For instance changing “sys.copyright” does not constitute
a backwards-incompatible change of behavior!
Steps For Introducing Backwards-Incompatible Features
1. Propose backwards-incompatible behavior in a PEP. The PEP must
   include a section on backwards compatibility that describes in
   detail a plan to complete the remainder of these steps.
2. Once the PEP is accepted as a productive direction, implement an
   alternate way to accomplish the task previously provided by the
   feature that is being removed or changed. For instance, if the
   addition operator were scheduled for removal, a new version of
   Python could implement an “add()” built-in function.
3. Formally deprecate the obsolete construct in the Python
   documentation.
4. Add an optional warning mode to the parser that will inform users
   when the deprecated construct is used. In other words, all
   programs that will behave differently in the future must trigger
   warnings in this mode. Compile-time warnings are preferable to
   runtime warnings. The warning messages should steer people from
   the deprecated construct to the alternative construct (a sketch
   of such a warning follows this list).
5. There must be at least a one-year transition period between the
   release of the transitional version of Python and the release of
   the backwards incompatible version. Users will have at least a
   year to test their programs and migrate them from use of the
   deprecated construct to the alternative one.
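As a purely illustrative sketch (not part of this PEP, which predates
the formal warning facility it calls for), the warning described in
step 4 could be emitted at runtime with the warnings module Python
later gained; old_api() and new_api() are hypothetical names::

    import warnings

    def new_api(x, y):
        return x + y

    def old_api(x, y):
        # Hypothetical deprecated construct: steer callers to the alternative.
        warnings.warn(
            "old_api() is deprecated and will be removed in a future "
            "release; use new_api() instead",
            DeprecationWarning,
            stacklevel=2,  # attribute the warning to the caller's line
        )
        return new_api(x, y)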
PEP 6 – Bug Fix Releases
Author:
Aahz <aahz at pythoncraft.com>, Anthony Baxter <anthony at interlink.com.au>
Status:
Superseded
Type:
Process
Created:
15-Mar-2001
Post-History:
15-Mar-2001, 18-Apr-2001, 19-Aug-2004
Table of Contents
Abstract
Motivation
Prohibitions
Not-Quite-Prohibitions
Applicability of Prohibitions
Helping the Bug Fix Releases Happen
Version Numbers
Procedure
Patch Czar History
History
References
Copyright
Note
This PEP is obsolete.
The current release policy is documented in the devguide.
See also PEP 101 for mechanics of the release process.
Abstract
Python has historically had only a single fork of development, with
releases having the combined purpose of adding new features and
delivering bug fixes (these kinds of releases will be referred to as
“major releases”). This PEP describes how to fork off maintenance, or
bug fix, releases of old versions for the primary purpose of fixing
bugs.
This PEP is not, repeat NOT, a guarantee of the existence of bug fix
releases; it only specifies a procedure to be followed if bug fix
releases are desired by enough of the Python community willing to do
the work.
Motivation
With the move to SourceForge, Python development has accelerated.
There is a sentiment among part of the community that there was too
much acceleration, and many people are uncomfortable with upgrading to
new versions to get bug fixes when so many features have been added,
sometimes late in the development cycle.
One solution for this issue is to maintain the previous major release,
providing bug fixes until the next major release. This should make
Python more attractive for enterprise development, where Python may
need to be installed on hundreds or thousands of machines.
Prohibitions
Bug fix releases are required to adhere to the following restrictions:
There must be zero syntax changes. All .pyc and .pyo files must
work (no regeneration needed) with all bugfix releases forked off
from a major release.
There must be zero pickle changes.
There must be no incompatible C API changes. All extensions must
continue to work without recompiling in all bugfix releases in the
same fork as a major release.
Breaking any of these prohibitions requires a BDFL proclamation (and a
prominent warning in the release notes).
Not-Quite-Prohibitions
Where possible, bug fix releases should also:
Have no new features. The purpose of a bug fix release is to fix
bugs, not add the latest and greatest whizzo feature from the HEAD
of the CVS root.
Be a painless upgrade. Users should feel confident that an upgrade
from 2.x.y to 2.x.(y+1) will not break their running systems. This
means that, unless it is necessary to fix a bug, the standard
library should not change behavior, or worse yet, APIs.
Applicability of Prohibitions
The above prohibitions and not-quite-prohibitions apply both for a
final release to a bugfix release (for instance, 2.4 to 2.4.1) and for
one bugfix release to the next in a series (for instance 2.4.1 to
2.4.2).
Following the prohibitions listed in this PEP should help keep the
community happy that a bug fix release is a painless and safe upgrade.
Helping the Bug Fix Releases Happen
Here are a few pointers on helping the bug fix release process along.
Backport bug fixes. If you fix a bug, and it seems appropriate,
port it to the CVS branch for the current bug fix release. If
you’re unwilling or unable to backport it yourself, make a note in
the commit message, with words like ‘Bugfix candidate’ or
‘Backport candidate’.
If you’re not sure, ask. Ask the person managing the current bug
fix releases if they think a particular fix is appropriate.
If there’s a bug you’d particularly like fixed in a bug
fix release, jump up and down and try to get it done. Do not wait
until 48 hours before a bug fix release is due, and then start
asking for bug fixes to be included.
Version Numbers
Starting with Python 2.0, all major releases are required to have a
version number of the form X.Y; bugfix releases will always be of the
form X.Y.Z.
The current major release under development is referred to as release
N; the just-released major version is referred to as N-1.
In CVS, the bug fix releases happen on a branch. For release 2.x, the
branch is named ‘release2x-maint’. For example, the branch for the 2.3
maintenance releases is release23-maint.
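Purely as an illustration (not part of this PEP), the branch naming
convention can be captured in a small helper; the function name is
hypothetical::

    def maint_branch(version):
        """Return the maintenance branch name for a major release 'X.Y'."""
        major, minor = version.split(".")[:2]
        return "release%s%s-maint" % (major, minor)

    assert maint_branch("2.3") == "release23-maint"
    assert maint_branch("2.4") == "release24-maint"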
Procedure
The process for managing bugfix releases is modeled in part on the Tcl
system [1].
The Patch Czar is the counterpart to the BDFL for bugfix releases.
However, the BDFL and designated appointees retain veto power over
individual patches. A Patch Czar might only be looking after a single
branch of development - it’s quite possible that a different person
might be maintaining the 2.3.x and the 2.4.x releases.
As individual patches get contributed to the current trunk of CVS,
each patch committer is requested to consider whether the patch is a
bug fix suitable for inclusion in a bugfix release. If the patch is
considered suitable, the committer can either commit the patch to
the maintenance branch, or else mark it in the commit message.
In addition, anyone from the Python community is free to suggest
patches for inclusion. Patches may be submitted specifically for
bugfix releases; they should follow the guidelines in PEP 3. In
general, though, it’s probably better that a bug in a specific release
also be fixed on the HEAD as well as the branch.
The Patch Czar decides when there are a sufficient number of patches
to warrant a release. The release gets packaged up, including a
Windows installer, and made public. If any new bugs are found, they
must be fixed immediately and a new bugfix release publicized (with an
incremented version number). For the 2.3.x cycle, the Patch Czar
(Anthony) has been trying for a release approximately every six
months, but this should not be considered binding in any way on any
future releases.
Bug fix releases are expected to occur at an interval of roughly six
months. This is only a guideline, however - obviously, if a major bug
is found, a bugfix release may be appropriate sooner. In general, only
the N-1 release will be under active maintenance at any time. That is,
during Python 2.4’s development, Python 2.3 gets bugfix releases. If,
however, someone qualified wishes to continue the work to maintain an
older release, they should be encouraged.
Patch Czar History
Anthony Baxter is the Patch Czar for 2.3.1 through 2.3.4.
Barry Warsaw is the Patch Czar for 2.2.3.
Guido van Rossum is the Patch Czar for 2.2.2.
Michael Hudson is the Patch Czar for 2.2.1.
Anthony Baxter is the Patch Czar for 2.1.2 and 2.1.3.
Thomas Wouters is the Patch Czar for 2.1.1.
Moshe Zadka is the Patch Czar for 2.0.1.
History
This PEP started life as a proposal on comp.lang.python. The original
version suggested a single patch for the N-1 release to be released
concurrently with the N release. The original version also argued for
sticking with a strict bug fix policy.
Following feedback from the BDFL and others, the draft PEP was written
containing an expanded bugfix release cycle that permitted any
previous major release to obtain patches and also relaxed the strict
bug fix requirement (mainly due to the example of PEP 235, which
could be argued as either a bug fix or a feature).
Discussion then mostly moved to python-dev, where BDFL finally issued
a proclamation basing the Python bugfix release process on Tcl’s,
which essentially returned to the original proposal in terms of being
only the N-1 release and only bug fixes, but allowing multiple bugfix
releases until release N is published.
Anthony Baxter then took this PEP and revised it, based on lessons
from the 2.3 release cycle.
References
[1]
http://www.tcl.tk/cgi-bin/tct/tip/28.html
Copyright
This document has been placed in the public domain.
PEP 10 – Voting Guidelines
Author:
Barry Warsaw <barry at python.org>
Status:
Active
Type:
Process
Created:
07-Mar-2002
Post-History:
07-Mar-2002
Table of Contents
Abstract
Rationale
Voting Scores
References
Copyright
Abstract
This PEP outlines the python-dev voting guidelines. These guidelines
serve to provide feedback or gauge the “wind direction” on a
particular proposal, idea, or feature. They don’t have a binding
force.
Rationale
When a new idea, feature, patch, etc. is floated in the Python
community, either through a PEP or on the mailing lists (most likely
on python-dev [1]), it is sometimes helpful to gauge the community’s
general sentiment. Sometimes people just want to register their
opinion of an idea. Sometimes the BDFL wants to take a straw poll.
Whatever the reason, these guidelines have been adopted so as to
provide a common language for developers.
While opinions are (sometimes) useful, they are never binding.
Opinions that are accompanied by rationales are always valued higher
than bare scores (this is especially true with -1 votes).
Voting Scores
The scoring guidelines are loosely derived from the Apache voting
procedure [2], with of course our own spin on things. There are 4
possible vote scores:
+1 I like it
+0 I don’t care, but go ahead
-0 I don’t care, so why bother?
-1 I hate it
You may occasionally see wild flashes of enthusiasm (either for or
against) with vote scores like +2, +1000, or -1000. These aren’t
really valued much beyond the above scores, but it’s nice to see
people get excited about such geeky stuff.
References
[1]
Python Developer’s Guide,
(http://www.python.org/dev/)
[2]
Apache Project Guidelines and Voting Rules
(http://httpd.apache.org/dev/guidelines.html)
Copyright
This document has been placed in the public domain.
PEP 11 – CPython platform support
Author:
Martin von Löwis <martin at v.loewis.de>,
Brett Cannon <brett at python.org>
Status:
Active
Type:
Process
Created:
07-Jul-2002
Post-History:
18-Aug-2007,
14-May-2014,
20-Feb-2015,
10-Mar-2022
Table of Contents
Abstract
Rationale
Support tiers
Tier 1
Tier 2
Tier 3
All other platforms
Notes
Microsoft Windows
Legacy C Locale
Unsupporting platforms
No-longer-supported platforms
Discussions
Copyright
Abstract
This PEP documents how an operating system (platform) becomes
supported in CPython, what platforms are currently supported, and
documents past support.
Rationale
Over time, the CPython source code has collected various pieces of
platform-specific code, which, at some point in time, was
considered necessary to use CPython on a specific platform.
Without access to this platform, it is not possible to determine
whether this code is still needed. As a result, this code may
either break during CPython’s evolution, or it may become
unnecessary as the platforms evolve as well.
Allowing these fragments to grow poses the risk of
unmaintainability: without having experts for a large number of
platforms, it is not possible to determine whether a certain
change to the CPython source code will work on all supported
platforms.
To reduce this risk, this PEP specifies what is required for a
platform to be considered supported by CPython as well as providing a
procedure to remove code for platforms with few or no CPython
users.
This PEP also lists what platforms are supported by the CPython
interpreter. This lets people know what platforms are directly
supported by the CPython development team.
Support tiers
Platform support is broken down into tiers. Each tier comes with
different requirements which lead to different promises being made
about support.
To be promoted to a tier, steering council support is required and is
expected to be driven by team consensus. Demotion to a lower tier
occurs when the requirements of the current tier are no longer met for
a platform for an extended period of time based on the judgment of
the release manager or steering council. For platforms which no longer
meet the requirements of any tier by b1 of a new feature release, an
announcement will be made to warn the community of the pending removal
of support for the platform (e.g. in the b1 announcement). If the
platform is not brought into line for at least one of the tiers by the
first release candidate, it will be listed as unsupported in this PEP.
Tier 1
STATUS
CI failures block releases.
Changes which would break the main branch are not allowed to be merged;
any breakage should be fixed or reverted immediately.
All core developers are responsible for keeping main, and thus these
platforms, working.
Failures on these platforms block a release.
========================  ===============
Target Triple             Notes
========================  ===============
i686-pc-windows-msvc
x86_64-pc-windows-msvc
x86_64-apple-darwin       BSD libc, clang
x86_64-unknown-linux-gnu  glibc, gcc
========================  ===============
Tier 2
STATUS
Must have a reliable buildbot.
At least two core developers are signed up to support the platform.
Changes which break any of these platforms are to be fixed or
reverted within 24 hours.
Failures on these platforms block a release.
=========================  ==================  =======================================
Target Triple              Notes               Contacts
=========================  ==================  =======================================
aarch64-apple-darwin       clang               Ned Deily, Ronald Oussoren, Dong-hee Na
aarch64-unknown-linux-gnu  glibc, gcc          Petr Viktorin, Victor Stinner
aarch64-unknown-linux-gnu  glibc, clang        Victor Stinner, Gregory P. Smith
wasm32-unknown-wasi        WASI SDK, Wasmtime  Brett Cannon, Eric Snow
x86_64-unknown-linux-gnu   glibc, clang        Victor Stinner, Gregory P. Smith
=========================  ==================  =======================================
Tier 3
STATUS
Must have a reliable buildbot.
At least one core developer is signed up to support the platform.
No response SLA to failures.
Failures on these platforms do not block a release.
==============================  ===========================  ================
Target Triple                   Notes                        Contacts
==============================  ===========================  ================
aarch64-pc-windows-msvc                                      Steve Dower
armv7l-unknown-linux-gnueabihf  Raspberry Pi OS, glibc, gcc  Gregory P. Smith
powerpc64le-unknown-linux-gnu   glibc, clang                 Victor Stinner
powerpc64le-unknown-linux-gnu   glibc, gcc                   Victor Stinner
s390x-unknown-linux-gnu         glibc, gcc                   Victor Stinner
x86_64-unknown-freebsd          BSD libc, clang              Victor Stinner
==============================  ===========================  ================
All other platforms
Support for a platform may be partial within the code base, such as
from active development around platform support or accidentally.
Code changes to platforms not listed in the above tiers may be rejected
or removed from the code base without a deprecation process if they
cause a maintenance burden or obstruct general improvements.
Platforms not listed here may be supported by the wider Python
community in some way. If your desired platform is not listed above,
please perform a search online to see if someone is already providing
support in some form.
Notes
Microsoft Windows
Windows versions prior to Windows 10 follow Microsoft’s Fixed Lifecycle Policy,
with a mainstream support phase for 5 years after release,
where the product is generally commercially available,
and an additional 5 year extended support phase,
where paid support is still available and certain bug fixes are released.
Extended Security Updates (ESU)
is a paid program available to high-volume enterprise customers
as a “last resort” option to receive certain security updates after extended support ends.
ESU is considered a distinct phase that follows the expiration of extended support.
Windows 10 and later follow Microsoft’s Modern Lifecycle Policy,
which varies per-product, per-version, per-edition and per-channel.
Generally, feature updates (1709, 22H2) occur every 6-12 months
and are supported for 18-36 months;
Server and IoT editions, and LTSC channel releases are supported for 5-10 years,
and the latest feature release of a major version (Windows 10, Windows 11)
generally receives new updates for at least 10 years following release.
Microsoft’s Windows Lifecycle FAQ
has more specific and up-to-date guidance.
CPython’s Windows support currently follows Microsoft’s lifecycles.
A new feature release X.Y.0 will support all Windows versions
whose extended support phase has not yet expired.
Subsequent bug fix releases will support the same Windows versions
as the original feature release, even if no longer supported by Microsoft.
New versions of Windows released while CPython is in maintenance mode
may be supported at the discretion of the core team and release manager.
As of 2024, our current interpretation of Microsoft’s lifecycles is that
Windows for IoT and embedded systems is out of scope for new CPython releases,
as the intent of those is to avoid feature updates. Windows Server will usually
be the oldest version still receiving free security fixes, and that will
determine the earliest supported client release with equivalent API version
(which will usually be past its end-of-life).
Each feature release is built by a specific version of Microsoft
Visual Studio. That version should have mainstream support when the
release is made. Developers of extension modules will generally need
to use the same Visual Studio release; they are concerned both with
the availability of the versions they need to use, and with keeping
the zoo of versions small. The CPython source tree will keep
unmaintained build files for older Visual Studio releases, for which
patches will be accepted. Such build files will be removed from the
source tree 3 years after the extended support for the compiler has
ended (but continue to remain available in revision control).
Legacy C Locale
Starting with CPython 3.7.0, *nix platforms are expected to provide
at least one of C.UTF-8 (full locale), C.utf8 (full locale) or
UTF-8 (LC_CTYPE-only locale) as an alternative to the legacy C
locale.
Any Unicode-related integration problems that occur only in the legacy C
locale and cannot be reproduced in an appropriately configured non-ASCII
locale will be closed as “won’t fix”.
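As a minimal sketch (not part of this PEP), one way to check whether a
*nix system provides one of the expected UTF-8 locales is to inspect
the output of locale -a; the helper name is hypothetical::

    import subprocess

    def utf8_locale_available():
        # `locale -a` lists the locales installed on the system.
        result = subprocess.run(["locale", "-a"],
                                capture_output=True, text=True)
        names = {line.strip().lower() for line in result.stdout.splitlines()}
        return bool(names & {"c.utf-8", "c.utf8", "utf-8"})

    if __name__ == "__main__":
        print("UTF-8 locale available:", utf8_locale_available())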
Unsupporting platforms
If a platform drops out of tiered support, a note must be posted
in this PEP that the platform is no longer actively supported. This
note must include:
The name of the system,
The first release number that does not support this platform
anymore, and
The first release where the historical support code is actively
removed.
In some cases, it is not possible to identify the specific list of
systems for which some code is used (e.g. when autoconf tests for
absence of some feature which is considered present on all
supported systems). In this case, the name will give the precise
condition (usually a preprocessor symbol) that will become
unsupported.
At the same time, the CPython build must be changed to produce a
warning if somebody tries to install CPython on this platform. On
platforms using autoconf, configure should also be made to emit a warning
about the unsupported platform.
This gives potential users of the platform a chance to step forward
and offer maintenance. We do not treat a platform that loses Tier 3
support any worse than a platform that was never supported.
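The warning described above lives in CPython's build machinery
(autoconf); as a loose, hypothetical sketch in Python rather than
configure, with illustrative platform names only, the check amounts to::

    import sys
    import warnings

    UNSUPPORTED_PREFIXES = ("sunos4", "irix", "riscos")  # illustrative only

    def warn_if_unsupported(platform=sys.platform):
        if platform.startswith(UNSUPPORTED_PREFIXES):
            warnings.warn(
                platform + " is no longer supported by CPython; see PEP 11 "
                "for the support policy and for how to offer maintenance",
                RuntimeWarning,
            )

    warn_if_unsupported()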
No-longer-supported platforms
Name: MS-DOS, MS-Windows 3.x
Unsupported in: Python 2.0
Code removed in: Python 2.1
Name: SunOS 4
Unsupported in: Python 2.3
Code removed in: Python 2.4
Name: DYNIX
Unsupported in: Python 2.3
Code removed in: Python 2.4
Name: dgux
Unsupported in: Python 2.3
Code removed in: Python 2.4
Name: Minix
Unsupported in: Python 2.3
Code removed in: Python 2.4
Name: Irix 4 and --with-sgi-dl
Unsupported in: Python 2.3
Code removed in: Python 2.4
Name: Linux 1
Unsupported in: Python 2.3
Code removed in: Python 2.4
Name: Systems defining __d6_pthread_create (configure.in)
Unsupported in: Python 2.3
Code removed in: Python 2.4
Name: Systems defining PY_PTHREAD_D4, PY_PTHREAD_D6,
or PY_PTHREAD_D7 in thread_pthread.h
Unsupported in: Python 2.3
Code removed in: Python 2.4
Name: Systems using --with-dl-dld
Unsupported in: Python 2.3
Code removed in: Python 2.4
Name: Systems using --without-universal-newlines,
Unsupported in: Python 2.3
Code removed in: Python 2.4
Name: MacOS 9
Unsupported in: Python 2.4
Code removed in: Python 2.4
Name: Systems using --with-wctype-functions
Unsupported in: Python 2.6
Code removed in: Python 2.6
Name: Win9x, WinME, NT4
Unsupported in: Python 2.6 (warning in 2.5 installer)
Code removed in: Python 2.6
Name: AtheOS
Unsupported in: Python 2.6 (with “AtheOS” changed to “Syllable”)
Build broken in: Python 2.7 (edit configure to re-enable)
Code removed in: Python 3.0
Details: http://www.syllable.org/discussion.php?id=2320
Name: BeOS
Unsupported in: Python 2.6 (warning in configure)
Build broken in: Python 2.7 (edit configure to re-enable)
Code removed in: Python 3.0
Name: Systems using Mach C Threads
Unsupported in: Python 3.2
Code removed in: Python 3.3
Name: SunOS lightweight processes (LWP)
Unsupported in: Python 3.2
Code removed in: Python 3.3
Name: Systems using --with-pth (GNU pth threads)
Unsupported in: Python 3.2
Code removed in: Python 3.3
Name: Systems using Irix threads
Unsupported in: Python 3.2
Code removed in: Python 3.3
Name: OSF* systems (issue 8606)
Unsupported in: Python 3.2
Code removed in: Python 3.3
Name: OS/2 (issue 16135)
Unsupported in: Python 3.3
Code removed in: Python 3.4
Name: VMS (issue 16136)
Unsupported in: Python 3.3
Code removed in: Python 3.4
Name: Windows 2000
Unsupported in: Python 3.3
Code removed in: Python 3.4
Name: Windows systems where COMSPEC points to command.com
Unsupported in: Python 3.3
Code removed in: Python 3.4
Name: RISC OS
Unsupported in: Python 3.0 (some code actually removed)
Code removed in: Python 3.4
Name: IRIX
Unsupported in: Python 3.7
Code removed in: Python 3.7
Name: Systems without multithreading support
Unsupported in: Python 3.7
Code removed in: Python 3.7
Name: wasm32-unknown-emscripten
Unsupported in: Python 3.13
Code removed in: Unknown
Discussions
April 2022: Consider adding a Tier 3 to tiered platform support
(Victor Stinner)
March 2022: Proposed tiered platform support
(Brett Cannon)
February 2015: Update to PEP 11 to clarify garnering platform support
(Brett Cannon)
May 2014: Where is our official policy of what platforms we do support?
(Brett Cannon)
August 2007: PEP 11 update - Call for port maintainers to step forward
(Skip Montanaro)
Copyright
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
PEP 12 – Sample reStructuredText PEP Template
Author:
David Goodger <goodger at python.org>,
Barry Warsaw <barry at python.org>,
Brett Cannon <brett at python.org>
Status:
Active
Type:
Process
Created:
05-Aug-2002
Post-History:
30-Aug-2002
Table of Contents
Abstract
Rationale
How to Use This Template
ReStructuredText PEP Formatting Requirements
General
Section Headings
Paragraphs
Inline Markup
Block Quotes
Literal Blocks
Lists
Tables
Hyperlinks
Internal and PEP/RFC Links
Footnotes
Images
Comments
Escaping Mechanism
Canonical Documentation and Intersphinx
Habits to Avoid
Suggested Sections
Resources
Copyright
Note
For those who have written a PEP before, there is a template
(which is included as a file in the PEPs repository).
Abstract
This PEP provides a boilerplate or sample template for creating your
own reStructuredText PEPs. In conjunction with the content guidelines
in PEP 1, this should make it easy for you to conform your own
PEPs to the format outlined below.
Note: if you are reading this PEP via the web, you should first grab
the text (reStructuredText) source of this PEP in order to complete
the steps below. DO NOT USE THE HTML FILE AS YOUR TEMPLATE!
The source for this (or any) PEP can be found in the
PEPs repository,
as well as via a link at the bottom of each PEP.
Rationale
If you intend to submit a PEP, you MUST use this template, in
conjunction with the format guidelines below, to ensure that your PEP
submission won’t get automatically rejected because of form.
ReStructuredText provides PEP authors with useful functionality and
expressivity, while maintaining easy readability in the source text.
The processed HTML form makes the functionality accessible to readers:
live hyperlinks, styled text, tables, images, and automatic tables of
contents, among other advantages.
How to Use This Template
To use this template you must first decide whether your PEP is going
to be an Informational or Standards Track PEP. Most PEPs are
Standards Track because they propose a new feature for the Python
language or standard library. When in doubt, read PEP 1 for details,
or open a tracker issue on the PEPs repo to ask for assistance.
Once you’ve decided which type of PEP yours is going to be, follow the
directions below.
Make a copy of this file (the .rst file, not the HTML!) and
perform the following edits. Name the new file pep-NNNN.rst, using
the next available number (not used by a published or in-PR PEP).
Replace the “PEP: 12” header with “PEP: NNNN”,
matching the file name. Note that the file name should be padded with
zeros (eg pep-0012.rst), but the header should not (PEP: 12).
Change the Title header to the title of your PEP.
Change the Author header to include your name, and optionally your
email address. Be sure to follow the format carefully: your name
must appear first, and it must not be contained in parentheses.
Your email address may appear second (or it can be omitted) and if
it appears, it must appear in angle brackets. It is okay to
obfuscate your email address.
If none of the authors are Python core developers, include a Sponsor
header with the name of the core developer sponsoring your PEP.
Add the direct URL of the PEP’s canonical discussion thread
(on e.g. Python-Dev, Discourse, etc) under the Discussions-To header.
If the thread will be created after the PEP is submitted as an official
draft, it is okay to just list the venue name initially, but remember to
update the PEP with the URL as soon as the PEP is successfully merged
to the PEPs repository and you create the corresponding discussion thread.
See PEP 1 for more details.
Change the Status header to “Draft”.
For Standards Track PEPs, change the Type header to “Standards
Track”.
For Informational PEPs, change the Type header to “Informational”.
For Standards Track PEPs, if your feature depends on the acceptance
of some other currently in-development PEP, add a Requires header
right after the Type header. The value should be the PEP number of
the PEP yours depends on. Don’t add this header if your dependent
feature is described in a Final PEP.
Change the Created header to today’s date. Be sure to follow the
format carefully: it must be in dd-mmm-yyyy format, where the
mmm is the 3 English letter month abbreviation, i.e. one of Jan,
Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec.
For Standards Track PEPs, after the Created header, add a
Python-Version header and set the value to the next planned version
of Python, i.e. the one your new feature will hopefully make its
first appearance in. Do not use an alpha or beta release
designation here. Thus, if the last version of Python was 2.2 alpha
1 and you’re hoping to get your new feature into Python 2.2, set the
header to:

    Python-Version: 2.2
Add a Topic header if the PEP belongs under one shown at the Topic Index.
Most PEPs don’t.
Leave Post-History alone for now; you’ll add dates and corresponding links
to this header each time you post your PEP to the designated discussion forum
(and update the Discussions-To header with said link, as above).
For each thread, use the date (in the dd-mmm-yyyy format) as the
linked text, and insert the URLs inline as anonymous reST hyperlinks,
with commas in between each posting.
If you posted threads for your PEP on August 14, 2001 and September 3, 2001,
the Post-History header would look like, e.g.:
Post-History: `14-Aug-2001 <https://www.example.com/thread_1>`__,
`03-Sept-2001 <https://www.example.com/thread_2>`__
You should add the new dates/links here as soon as you post a
new discussion thread.
Add a Replaces header if your PEP obsoletes an earlier PEP. The
value of this header is the number of the PEP that your new PEP is
replacing. Only add this header if the older PEP is in “final”
form, i.e. is either Accepted, Final, or Rejected. You aren’t
replacing an older open PEP if you’re submitting a competing idea.
Now write your Abstract, Rationale, and other content for your PEP,
replacing all this gobbledygook with your own text. Be sure to
adhere to the format guidelines below, specifically on the
prohibition of tab characters and the indentation requirements.
See “Suggested Sections” below for a template of sections to include.
Update your Footnotes section, listing any footnotes and
non-inline link targets referenced by the text.
Run ./build.py to ensure the PEP is rendered without errors,
and check that the output in build/pep-NNNN.html looks as you intend.
Create a pull request against the PEPs repository.
For reference, here are all of the possible header fields (everything
in brackets should either be replaced or have the field removed if
it has a leading * marking it as optional and it does not apply to
your PEP):
PEP: [NNN]
Title: [...]
Author: [Full Name <email at example.com>]
Sponsor: *[Full Name <email at example.com>]
PEP-Delegate:
Discussions-To: [URL]
Status: Draft
Type: [Standards Track | Informational | Process]
Topic: *[Governance | Packaging | Release | Typing]
Requires: *[NNN]
Created: [DD-MMM-YYYY]
Python-Version: *[M.N]
Post-History: [`DD-MMM-YYYY <URL>`__]
Replaces: *[NNN]
Superseded-By: *[NNN]
Resolution:
ReStructuredText PEP Formatting Requirements
The following is a PEP-specific summary of reStructuredText syntax.
For the sake of simplicity and brevity, much detail is omitted. For
more detail, see Resources below. Literal blocks (in which no
markup processing is done) are used for examples throughout, to
illustrate the plaintext markup.
General
Lines should usually not extend past column 79,
excepting URLs and similar circumstances.
Tab characters must never appear in the document at all.
Section Headings
PEP headings must begin in column zero and the initial letter of each
word must be capitalized as in book titles. Acronyms should be in all
capitals. Section titles must be adorned with an underline, a single
repeated punctuation character, which begins in column zero and must
extend at least as far as the right edge of the title text (4
characters minimum). First-level section titles are underlined with
“=” (equals signs), second-level section titles with “-” (hyphens),
and third-level section titles with “’” (single quotes or
apostrophes). For example:
First-Level Title
=================
Second-Level Title
------------------
Third-Level Title
'''''''''''''''''
If there are more than three levels of sections in your PEP, you may
insert overline/underline-adorned titles for the first and second
levels as follows:
============================
First-Level Title (optional)
============================
-----------------------------
Second-Level Title (optional)
-----------------------------
Third-Level Title
=================
Fourth-Level Title
------------------
Fifth-Level Title
'''''''''''''''''
You shouldn’t have more than five levels of sections in your PEP. If
you do, you should consider rewriting it.
You must use two blank lines between the last line of a section’s body
and the next section heading. If a subsection heading immediately
follows a section heading, a single blank line in-between is
sufficient.
The body of each section is not normally indented, although some
constructs do use indentation, as described below. Blank lines are
used to separate constructs.
Paragraphs
Paragraphs are left-aligned text blocks separated by blank lines.
Paragraphs are not indented unless they are part of an indented
construct (such as a block quote or a list item).
Inline Markup
Portions of text within paragraphs and other text blocks may be
styled. For example:
Text may be marked as *emphasized* (single asterisk markup,
typically shown in italics) or **strongly emphasized** (double
asterisks, typically boldface). ``Inline literals`` (using double
backquotes) are typically rendered in a monospaced typeface. No
further markup recognition is done within the double backquotes,
so they're safe for any kind of code snippets.
Block Quotes
Block quotes consist of indented body elements. For example:
This is a paragraph.

    This is a block quote.

    A block quote may contain many paragraphs.
Block quotes are used to quote extended passages from other sources.
Block quotes may be nested inside other body elements. Use 4 spaces
per indent level.
Literal Blocks
Literal blocks are used for code samples and other preformatted text.
To indicate a literal block, preface the indented text block with
“::” (two colons), or use the .. code-block:: directive.
Indent the text block by 4 spaces; the literal block continues until the end
of the indentation. For example:
This is a typical paragraph. A literal block follows.

::

    for a in [5, 4, 3, 2, 1]: # this is program code, shown as-is
        print(a)
    print("it's...")
“::” is also recognized at the end of any paragraph; if not immediately
preceded by whitespace, one colon will remain visible in the final output:
This is an example::

    Literal block
By default, literal blocks will be syntax-highlighted as Python code.
For specific blocks that contain code or data in other languages/formats,
use the .. code-block:: language directive, substituting the “short name”
of the appropriate Pygments lexer
(or text to disable highlighting) for language. For example:
.. code-block:: rst

    An example of the ``rst`` lexer (i.e. *reStructuredText*).
For PEPs that predominantly contain literal blocks of a specific language,
use the .. highlight:: language directive with the appropriate language
at the top of the PEP body (below the headers and above the Abstract).
All literal blocks will then be treated as that language,
unless specified otherwise in the specific .. code-block. For example:
.. highlight:: c

Abstract
========

Here's some C code::

    printf("Hello, World!\n");
Lists
Bullet list items begin with one of “-”, “*”, or “+” (hyphen,
asterisk, or plus sign), followed by whitespace and the list item
body. List item bodies must be left-aligned and indented relative to
the bullet; the text immediately after the bullet determines the
indentation. For example:
This paragraph is followed by a list.

* This is the first bullet list item. The blank line above the
  first list item is required; blank lines between list items
  (such as below this paragraph) are optional.

* This is the first paragraph in the second item in the list.

  This is the second paragraph in the second item in the list.
  The blank line above this paragraph is required. The left edge
  of this paragraph lines up with the paragraph above, both
  indented relative to the bullet.

  - This is a sublist. The bullet lines up with the left edge of
    the text blocks above. A sublist is a new list so requires a
    blank line above and below.

* This is the third item of the main list.

This paragraph is not part of the list.
Enumerated (numbered) list items are similar, but use an enumerator
instead of a bullet. Enumerators are numbers (1, 2, 3, …), letters
(A, B, C, …; uppercase or lowercase), or Roman numerals (i, ii, iii,
iv, …; uppercase or lowercase), formatted with a period suffix
(“1.”, “2.”), parentheses (“(1)”, “(2)”), or a right-parenthesis
suffix (“1)”, “2)”). For example:
1. As with bullet list items, the left edge of paragraphs must
   align.

2. Each list item may contain multiple paragraphs, sublists, etc.

   This is the second paragraph of the second list item.

   a) Enumerated lists may be nested.
   b) Blank lines may be omitted between list items.
Definition lists are written like this:
what
    Definition lists associate a term with a definition.

how
    The term is a one-line phrase, and the definition is one
    or more paragraphs or body elements, indented relative to
    the term.
Tables
Simple tables are easy and compact:
=====  =====  =======
  A      B    A and B
=====  =====  =======
False  False  False
True   False  False
False  True   False
True   True   True
=====  =====  =======
There must be at least two columns in a table (to differentiate from
section titles). Column spans use underlines of hyphens (“Inputs”
spans the first two columns):
=====  =====  ======
   Inputs     Output
------------  ------
  A      B    A or B
=====  =====  ======
False  False  False
True   False  True
False  True   True
True   True   True
=====  =====  ======
Text in a first-column cell starts a new row. No text in the first
column indicates a continuation line; the rest of the cells may
consist of multiple lines. For example:
=====  =========================
col 1  col 2
=====  =========================
1      Second column of row 1.
2      Second column of row 2.
       Second line of paragraph.
3      - Second column of row 3.

       - Second item in bullet
         list (row 3, column 2).
=====  =========================
Hyperlinks
When referencing an external web page in the body of a PEP, you should
include the title of the page or a suitable description in the text, with
either an inline hyperlink or a separate explicit target with the URL.
Do not include bare URLs in the body text of the PEP, and use HTTPS
links wherever available.
Hyperlink references use backquotes and a trailing underscore to mark
up the reference text; backquotes are optional if the reference text
is a single word. For example, to reference a hyperlink target named
Python website, you would write:
In this paragraph, we refer to the `Python website`_.
If you intend to only reference a link once, and want to define it inline
with the text, insert the link into angle brackets (<>) after the text
you want to link, but before the closing backtick, with a space between the
text and the opening backtick. You should also use a double-underscore after
the closing backtick instead of a single one, which makes it an anonymous
reference to avoid conflicting with other target names. For example:
Visit the `website <https://www.python.org/>`__ for more.
If you want to use one link multiple places with different linked text,
or want to ensure you don’t have to update your link target names when
changing the linked text, include the target name within angle brackets
following the text to link, with an underscore after the target name
but before the closing angle bracket (or the link will not work).
For example:
For further examples, see the `documentation <pydocs_>`_.
An explicit target provides the URL. Put targets in the Footnotes section
at the end of the PEP, or immediately after the paragraph with the reference.
Hyperlink targets begin with two periods and a space (the “explicit
markup start”), followed by a leading underscore, the reference text,
a colon, and the URL.
.. _Python website: https://www.python.org/
.. _pydocs: https://docs.python.org/
The reference text and the target text must match (although the match
is case-insensitive and ignores differences in whitespace). Note that
the underscore trails the reference text but precedes the target text.
If you think of the underscore as a right-pointing arrow, it points
away from the reference and toward the target.
Internal and PEP/RFC Links
The same mechanism as hyperlinks can be used for internal references.
Every unique section title implicitly defines an internal hyperlink target.
We can make a link to the Abstract section like this:
Here is a hyperlink reference to the `Abstract`_ section. The
backquotes are optional since the reference text is a single word;
we can also just write: Abstract_.
To refer to PEPs or RFCs, always use the :pep: and :rfc: roles,
never hardcoded URLs.
For example:
See :pep:`1` for more information on how to write a PEP,
and :pep:`the Hyperlink section of PEP 12 <12#hyperlinks>` for how to link.
This renders as:
See PEP 1 for more information on how to write a PEP,
and the Hyperlink section of PEP 12 for how to link.
PEP numbers in the text are never padded, and there is a space (not a dash)
between “PEP” or “RFC” and the number; the above roles will take care of
that for you.
Footnotes
Footnote references consist of a left square bracket, a label, a
right square bracket, and a trailing underscore.
Instead of a number, use a label of the
form “#word”, where “word” is a mnemonic consisting of alphanumerics
plus internal hyphens, underscores, and periods (no whitespace or
other characters are allowed).
For example:
Refer to The TeXbook [#TeXbook]_ for more information.
which renders as
Refer to The TeXbook [1] for more information.
Whitespace must precede the footnote reference. Leave a space between
the footnote reference and the preceding word.
Use footnotes for additional notes, explanations and caveats, as well as
for references to books and other sources not readily available online.
Native reST hyperlink targets or inline hyperlinks in the text should be
used in preference to footnotes for including URLs to online resources.
Footnotes begin with “.. “ (the explicit
markup start), followed by the footnote marker (no underscores),
followed by the footnote body. For example:
.. [#TeXbook] Donald Knuth's *The TeXbook*, pages 195 and 196.
which renders as
[1]
Donald Knuth’s The TeXbook, pages 195 and 196.
Footnotes and footnote references will be numbered automatically, and
the numbers will always match.
Images
If your PEP contains a diagram or other graphic, you may include it in the
processed output using the image directive:
.. image:: diagram.png
Any browser-friendly graphics format is possible; PNG should be
preferred for graphics, JPEG for photos and GIF for animations.
Currently, SVG must be avoided due to compatibility issues with the
PEP build system.
For accessibility and readers of the source text, you should include
a description of the image and any key information contained within
using the :alt: option to the image directive:
.. image:: dataflow.png
   :alt: Data flows from the input module, through the "black box"
         module, and finally into (and through) the output module.
Comments
A comment is an indented block of arbitrary text immediately
following an explicit markup start: two periods and whitespace. Leave
the “..” on a line by itself to ensure that the comment is not
misinterpreted as another explicit markup construct. Comments are not
visible in the processed document. For example:
..
   This section should be updated in the final PEP.

   Ensure the date is accurate.
Escaping Mechanism
reStructuredText uses backslashes (”\”) to override the special
meaning given to markup characters and get the literal characters
themselves. To get a literal backslash, use an escaped backslash
(”\\”). There are two contexts in which backslashes have no
special meaning: literal blocks and inline literals (see Inline
Markup above). In these contexts, no markup recognition is done,
and a single backslash represents a literal backslash, without having
to double up.
If you find that you need to use a backslash in your text, consider
using inline literals or a literal block instead.
Canonical Documentation and Intersphinx
As PEP 1 describes,
PEPs are considered historical documents once marked Final,
and their canonical documentation/specification should be moved elsewhere.
To indicate this, use the canonical-doc directive
or an appropriate subclass:
canonical-pypa-spec for packaging standards
canonical-typing-spec for typing standards
Furthermore, you can use
Intersphinx references
to other Sphinx sites,
currently the Python documentation
and packaging.python.org,
to easily cross-reference pages, sections and Python/C objects.
This works with both the “canonical” directives and anywhere in your PEP.
Add the directive between the headers and the first section of the PEP
(typically the Abstract)
and pass as an argument an Intersphinx reference of the canonical doc/spec
(or if the target is not on a Sphinx site, a reST hyperlink).
For example,
to create a banner pointing to the sqlite3 docs,
you would write the following:
.. canonical-doc:: :mod:`python:sqlite3`
which would generate the banner:
Important
This PEP is a historical document. The up-to-date, canonical documentation can now be found at sqlite3.
See PEP 1 for how to propose changes.
Or for a PyPA spec,
such as the Core metadata specifications,
you would use:
.. canonical-pypa-spec:: :ref:`packaging:core-metadata`
which renders as:
Attention
This PEP is a historical document. The up-to-date, canonical spec, Core metadata specifications, is maintained on the PyPA specs page.
See the PyPA specification update process for how to propose changes.
The argument accepts arbitrary reST,
so you can include multiple linked docs/specs and name them whatever you like,
and you can also include directive content that will be inserted into the text.
The following advanced example:
.. canonical-doc:: the :ref:`python:sqlite3-connection-objects` and :exc:`python:~sqlite3.DataError` docs

   Also, see the :ref:`Data Persistence docs <persistence>` for other examples.
would render as:
Important
This PEP is a historical document. The up-to-date, canonical documentation can now be found at the Connection objects and sqlite3.DataError docs.
Also, see the Data Persistence docs for other examples.
See PEP 1 for how to propose changes.
Habits to Avoid
Many programmers who are familiar with TeX often write quotation marks
like this:
`single-quoted' or ``double-quoted''
Backquotes are significant in reStructuredText, so this practice
should be avoided. For ordinary text, use ordinary ‘single-quotes’ or
“double-quotes”. For inline literal text (see Inline Markup
above), use double-backquotes:
``literal text: in here, anything goes!``
Suggested Sections
Various sections are found to be common across PEPs and are outlined in
PEP 1. Those sections are provided here for convenience.
PEP: <REQUIRED: pep number>
Title: <REQUIRED: pep title>
Author: <REQUIRED: list of authors' real names and optionally, email addrs>
Sponsor: <real name of sponsor>
PEP-Delegate: <PEP delegate's real name>
Discussions-To: <REQUIRED: URL of current canonical discussion thread>
Status: <REQUIRED: Draft | Active | Accepted | Provisional | Deferred | Rejected | Withdrawn | Final | Superseded>
Type: <REQUIRED: Standards Track | Informational | Process>
Topic: <Governance | Packaging | Release | Typing>
Requires: <pep numbers>
Created: <date created on, in dd-mmm-yyyy format>
Python-Version: <version number>
Post-History: <REQUIRED: dates, in dd-mmm-yyyy format, and corresponding links to PEP discussion threads>
Replaces: <pep number>
Superseded-By: <pep number>
Resolution: <url>
Abstract
========
[A short (~200 word) description of the technical issue being addressed.]
Motivation
==========
[Clearly explain why the existing language specification is inadequate to address the problem that the PEP solves.]
Rationale
=========
[Describe why particular design decisions were made.]
Specification
=============
[Describe the syntax and semantics of any new language feature.]
Backwards Compatibility
=======================
[Describe potential impact and severity on pre-existing code.]
Security Implications
=====================
[How could a malicious user take advantage of this new feature?]
How to Teach This
=================
[How to teach users, new and experienced, how to apply the PEP to their work.]
Reference Implementation
========================
[Link to any existing implementation and details about its state, e.g. proof-of-concept.]
Rejected Ideas
==============
[Why certain ideas that were brought while discussing this PEP were not ultimately pursued.]
Open Issues
===========
[Any points that are still being decided/discussed.]
Footnotes
=========
[A collection of footnotes cited in the PEP, and a place to list non-inline hyperlink targets.]
Copyright
=========
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
Resources
Many other constructs and variations are possible,
both those supported by basic Docutils
and the extensions added by Sphinx.
A number of resources are available to learn more about them:
Sphinx ReStructuredText Primer,
a gentle but fairly detailed introduction.
reStructuredText Markup Specification,
the authoritative, comprehensive documentation of the basic reST syntax,
directives, roles and more.
Sphinx Roles
and Sphinx Directives,
the extended constructs added by the Sphinx documentation system used to
render the PEPs to HTML.
If you have questions or require assistance with writing a PEP that the above
resources don’t address, ping @python/pep-editors on GitHub, open an
issue on the PEPs repository
or reach out to a PEP editor directly.
Copyright
This document is placed in the public domain or under the
CC0-1.0-Universal license, whichever is more permissive.
PEP 13 – Python Language Governance
Author:
The Python core team and community
Status:
Active
Type:
Process
Topic:
Governance
Created:
16-Dec-2018
Table of Contents
Abstract
Current steering council
Specification
The steering council
Composition
Mandate
Powers
Electing the council
Term
Vacancies
Conflicts of interest
Ejecting core team members
Vote of no confidence
The core team
Role
Prerogatives
Membership
Changing this document
History
Creation of this document
History of council elections
History of amendments
Acknowledgements
Copyright
Abstract
This PEP defines the formal governance process for Python, and records
how this has changed over time. Currently, governance is based around
a steering council. The council has broad authority, which they seek
to exercise as rarely as possible.
Current steering council
The 2024 term steering council consists of:
Barry Warsaw
Emily Morehouse
Gregory P. Smith
Pablo Galindo Salgado
Thomas Wouters
Per the results of the vote tracked in PEP 8105.
The core team consists of those listed in the private
https://github.com/python/voters/ repository which is publicly
shared via https://devguide.python.org/developers/.
Specification
The steering council
Composition
The steering council is a 5-person committee.
Mandate
The steering council shall work to:
Maintain the quality and stability of the Python language and
CPython interpreter,
Make contributing as accessible, inclusive, and sustainable as
possible,
Formalize and maintain the relationship between the core team and
the PSF,
Establish appropriate decision-making processes for PEPs,
Seek consensus among contributors and the core team before acting in
a formal capacity,
Act as a “court of final appeal” for decisions where all other
methods have failed.
Powers
The council has broad authority to make decisions about the project.
For example, they can:
Accept or reject PEPs
Enforce or update the project’s code of conduct
Work with the PSF to manage any project assets
Delegate parts of their authority to other subcommittees or
processes
However, they cannot modify this PEP, or affect the membership of the
core team, except via the mechanisms specified in this PEP.
The council should look for ways to use these powers as little as
possible. Instead of voting, it’s better to seek consensus. Instead of
ruling on individual PEPs, it’s better to define a standard process
for PEP decision making (for example, by accepting one of the other
801x series of PEPs). It’s better to establish a Code of Conduct
committee than to rule on individual cases. And so on.
To use its powers, the council votes. Every council member must either
vote or explicitly abstain. Members with conflicts of interest on a
particular vote must abstain. Passing requires a strict majority of
non-abstaining council members.
Whenever possible, the council’s deliberations and votes shall be held
in public.
Electing the council
A council election consists of two phases:
Phase 1: Candidates advertise their interest in serving. Candidates
must be nominated by a core team member. Self-nominations are
allowed.
Phase 2: Each core team member can vote for zero or more of the
candidates. Voting is performed anonymously. Candidates are ranked
by the total number of votes they receive. If a tie occurs, it may
be resolved by mutual agreement among the candidates, or else the
winner will be chosen at random.
Each phase lasts one to two weeks, at the outgoing council’s discretion.
For the initial election, both phases will last two weeks.
The election process is managed by a returns officer nominated by the
outgoing steering council. For the initial election, the returns
officer will be nominated by the PSF Executive Director.
The council should ideally reflect the diversity of Python
contributors and users, and core team members are encouraged to vote
accordingly.
Term
A new council is elected after each feature release. Each council’s
term runs from when their election results are finalized until the
next council’s term starts. There are no term limits.
Vacancies
Council members may resign their position at any time.
Whenever there is a vacancy during the regular council term, the
council may vote to appoint a replacement to serve out the rest of the
term.
If a council member drops out of touch and cannot be contacted for a
month or longer, then the rest of the council may vote to replace
them.
Conflicts of interest
While we trust council members to act in the best interests of Python
rather than themselves or their employers, the mere appearance of any
one company dominating Python development could itself be harmful and
erode trust. In order to avoid any appearance of conflict of interest,
at most 2 members of the council can work for any single employer.
In a council election, if 3 of the top 5 vote-getters work for the
same employer, then whichever of them ranked lowest is disqualified
and the 6th-ranking candidate moves up into 5th place; this is
repeated until a valid council is formed.
During a council term, if changing circumstances cause this rule to be
broken (for instance, due to a council member changing employment),
then one or more council members must resign to remedy the issue, and
the resulting vacancies can then be filled as normal.
Ejecting core team members
In exceptional circumstances, it may be necessary to remove someone
from the core team against their will. (For example: egregious and
ongoing code of conduct violations.) This can be accomplished by a
steering council vote, but unlike other steering council votes, this
requires at least a two-thirds majority. With 5 members voting, this
means that a 3:2 vote is insufficient; 4:1 in favor is the minimum
required for such a vote to succeed. In addition, this is the one
power of the steering council which cannot be delegated, and this
power cannot be used while a vote of no confidence is in process.
If the ejected core team member is also on the steering council, then
they are removed from the steering council as well.
Vote of no confidence
In exceptional circumstances, the core team may remove a sitting
council member, or the entire council, via a vote of no confidence.
A no-confidence vote is triggered when a core team member calls for
one publicly on an appropriate project communication channel, and
another core team member seconds the proposal.
The vote lasts for two weeks. Core team members vote for or against.
If at least two thirds of voters express a lack of confidence, then
the vote succeeds.
There are two forms of no-confidence votes: those targeting a single
member, and those targeting the council as a whole. The initial call
for a no-confidence vote must specify which type is intended. If a
single-member vote succeeds, then that member is removed from the
council and the resulting vacancy can be handled in the usual way. If
a whole-council vote succeeds, the council is dissolved and a new
council election is triggered immediately.
The core team
Role
The core team is the group of trusted volunteers who manage Python.
They assume many roles required to achieve the project’s goals,
especially those that require a high level of trust. They make the
decisions that shape the future of the project.
Core team members are expected to act as role models for the community
and custodians of the project, on behalf of the community and all
those who rely on Python.
They will intervene, where necessary, in online discussions or at
official Python events on the rare occasions that a situation arises
that requires intervention.
They have authority over the Python Project infrastructure, including
the Python Project website itself, the Python GitHub organization and
repositories, the bug tracker, the mailing lists, IRC channels, etc.
Prerogatives
Core team members may participate in formal votes, typically to nominate new
team members and to elect the steering council.
Membership
Python core team members demonstrate:
a good grasp of the philosophy of the Python Project
a solid track record of being constructive and helpful
significant contributions to the project’s goals, in any form
willingness to dedicate some time to improving Python
As the project matures, contributions go beyond code. Here’s an
incomplete list of areas where contributions may be considered for
joining the core team, in no particular order:
Working on community management and outreach
Providing support on the mailing lists and on IRC
Triaging tickets
Writing patches (code, docs, or tests)
Reviewing patches (code, docs, or tests)
Participating in design decisions
Providing expertise in a particular domain (security, i18n, etc.)
Managing the continuous integration infrastructure
Managing the servers (website, tracker, documentation, etc.)
Maintaining related projects (alternative interpreters, core
infrastructure like packaging, etc.)
Creating visual designs
Core team membership acknowledges sustained and valuable efforts that
align well with the philosophy and the goals of the Python project.
It is granted by receiving at least two-thirds positive votes in a
core team vote that is open for one week and is not vetoed by the
steering council.
Core team members are always looking for promising contributors,
teaching them how the project is managed, and submitting their names
to the core team’s vote when they’re ready.
There’s no time limit on core team membership. However, in order to
provide the general public with a reasonable idea of how many people
maintain Python, core team members who have stopped contributing are
encouraged to declare themselves as “inactive”. Those who haven’t made
any non-trivial contribution in two years may be asked to move
themselves to this category, and moved there if they don’t respond. To
record and honor their contributions, inactive team members will
continue to be listed alongside active core team members; and, if they
later resume contributing, they can switch back to active status at
will. While someone is in inactive status, though, they lose their
active privileges like voting or nominating for the steering council,
and commit access.
The initial active core team members will consist of everyone
currently listed in the “Python core” team on GitHub (access
granted for core members only), and the
initial inactive members will consist of everyone else who has been a
committer in the past.
Changing this document
Changes to this document require at least a two-thirds majority of
votes cast in a core team vote which should be open for two weeks.
History
Creation of this document
The Python project was started by Guido van Rossum, who served as its
Benevolent Dictator for Life (BDFL) from inception until July 2018,
when he stepped down.
After discussion, a number of proposals were put forward for a new
governance model, and the core devs voted to choose between them. The
overall process is described in PEP 8000 and PEP 8001, a review of
other projects was performed in PEP 8002, and the proposals themselves
were written up as the 801x series of PEPs. Eventually the proposal in
PEP 8016 was selected
as the new governance model, and was used to create the initial
version of this PEP. The 8000-series PEPs are preserved for historical
reference (and in particular, PEP 8016 contains additional rationale
and links to contemporary discussions), but this PEP is now the
official reference, and will evolve following the rules described
herein.
History of council elections
January 2019: PEP 8100
December 2019: PEP 8101
December 2020: PEP 8102
December 2021: PEP 8103
December 2022: PEP 8104
December 2023: PEP 8105
History of amendments
2019-04-17: Added the vote length for core devs and changes to this document.
Acknowledgements
This PEP began as PEP 8016, which was written by Nathaniel J. Smith
and Donald Stufft, based on a Django governance document written by
Aymeric Augustin, and incorporated feedback and assistance from
numerous others.
Copyright
This document has been placed in the public domain.
| Active | PEP 13 – Python Language Governance | Process | This PEP defines the formal governance process for Python, and records
how this has changed over time. Currently, governance is based around
a steering council. The council has broad authority, which they seek
to exercise as rarely as possible. |
PEP 20 – The Zen of Python
Author:
Tim Peters <tim.peters at gmail.com>
Status:
Active
Type:
Informational
Created:
19-Aug-2004
Post-History:
22-Aug-2004
Table of Contents
Abstract
The Zen of Python
Easter Egg
References
Copyright
Abstract
Long time Pythoneer Tim Peters succinctly channels the BDFL’s guiding
principles for Python’s design into 20 aphorisms, only 19 of which
have been written down.
The Zen of Python
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
Easter Egg
>>> import this
References
Originally posted to comp.lang.python/[email protected] under a
thread called “The Way of Python”
Copyright
This document has been placed in the public domain.
| Active | PEP 20 – The Zen of Python | Informational | Long time Pythoneer Tim Peters succinctly channels the BDFL’s guiding
principles for Python’s design into 20 aphorisms, only 19 of which
have been written down. |
PEP 101 – Doing Python Releases 101
Author:
Barry Warsaw <barry at python.org>, Guido van Rossum <guido at python.org>
Status:
Active
Type:
Informational
Created:
22-Aug-2001
Post-History:
Replaces:
102
Table of Contents
Abstract
Things You’ll Need
Types of Releases
How To Make A Release
What Next?
Moving to End-of-life
Windows Notes
Copyright
Abstract
Making a Python release is a thrilling and crazy process. You’ve heard
the expression “herding cats”? Imagine trying to also saddle those
purring little creatures up, and ride them into town, with some of their
buddies firmly attached to your bare back, anchored by newly sharpened
claws. At least they’re cute, you remind yourself.
Actually, no, that’s a slight exaggeration 😉 The Python release
process has steadily improved over the years and now, with the help of our
amazing community, is really not too difficult. This PEP attempts to
collect, in one place, all the steps needed to make a Python release.
Most of the steps are now automated or guided by automation, so manually
following this list is no longer necessary.
Things You’ll Need
As a release manager there are a lot of resources you’ll need to access.
Here’s a hopefully-complete list.
A GPG key.
Python releases are digitally signed with GPG; you’ll need a key,
which hopefully will be on the “web of trust” with at least one of
the other release managers.
A bunch of software:
A checkout of the python/release-tools repo.
It contains a requirements.txt file that you need to install
dependencies from first. Afterwards, you can fire up scripts in the
repo, covered later in this PEP.
blurb, the
Misc/NEWS
management tool. You can pip install it.
A fairly complete installation of a recent TeX distribution,
such as texlive. You need that for building the PDF docs.
Access to servers where you will upload files:
downloads.nyc1.psf.io, the server that hosts download files; and
docs.nyc1.psf.io, the server that hosts the documentation.
Administrator access to https://github.com/python/cpython.
An administrator account on www.python.org, including an “API key”.
Write access to the PEP repository.
If you’re reading this, you probably already have this–the first
task of any release manager is to draft the release schedule. But
in case you just signed up… sucker! I mean, uh, congratulations!
Posting access to http://blog.python.org, a Blogger-hosted weblog.
The RSS feed from this blog is used for the ‘Python News’ section
on www.python.org.
A subscription to the super secret release manager mailing list, which may
or may not be called python-cabal. Bug Barry about this.
A @python.org email address that you will use to sign your releases
with. Ask postmaster@ for an address; you can either get a full
account, or a redirecting alias + SMTP credentials to send email from
this address that looks legit to major email providers.
Types of Releases
There are several types of releases you will need to make. These include:
alpha
begin beta, also known as beta 1, also known as new branch
beta 2+
release candidate 1
release candidate 2+
final
new branch
begin bugfix mode
begin security-only mode
end-of-life
Some of these release types actually involve more than
one release branch. In particular, a new branch is that point in the
release cycle when a new feature release cycle begins. Under the current
organization of the cpython git repository, the main branch is always
the target for new features. At some point in the release cycle of the
next feature release, a new branch release is made which creates a
new separate branch for stabilization and later maintenance of the
current in-progress feature release (3.n.0) and the main branch is modified
to build a new version (which will eventually be released as 3.n+1.0).
While the new branch release step could occur at one of several points
in the release cycle, current practice is for it to occur at feature code
cutoff for the release which is scheduled for the first beta release.
In the descriptions that follow, steps specific to release types are
labeled accordingly, for now, new branch and final.
How To Make A Release
Here are the steps taken to make a Python release. Some steps are more
fuzzy than others because there’s little that can be automated (e.g.
writing the NEWS entries). Where a step is usually performed by An
Expert, the role of that expert is given. Otherwise, assume the step is
done by the Release Manager (RM), the designated person performing the
release. The roles and their current experts are:
RM = Release Manager
Thomas Wouters <[email protected]> (NL)
Pablo Galindo Salgado <[email protected]> (UK)
Łukasz Langa <[email protected]> (PL)
WE = Windows - Steve Dower <[email protected]>
ME = Mac - Ned Deily <[email protected]> (US)
DE = Docs - Julien Palard <[email protected]> (Central Europe)
Note
It is highly recommended that the RM contact the Experts the day
before the release. Because the world is round and everyone lives
in different timezones, the RM must ensure that the release tag is
created in enough time for the Experts to cut binary releases.
You should not make the release public (by updating the website and
sending announcements) before all experts have updated their bits.
In rare cases where the expert for Windows or Mac is MIA, you may add
a message “(Platform) binaries will be provided shortly” and proceed.
As much as possible, the release steps are automated and guided by the
release script, which is available in a separate repository:
https://github.com/python/release-tools
We use the following conventions in the examples below. Where a release
number is given, it is of the form 3.X.YaN, e.g. 3.13.0a3 for Python 3.13.0
alpha 3, where “a” == alpha, “b” == beta, “rc” == release candidate.
Release tags are named v3.X.YaN. The branch name for minor release
maintenance branches is 3.X.
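As a concrete illustration of these conventions (the version numbers below are
purely hypothetical), the 3.13.0 cycle would produce tags such as:
$ git tag --list 'v3.13.0*'
v3.13.0a1
v3.13.0b1
v3.13.0rc1
v3.13.0
with ongoing maintenance work happening on the 3.13 branch.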
The release script helps by performing several automatic editing steps,
and guides you through the remaining manual editing steps.
Log into Discord and join the Python Core Devs server. Ask Thomas
or Łukasz for an invite.
You probably need to coordinate with other people around the world.
This communication channel is where we’ve arranged to meet.
Check to see if there are any showstopper bugs.
Go to https://github.com/python/cpython/issues and look for any open
bugs that can block this release. You’re looking at two relevant labels:
release-blocker: Stops the release dead in its tracks. You may not
make any release with any open release blocker bugs.
deferred-blocker: Doesn’t block this release, but it will block a
future release. You may not make a final or
candidate release with any open deferred blocker
bugs.
Review the release blockers and either resolve them, bump them down to
deferred, or stop the release and ask for community assistance. If
you’re making a final or candidate release, do the same with any open
deferred.
Check the stable buildbots.
Go to https://buildbot.python.org/all/#/release_status
Look at the buildbots for the release
you’re making. Ignore any that are offline (or inform the community so
they can be restarted). If what remains are (mostly) green buildbots,
you’re good to go. If you have non-offline red buildbots, you may want
to hold up the release until they are fixed. Review the problems and
use your judgement, taking into account whether you are making an alpha,
beta, or final release.
Make a release clone.
On a fork of the cpython repository on GitHub, create a release branch
within it (called the “release clone” from now on). You can use the same
GitHub fork you use for cpython development. Using the standard setup
recommended in the Python Developer’s Guide, your fork would be referred
to as origin and the standard cpython repo as upstream. You will
use the branch on your fork to do the release engineering work, including
tagging the release, and you will use it to share with the other experts
for making the binaries.
For a final or release candidate 2+ release, if you are going
to cherry-pick a subset of changes for the next rc or final from all those
merged since the last rc, you should create a release
engineering branch starting from the most recent release candidate tag,
i.e. v3.8.0rc1. You will then cherry-pick changes from the standard
release branch as necessary into the release engineering branch and
then proceed as usual. If you are going to take all of the changes
since the previous rc, you can proceed as normal.
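A minimal sketch of that cherry-picking workflow, assuming your fork is
origin, the main repo is upstream, v3.8.0rc1 is the most recent tag, and the
branch name is purely illustrative:
$ git fetch upstream --tags
$ git checkout -b releng-v3.8.0rc2 v3.8.0rc1
$ git cherry-pick <commit-sha>        # repeat for each change to include
$ git push origin releng-v3.8.0rc2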
Make sure the current branch of your release clone is the branch you
want to release from. (git status)
Run blurb release <version> specifying the version number
(e.g. blurb release 3.4.7rc1). This merges all the recent news
blurbs into a single file marked with this release’s version number.
Regenerate Lib/pydoc-topics.py.
While still in the Doc directory, run make pydoc-topics. Then copy
build/pydoc-topics/topics.py to ../Lib/pydoc_data/topics.py.
Commit your changes to pydoc_topics.py
(and any fixes you made in the docs).
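Putting those steps together, the sequence looks roughly like this, run from
the top of your release clone (the commit message is only an example):
$ cd Doc
$ make pydoc-topics
$ cp build/pydoc-topics/topics.py ../Lib/pydoc_data/topics.py
$ git commit -m 'Regenerate pydoc topics' ../Lib/pydoc_data/topics.py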
Consider running autoconf using the currently accepted standard version
in case configure or other autoconf-generated files were last
committed with a newer or older version and may contain spurious or
harmful differences. Currently, autoconf 2.71 is our de facto standard.
If there are differences, commit them.
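For example, one way to check and regenerate (commit message illustrative):
$ autoconf --version     # should report 2.71
$ autoconf
$ git diff               # review regenerated files for spurious changes
$ git commit -am 'Regenerate configure with autoconf 2.71'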
Make sure the SOURCE_URI in Doc/tools/extensions/pyspecific.py
points to the right branch in the git repository (main or 3.X).
For a new branch release, change the branch in the file from main
to the new release branch you are about to create (3.X).
Bump version numbers via the release script:
$ .../release-tools/release.py --bump 3.X.YaN
Reminder: X, Y, and N should be integers.
a should be one of “a”, “b”, or “rc” (e.g. “3.4.3rc1”).
For final releases omit the aN (“3.4.3”). For the first
release of a new version Y should be 0 (“3.6.0”).
This automates updating various release numbers, but you will have to
modify a few files manually. If your $EDITOR environment variable is
set up correctly, release.py will pop up editor windows with the files
you need to edit.
Review the blurb-generated Misc/NEWS file and edit as necessary.
Make sure all changes have been committed. (release.py --bump
doesn’t check in its changes for you.)
Check the years on the copyright notice. If the last release
was some time last year, add the current year to the copyright
notice in several places:
README
LICENSE (make sure to change on trunk and the branch)
Python/getcopyright.c
Doc/copyright.rst
Doc/license.rst
PC/python_ver_rc.h sets up the DLL version resource for Windows
(displayed when you right-click on the DLL and select
Properties). This isn’t a C include file, it’s a Windows
“resource file” include file.
For a final major release, edit the first paragraph of
Doc/whatsnew/3.X.rst to include the actual release date; e.g. “Python
2.5 was released on August 1, 2003.” There’s no need to edit this for
alpha or beta releases.
Do a “git status” in this directory.
You should not see any files. I.e. you better not have any uncommitted
changes in your working directory.
Tag the release for 3.X.YaN:
$ .../release-tools/release.py --tag 3.X.YaN
This executes a git tag command with the -s option so that the
release tag in the repo is signed with your gpg key. When prompted
choose the private key you use for signing release tarballs etc.
For begin security-only mode and end-of-life releases, review the
two files and update the versions accordingly in all active branches.
Time to build the source tarball. Use the release script to create
the source gzip and xz tarballs,
documentation tar and zip files, and gpg signature files:
$ .../release-tools/release.py --export 3.X.YaN
This can take a while for final releases, and it will leave all the
tarballs and signatures in a subdirectory called 3.X.YaN/src, and the
built docs in 3.X.YaN/docs (for final releases).
Note that the script will sign your release with Sigstore. Please use
your @python.org email address for this. See here for more information:
https://www.python.org/download/sigstore/.
Now you want to perform the very important step of checking the
tarball you just created, to make sure a completely clean,
virgin build passes the regression test. Here are the best
steps to take:
$ cd /tmp
$ tar xvf /path/to/your/release/clone/<version>//Python-3.2rc2.tgz
$ cd Python-3.2rc2
$ ls
(Do things look reasonable?)
$ ls Lib
(Are there stray .pyc files?)
$ ./configure
(Loads of configure output)
$ make test
(Do all the expected tests pass?)
If you’re feeling lucky and have some time to kill, or if you are making
a release candidate or final release, run the full test suite:
$ make testall
If the tests pass, then you can feel good that the tarball is
fine. If some of the tests fail, or anything else about the
freshly unpacked directory looks weird, you better stop now and
figure out what the problem is.
Push your commits to the remote release branch in your GitHub fork:
# Do a dry run first.
$ git push --dry-run --tags origin
# Make sure you are pushing to your GitHub fork, *not* to the main
# python/cpython repo!
$ git push --tags origin
Notify the experts that they can start building binaries.
Warning
STOP: at this point you must receive the “green light” from other experts
in order to create the release. There are things you can do while you wait
though, so keep reading until you hit the next STOP.
The WE generates and publishes the Windows files using the Azure
Pipelines build scripts in .azure-pipelines/windows-release/,
currently set up at https://dev.azure.com/Python/cpython/_build?definitionId=21.
The build process runs in multiple stages, with each stage’s output being
available as a downloadable artifact. The stages are:
Compile all variants of binaries (32-bit, 64-bit, debug/release),
including running profile-guided optimization.
Compile the HTML Help file containing the Python documentation
Codesign all the binaries with the PSF’s certificate
Create packages for python.org, nuget.org, the embeddable distro and
the Windows Store
Perform basic verification of the installers
Upload packages to python.org and nuget.org, purge download caches and
run a test download.
After the uploads are complete, the WE copies the generated hashes from
the build logs and emails them to the RM. The Windows Store packages are
uploaded manually to https://partner.microsoft.com/dashboard/home by the
WE.
The ME builds Mac installer packages and uploads them to
downloads.nyc1.psf.io together with gpg signature files.
scp or rsync all the files built by release.py --export
to your home directory on downloads.nyc1.psf.io.
While you’re waiting for the files to finish uploading, you can continue
on with the remaining tasks. You can also ask folks on #python-dev
and/or python-committers to download the files as they finish uploading
so that they can test them on their platforms as well.
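A rough sketch of the upload, assuming release.py --export left everything
under a 3.X.YaN/ directory and that your account name on the server matches
your local one (adjust paths as needed):
$ rsync -av 3.X.YaN/ downloads.nyc1.psf.io:3.X.YaN/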
Now you need to go to downloads.nyc1.psf.io and move all the files in place
over there. Our policy is that every Python version gets its own
directory, but each directory contains all releases of that version.
On downloads.nyc1.psf.io, cd /srv/www.python.org/ftp/python/3.X.Y
creating it if necessary. Make sure it is owned by group ‘downloads’
and group-writable.
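A minimal sketch of that setup on the server (assuming your account is
allowed to create the directory):
$ mkdir -p /srv/www.python.org/ftp/python/3.X.Y
$ chgrp downloads /srv/www.python.org/ftp/python/3.X.Y
$ chmod g+w /srv/www.python.org/ftp/python/3.X.Y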
Move the release .tgz, and .tar.xz files into place, as well as the
.asc GPG signature files. The Win/Mac binaries are usually put there
by the experts themselves.
Make sure they are world readable. They should also be group
writable, and group-owned by downloads.
Use gpg --verify to make sure they got uploaded intact.
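For example, assuming the detached signatures sit next to the tarballs:
$ gpg --verify Python-3.X.Y.tgz.asc Python-3.X.Y.tgz
$ gpg --verify Python-3.X.Y.tar.xz.asc Python-3.X.Y.tar.xz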
If this is a final or rc release: Move the doc zips and tarballs to
/srv/www.python.org/ftp/python/doc/3.X.Y[rcA], creating the directory
if necessary, and adapt the “current” symlink in .../doc to point to
that directory. Note though that if you’re releasing a maintenance
release for an older version, don’t change the current link.
If this is a final or rc release (even a maintenance release), also
unpack the HTML docs to /srv/docs.python.org/release/3.X.Y[rcA] on
docs.nyc1.psf.io. Make sure the files are in group docs and are
group-writeable.
Let the DE check if the docs are built and work all right.
Note both the documentation and downloads are behind a caching CDN. If
you change archives after downloading them through the website, you’ll
need to purge the stale data in the CDN like this:
$ curl -X PURGE https://www.python.org/ftp/python/3.12.0/Python-3.12.0.tar.xz
You should always purge the cache of the directory listing as people
use that to browse the release files:
$ curl -X PURGE https://www.python.org/ftp/python/3.12.0/
For the extra paranoid, do a completely clean test of the release.
This includes downloading the tarball from www.python.org.
Make sure the md5 checksums match. Then unpack the tarball,
and do a clean make test:
$ make distclean
$ ./configure
$ make test
To ensure that the regression test suite passes. If not, you
screwed up somewhere!
Warning
STOP and confirm:
Have you gotten the green light from the WE?
Have you gotten the green light from the ME?
Have you gotten the green light from the DE?
If green, it’s time to merge the release engineering branch back into
the main repo.
In order to push your changes to GitHub, you’ll have to temporarily
disable branch protection for administrators. Go to the
Settings | Branches page:
https://github.com/python/cpython/settings/branches/
“Edit” the settings for the branch you’re releasing on.
This will load the settings page for that branch.
Uncheck the “Include administrators” box and press the
“Save changes” button at the bottom.
Merge your release clone into the main development repo:
# Pristine copy of the upstream repo branch
$ git clone [email protected]:python/cpython.git merge
$ cd merge
# Checkout the correct branch:
# 1. For feature pre-releases up to and including a
# **new branch** release, i.e. alphas and first beta
# do a checkout of the main branch
$ git checkout main
# 2. Else, for all other releases, checkout the
# appropriate release branch.
$ git checkout 3.X
# Fetch the newly created and signed tag from your clone repo
$ git fetch --tags [email protected]:your-github-id/cpython.git v3.X.YaN
# Merge the temporary release engineering branch back into
$ git merge --no-squash v3.X.YaN
$ git commit -m 'Merge release engineering branch'
If this is a new branch release, i.e. first beta,
now create the new release branch:
$ git checkout -b 3.X
Do any steps needed to setup the new release branch, including:
In README.rst, change all references from main to
the new branch, in particular, GitHub repo URLs.
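A quick way to find the spots that still need editing (the pattern is only a
starting point; review each hit by hand):
$ grep -n 'main' README.rst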
For all releases, do the guided post-release steps with the
release script:
$ .../release-tools/release.py --done 3.X.YaN
For a final or release candidate 2+ release, you may need to
do some post-merge cleanup. Check the top-level README.rst
and include/patchlevel.h files to ensure they now reflect
the desired post-release values for on-going development.
The patchlevel should be the release tag with a +.
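For instance, after a 3.X.Yrc1 release the relevant line in
Include/patchlevel.h should read something like (version illustrative):
#define PY_VERSION              "3.X.Yrc1+"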
Also, if you cherry-picked changes from the standard release
branch into the release engineering branch for this release,
you will now need to manually remove each blurb entry from
the Misc/NEWS.d/next directory that was cherry-picked
into the release you are working on since that blurb entry
is now captured in the merged x.y.z.rst file for the new
release. Otherwise, the blurb entry will appear twice in
the changelog.html file, once under Python next and again
under x.y.z.
Review and commit these changes:
$ git commit -m 'Post release updates'
If this is a new branch release (e.g. the first beta),
update the main branch to start development for the
following feature release. When finished, the main
branch will now build Python X.Y+1.
First, set main up to be the next release, i.e. X.Y+1.a0:
$ git checkout main
$ .../release-tools/release.py --bump 3.9.0a0
Edit all version references in README.rst
Move any historical “what’s new” entries from Misc/NEWS to
Misc/HISTORY.
Edit Doc/tutorial/interpreter.rst (2 references to ‘[Pp]ython3x’,
one to ‘Python 3.x’, also make the date in the banner consistent).
Edit Doc/tutorial/stdlib.rst and Doc/tutorial/stdlib2.rst, which
have each one reference to ‘[Pp]ython3x’.
Add a new whatsnew/3.x.rst file (with the comment near the top
and the toplevel sections copied from the previous file) and
add it to the toctree in whatsnew/index.rst. But beware that
the initial whatsnew/3.x.rst checkin from previous releases
may be incorrect due to the initial midstream change to blurb
that propagates from release to release! Help break the cycle: if
necessary make the following change:
- For full details, see the :source:`Misc/NEWS` file.
+ For full details, see the :ref:`changelog <changelog>`.
Update the version number in configure.ac and re-run autoconf.
Make sure the SOURCE_URI in Doc/tools/extensions/pyspecific.py
points to main.
Update the version numbers for the Windows builds in PC/ and
PCbuild/, which have references to python38.
NOTE, check with Steve Dower about this step, it is probably obsolete:
$ find PC/ PCbuild/ -type f | xargs sed -i 's/python38/python39/g'
$ git mv -f PC/os2emx/python38.def PC/os2emx/python39.def
$ git mv -f PC/python38stub.def PC/python39stub.def
$ git mv -f PC/python38gen.py PC/python39gen.py
Commit these changes to the main branch:
$ git status
$ git add ...
$ git commit -m 'Bump to 3.9.0a0'
Do another git status in this directory.
You should not see any files. I.e. you better not have any uncommitted
changes in your working directory.
Commit and push to the main repo:
# Do a dry run first.
# For feature pre-releases prior to a **new branch** release,
# i.e. a feature alpha release:
$ git push --dry-run --tags [email protected]:python/cpython.git main
# If it looks OK, take the plunge. There's no going back!
$ git push --tags [email protected]:python/cpython.git main
# For a **new branch** release, i.e. first beta:
$ git push --dry-run --tags [email protected]:python/cpython.git 3.X
$ git push --dry-run --tags [email protected]:python/cpython.git main
# If it looks OK, take the plunge. There's no going back!
$ git push --tags [email protected]:python/cpython.git 3.X
$ git push --tags [email protected]:python/cpython.git main
# For all other releases:
$ git push --dry-run --tags [email protected]:python/cpython.git 3.X
# If it looks OK, take the plunge. There's no going back!
$ git push --tags [email protected]:python/cpython.git 3.X
If this is a new branch release, add a Branch protection rule
for the newly created branch (3.X). Look at the values for the previous
release branch (3.X-1) and use them as a template.
https://github.com/python/cpython/settings/branches/
Also, add a needs backport to 3.X label to the GitHub repo.
https://github.com/python/cpython/labels
You can now re-enable enforcement of branch settings against administrators
on GitHub. Go back to the Settings | Branches page:
https://github.com/python/cpython/settings/branches/
“Edit” the settings for the branch you’re releasing on.
Re-check the “Include administrators” box and press the
“Save changes” button at the bottom.
Now it’s time to twiddle the web site. Almost none of this is automated, sorry.
To do these steps, you must have the permission to edit the website. If you
don’t have that, ask someone on [email protected] for the proper
permissions. (Or ask Ewa, who coordinated the effort for the new website
with RevSys.)
Log in to https://www.python.org/admin.
Create a new “release” for the release. Currently “Releases” are
sorted under “Downloads”.
The easiest thing is probably to copy fields from an existing
Python release “page”, editing as you go.
You can use Markdown or
ReStructured Text
to describe your release. The former is less verbose, while the latter has nifty
integration for things like referencing PEPs.
Leave the “Release page” field on the form empty.
“Save” the release.
Populate the release with the downloadable files.
Your friend and mine, Georg Brandl, made a lovely tool
called “add-to-pydotorg.py”. You can find it in the
“release” tree (next to “release.py”). You run the
tool on downloads.nyc1.psf.io, like this:
$ AUTH_INFO=<username>:<python.org-api-key> python add-to-pydotorg.py <version>
This walks the correct download directory for <version>,
looks for files marked with <version>, and populates
the “Release Files” for the correct “release” on the web
site with these files. Note that it clears the “Release Files”
for the relevant version each time it’s run. You may run
it from any directory you like, and you can run it as
many times as you like if the files happen to change.
Keep a copy in your home directory on dl-files and
keep it fresh.
If new types of files are added to the release, someone will need to
update add-to-pydotorg.py so it recognizes these new files.
(It’s best to update add-to-pydotorg.py when file types
are removed, too.)
The script will also sign any remaining files that were not
signed with Sigstore until this point. Again, if this happens,
do use your @python.org address for this process. More info:
https://www.python.org/download/sigstore/
In case the CDN already cached a version of the Downloads page
without the files present, you can invalidate the cache using:
$ curl -X PURGE https://www.python.org/downloads/release/python-XXX/
If this is a final release:
Add the new version to the Python Documentation by Version
page https://www.python.org/doc/versions/ and
remove the current version from any ‘in development’ section.
For 3.X.Y, edit all the previous X.Y releases’ page(s) to
point to the new release. This includes the content field of the
Downloads -> Releases entry for the release:
Note: Python 3.x.(y-1) has been superseded by
`Python 3.x.y </downloads/release/python-3xy/>`_.
And, for those releases having separate release page entries
(phasing these out?), update those pages as well,
e.g. download/releases/3.x.y:
Note: Python 3.x.(y-1) has been superseded by
`Python 3.x.y </download/releases/3.x.y/>`_.
Update the “Current Pre-release Testing Versions” web page.
There’s a page that lists all the currently-in-testing versions
of Python:
https://www.python.org/download/pre-releases/
Every time you make a release, one way or another you’ll
have to update this page:
If you’re releasing a version before 3.x.0,
you should add it to this page, removing the previous pre-release
of version 3.x as needed.
If you’re releasing 3.x.0 final, you need to remove the pre-release
version from this page.
This is in the “Pages” category on the Django-based website, and finding
it through that UI is kind of a chore. However! If you’re already logged
in to the admin interface (which, at this point, you should be), Django
will helpfully add a convenient “Edit this page” link to the top of the
page itself. So you can simply follow the link above, click on the
“Edit this page” link, and make your changes as needed. How convenient!
If appropriate, update the “Python Documentation by Version” page:
https://www.python.org/doc/versions/
This lists all releases of Python by version number and links to their
static (not built daily) online documentation. There’s a list at the
bottom of in-development versions, which is where all alphas/betas/RCs
should go. And yes you should be able to click on the link above then
press the shiny, exciting “Edit this page” button.
Write the announcement on https://discuss.python.org/. This is the
fuzzy bit because not much can be automated. You can use an earlier
announcement as a template, but edit it for content!
Once the announcement is up on Discourse, send an equivalent to the
following mailing lists:
[email protected]
[email protected]
[email protected]
Also post the announcement to
The Python Insider blog.
To add a new entry, go to
your Blogger home page.
Update any release PEPs (e.g. 719) with the release dates.
Update the labels on https://github.com/python/cpython/issues:
Flip all the deferred-blocker issues back to release-blocker
for the next release.
Add version 3.X+1 when version 3.X enters alpha.
Change non-doc feature requests to version 3.X+1 when version 3.X
enters beta.
Update issues from versions that your release makes
unsupported to the next supported version.
Review open issues, as this might find lurking showstopper bugs,
besides reminding people to fix the easy ones they forgot about.
You can delete the remote release clone branch from your repo clone.
If this is a new branch release, you will need to ensure various
pieces of the development infrastructure are updated for the new branch.
These include:
Update the issue tracker for the new branch: add the new version to
the versions list.
Update the devguide to reflect the new branches and versions.
Create a PR to update the supported releases table on the
downloads page.
(See https://github.com/python/pythondotorg/issues/1302)
Ensure buildbots are defined for the new branch (contact Łukasz
or Zach Ware).
Ensure the various GitHub bots are updated, as needed, for the
new branch, in particular, make sure backporting to the new
branch works (contact core-workflow team)
https://github.com/python/core-workflow/issues
Review the most recent commit history for the main and new release
branches to identify and backport any merges that might have been made
to the main branch during the release engineering phase and that
should be in the release branch.
Verify that CI is working for new PRs for the main and new release
branches and that the release branch is properly protected (no direct
pushes, etc).
Verify that the on-line docs are building properly (this may take up to
24 hours for a complete build on the web site).
What Next?
Verify! Pretend you’re a user: download the files from python.org, and
make Python from it. This step is too easy to overlook, and on several
occasions we’ve had useless release files. Once a general server problem
caused mysterious corruption of all files; once the source tarball got
built incorrectly; more than once the file upload process on SF truncated
files; and so on.
Rejoice. Drink. Be Merry. Write a PEP like this one. Or be
like unto Guido and take A Vacation.
You’ve just made a Python release!
Moving to End-of-life
Under current policy, a release branch normally reaches end-of-life status
5 years after its initial release. The policy is discussed in more detail
in the Python Developer’s Guide.
When end-of-life is reached, there are a number of tasks that need to be
performed either directly by you as release manager or by ensuring someone
else does them. Some of those tasks include:
Optionally making a final release to publish any remaining unreleased
changes.
Freeze the state of the release branch by creating a tag of its current HEAD
and then deleting the branch from the cpython repo. The current HEAD should
be at or beyond the final security release for the branch:
git fetch upstream
git tag --sign -m 'Final head of the former 3.3 branch' 3.3 upstream/3.3
git push upstream refs/tags/3.3
If all looks good, delete the branch. This may require the assistance of
someone with repo administrator privileges:
git push upstream --delete 3.3   # or perform from GitHub Settings page
Remove the release from the list of “Active Python Releases” on the Downloads
page. To do this, log in to the admin page for python.org, navigate to Boxes,
and edit the downloads-active-releases entry. Simply strip out the relevant
paragraph of HTML for your release. (You’ll probably have to do the curl -X PURGE
trick to purge the cache if you want to confirm you made the change correctly.)
Add retired notice to each release page on python.org for the retired branch.
For example:
https://www.python.org/downloads/release/python-337/
https://www.python.org/downloads/release/python-336/
In the developer’s guide, add the branch to the recent end-of-life branches
list (https://devguide.python.org/devcycle/#end-of-life-branches) and update
or remove references to the branch elsewhere in the devguide.
Retire the release from the issue tracker. Tasks include:
remove version label from list of versions
remove the “needs backport to” label for the retired version
review and dispose of open issues marked for this branch
Announce the branch retirement in the usual places:
discuss.python.org
mailing lists (python-dev, python-list, python-announcements)
Python Dev blog
Enjoy your retirement and bask in the glow of a job well done!
Windows Notes
Windows has a MSI installer, various flavors of Windows have
“special limitations”, and the Windows installer also packs
precompiled “foreign” binaries (Tcl/Tk, expat, etc).
The installer is tested as part of the Azure Pipeline. In the past,
those steps were performed manually. We’re keeping this for posterity.
Concurrent with uploading the installer, the WE installs Python
from it twice: once into the default directory suggested by the
installer, and later into a directory with embedded spaces in its
name. For each installation, the WE runs the full regression suite
from a DOS box, both with and without -O. For maintenance
releases, the WE also tests whether upgrade installations succeed.
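On a modern checkout the equivalent of that manual check is roughly the
following (a sketch only; the scripted Azure pipeline does the real testing):
> python -m test
> python -O -m test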
The WE also tries every shortcut created under Start -> Menu -> the
Python group. When trying IDLE this way, you need to verify that
Help -> Python Documentation works. When trying pydoc this way
(the “Module Docs” Start menu entry), make sure the “Start
Browser” button works, and make sure you can search for a random
module (like “random” <wink>) and then that the “go to selected”
button works.
It’s amazing how much can go wrong here – and even more amazing
how often last-second checkins break one of these things. If
you’re “the Windows geek”, keep in mind that you’re likely the
only person routinely testing on Windows, and that Windows is
simply a mess.
Repeat the testing for each target architecture. Try both an
Admin and a plain User (not Power User) account.
Copyright
This document has been placed in the public domain.
| Active | PEP 101 – Doing Python Releases 101 | Informational | Making a Python release is a thrilling and crazy process. You’ve heard
the expression “herding cats”? Imagine trying to also saddle those
purring little creatures up, and ride them into town, with some of their
buddies firmly attached to your bare back, anchored by newly sharpened
claws. At least they’re cute, you remind yourself. |
PEP 102 – Doing Python Micro Releases
Author:
Anthony Baxter <anthony at interlink.com.au>,
Barry Warsaw <barry at python.org>,
Guido van Rossum <guido at python.org>
Status:
Superseded
Type:
Informational
Created:
09-Jan-2002
Post-History:
Superseded-By:
101
Table of Contents
Replacement Note
Abstract
How to Make A Release
What Next?
Final Release Notes
Windows Notes
Copyright
Replacement Note
Although the size of the to-do list in this PEP is much less scary
than that in PEP 101, it turns out not to be enough justification
for the duplication of information, and with it, the danger of one
of the copies to become out of date. Therefore, this PEP is not
maintained anymore, and micro releases are fully covered by PEP 101.
Abstract
Making a Python release is an arduous process that takes a
minimum of half a day’s work even for an experienced releaser.
Until recently, most – if not all – of that burden was borne by
Guido himself. But several recent releases have been performed by
other folks, so this PEP attempts to collect, in one place, all
the steps needed to make a Python bugfix release.
The major Python release process is covered in PEP 101 - this PEP
is just PEP 101, trimmed down to only include the bits that are
relevant for micro releases, a.k.a. patch, or bug fix releases.
It is organized as a recipe and you can actually print this out and
check items off as you complete them.
How to Make A Release
Here are the steps taken to make a Python release. Some steps are
more fuzzy than others because there’s little that can be
automated (e.g. writing the NEWS entries). Where a step is
usually performed by An Expert, the name of that expert is given.
Otherwise, assume the step is done by the Release Manager (RM),
the designated person performing the release. Almost every place
the RM is mentioned below, this step can also be done by the BDFL
of course!
XXX: We should include a dependency graph to illustrate the steps
that can be taken in parallel, or those that depend on other
steps.
We use the following conventions in the examples below. Where a
release number is given, it is of the form X.Y.MaA, e.g. 2.1.2c1
for Python 2.1.2 release candidate 1, where “a” == alpha, “b” ==
beta, “c” == release candidate. Final releases are tagged with
“releaseXYZ” in CVS. The micro releases are made from the
maintenance branch of the major release, e.g. Python 2.1.2 is made
from the release21-maint branch.
Send an email to [email protected] indicating the release is
about to start.
Put a freeze on check ins into the maintenance branch. At this
point, nobody except the RM should make any commits to the branch
(or his duly assigned agents, i.e. Guido the BDFL, Fred Drake for
documentation, or Thomas Heller for Windows). If the RM screwed up
and some desperate last minute change to the branch is
necessary, it can mean extra work for Fred and Thomas. So try to
avoid this!
On the branch, change Include/patchlevel.h in two places, to
reflect the new version number you’ve just created. You’ll want
to change the PY_VERSION macro, and one or several of the
version subpart macros just above PY_VERSION, as appropriate.
Change the “%define version” line of Misc/RPM/python-2.3.spec to the
same string as PY_VERSION was changed to above. E.g.:
%define version 2.3.1
You also probably want to reset the %define release line
to ‘1pydotorg’ if it’s not already that.
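After those edits, the relevant lines of Misc/RPM/python-2.3.spec would read,
for example:
%define version 2.3.1
%define release 1pydotorg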
If you’re changing the version number for Python (e.g. from
Python 2.1.1 to Python 2.1.2), you also need to update the
README file, which has a big banner at the top proclaiming its
identity. Don’t do this if you’re just releasing a new alpha or
beta release, but /do/ do this if you’re releasing a new micro,
minor or major release.
The LICENSE file also needs to be changed, due to several
references to the release number. As for the README file, changing
these are necessary for a new micro, minor or major release.
The LICENSE file contains a table that describes the legal
heritage of Python; you should add an entry for the X.Y.Z
release you are now making. You should update this table in the
LICENSE file on the CVS trunk too.
When the year changes, copyright legends need to be updated in
many places, including the README and LICENSE files.
For the Windows build, additional files have to be updated.
PCbuild/BUILDno.txt contains the Windows build number, see the
instructions in this file how to change it. Saving the project
file PCbuild/pythoncore.dsp results in a change to
PCbuild/pythoncore.dsp as well.
PCbuild/python20.wse sets up the Windows installer version
resource (displayed when you right-click on the installer .exe
and select Properties), and also contains the Python version
number.
(Before version 2.3.2, it was required to manually edit
PC/python_nt.rc, this step is now automated by the build
process.)
After starting the process, the most important thing to do next
is to update the Misc/NEWS file. Thomas will need this in order to
do the Windows release and he likes to stay up late. This step
can be pretty tedious, so it’s best to get to it immediately
after making the branch, or even before you’ve made the branch.
The sooner the better (but again, watch for new checkins up
until the release is made!)
Add high level items new to this release. E.g. if we’re
releasing 2.2a3, there must be a section at the top of the file
explaining “What’s new in Python 2.2a3”. It will be followed by
a section entitled “What’s new in Python 2.2a2”.
Note that you /hope/ that as developers add new features to the
trunk, they’ve updated the NEWS file accordingly. You can’t be
positive, so double check. If you’re a Unix weenie, it helps to
verify with Thomas about changes on Windows, and Jack Jansen
about changes on the Mac.
This command should help you (but substitute the correct -r tag!):
% cvs log -rr22a1: | python Tools/scripts/logmerge.py > /tmp/news.txt
IOW, you’re printing out all the cvs log entries from the
previous release until now. You can then troll through the
news.txt file looking for interesting things to add to NEWS.
Check your NEWS changes into the maintenance branch. It’s easy
to forget to update the release date in this file!
Check in any changes to IDLE’s NEWS.txt. Update the header in
Lib/idlelib/NEWS.txt to reflect its release version and date.
Update the IDLE version in Lib/idlelib/idlever.py to match.
Once the release process has started, the documentation needs to
be built and posted on python.org according to the instructions
in PEP 101.Note that Fred is responsible both for merging doc changes from
the trunk to the branch AND for merging any branch changes from
the branch to the trunk during the cleaning up phase.
Basically, if it’s in Doc/ Fred will take care of it.
Thomas compiles everything with MSVC 6.0 SP5, and moves the
python23.chm file into the src/chm directory. The installer
executable is then generated with Wise Installation System.
The installer includes the MSVC 6.0 runtime in the files
MSVCRT.DLL and MSVCIRT.DLL. It leads to disaster if these files
are taken from the system directory of the machine where the
installer is built, instead it must be absolutely made sure that
these files come from the VCREDIST.EXE redistributable package
contained in the MSVC SP5 CD. VCREDIST.EXE must be unpacked
with winzip, and the Wise Installation System prompts for the
directory.
After building the installer, it should be opened with winzip,
and the MS dlls extracted again and check for the same version
number as those unpacked from VCREDIST.EXE.
Thomas uploads this file to the starship. He then sends the RM
a notice which includes the location and MD5 checksum of the
Windows executable.
Note that Thomas’s creation of the Windows executable may generate
a few more commits on the branch. Thomas will be responsible for
merging Windows-specific changes from trunk to branch, and from
branch to trunk.
Sean performs his Red Hat magic, generating a set of RPMs. He
uploads these files to python.org. He then sends the RM a notice
which includes the location and MD5 checksum of the RPMs.
It’s Build Time!
Now, you’re ready to build the source tarball. First cd to your
working directory for the branch. E.g.
% cd …/python-22a3
Do a “cvs update” in this directory. Do NOT include the -A flag!
You should not see any “M” files, but you may see several “P”
and/or “U” files. I.e. you better not have any uncommitted
changes in your working directory, but you may pick up some of
Fred’s or Thomas’s last minute changes.
Now tag the branch using a symbolic name like “rXYMaZ”,
e.g. r212:
% cvs tag r212
Be sure to tag only the python/dist/src subdirectory of the
Python CVS tree!
Change to a neutral directory, i.e. one in which you can do a
fresh, virgin, cvs export of the branch. You will be creating a
new directory at this location, to be named “Python-X.Y.M”. Do
a CVS export of the tagged branch.
% cd ~
% cvs -d cvs.sf.net:/cvsroot/python export -rr212 \
-d Python-2.1.2 python/dist/src
Generate the tarball. Note that we’re not using the ‘z’ option
on the tar command because 1) that’s only supported by GNU tar
as far as we know, and 2) we’re going to max out the compression
level, which isn’t a supported option. We generate both tar.gz and
tar.bz2 formats, as the latter is about 1/6th smaller.
% tar -cf - Python-2.1.2 | gzip -9 > Python-2.1.2.tgz
% tar -cf - Python-2.1.2 | bzip2 -9 > Python-2.1.2.tar.bz2
Calculate the MD5 checksum of the tgz and tar.bz2 files you
just created:
% md5sum Python-2.1.2.tgz
Note that if you don’t have the md5sum program, there is a
Python replacement in the Tools/scripts/md5sum.py file.
Create GPG signatures for each of the files.
% gpg -ba Python-2.1.2.tgz
% gpg -ba Python-2.1.2.tar.bz2
% gpg -ba Python-2.1.2.exe
Now you want to perform the very important step of checking the
tarball you just created, to make sure a completely clean,
virgin build passes the regression test. Here are the best
steps to take:
% cd /tmp
% tar zxvf ~/Python-2.1.2.tgz
% cd Python-2.1.2
% ls
(Do things look reasonable?)
% ./configure
(Loads of configure output)
% make test
(Do all the expected tests pass?)
If the tests pass, then you can feel good that the tarball is
fine. If some of the tests fail, or anything else about the
freshly unpacked directory looks weird, you better stop now and
figure out what the problem is.
You need to upload the tgz and the exe file to creosote.python.org.
This step can take a long time depending on your network
bandwidth. scp both files from your own machine to creosote.
While you’re waiting, you can start twiddling the web pages to
include the announcement.
In the top of the python.org web site CVS tree, create a
subdirectory for the X.Y.Z release. You can actually copy an
earlier patch release’s subdirectory, but be sure to delete
the X.Y.Z/CVS directory and “cvs add X.Y.Z”, for example:
% cd .../pydotorg
% cp -r 2.2.2 2.2.3
% rm -rf 2.2.3/CVS
% cvs add 2.2.3
% cd 2.2.3
Edit the files for content: usually you can globally replace
X.Ya(Z-1) with X.YaZ. However, you’ll need to think about the
“What’s New?” section.
Copy the Misc/NEWS file to NEWS.txt in the X.Y.Z directory for
python.org; this contains the “full scoop” of changes to
Python since the previous release for this version of Python.
Copy the .asc GPG signatures you created earlier here as well.
Also, update the MD5 checksums.
Preview the web page by doing a “make” or “make install” (as
long as you’ve created a new directory for this release!)
Similarly, edit the ../index.ht file, i.e. the python.org home
page. In the Big Blue Announcement Block, move the paragraph
for the new version up to the top and boldify the phrase
“Python X.YaZ is out”. Edit for content, and preview locally,
but do NOT do a “make install” yet!
Now we’re waiting for the scp to creosote to finish. Da de da,
da de dum, hmm, hmm, dum de dum.
Once that’s done you need to go to creosote.python.org and move
all the files in place over there. Our policy is that every
Python version gets its own directory, but each directory may
contain several releases. We keep all old releases, moving them
into a “prev” subdirectory when we have a new release.
So, there’s a directory called “2.2” which contains
Python-2.2a2.exe and Python-2.2a2.tgz, along with a “prev”
subdirectory containing Python-2.2a1.exe and Python-2.2a1.tgz.
So…
On creosote, cd to ~ftp/pub/python/X.Y creating it if
necessary.
Move the previous release files to a directory called “prev”
creating the directory if necessary (make sure the directory
has g+ws bits on). If this is the first alpha release of a
new Python version, skip this step.
Move the .tgz file and the .exe file to this directory. Make
sure they are world readable. They should also be group
writable, and group-owned by webmaster.
md5sum the files and make sure they got uploaded intact.
Update the X.Y/bugs.ht file if necessary. It is best to get
BDFL input for this step.
Go up to the parent directory (i.e. the root of the web page
hierarchy) and do a “make install” there. Your release is now
live!
Now it’s time to write the announcement for the mailing lists.
This is the fuzzy bit because not much can be automated. You
can use one of Guido’s earlier announcements as a template, but
please edit it for content!
Once the announcement is ready, send it to the following
addresses:
[email protected]
[email protected]
[email protected]
Send a SourceForge News Item about the release. From the
project’s “menu bar”, select the “News” link; once in News,
select the “Submit” link. Type a suitable subject (e.g. “Python
2.2c1 released” :-) in the Subject box, add some text to the
Details box (at the very least including the release URL at
www.python.org and the fact that you’re happy with the release)
and click the SUBMIT button.
Feel free to remove any old news items.
Now it’s time to do some cleanup. These steps are very important!
Edit the file Include/patchlevel.h so that the PY_VERSION
string says something like “X.YaZ+”. Note the trailing ‘+’
indicating that the trunk is going to be moving forward with
development. E.g. the line should look like:
#define PY_VERSION "2.1.2+"
Make sure that the other PY_ version macros contain the
correct values. Commit this change.
For the extra paranoid, do a completely clean test of the
release. This includes downloading the tarball from
www.python.org.
Make sure the md5 checksums match. Then unpack the tarball,
and do a clean make test.
% make distclean
% ./configure
% make test
This is to ensure that the regression test suite passes. If not, you
screwed up somewhere!
Step 5 …
Verify! This can be interleaved with Step 4. Pretend you’re a
user: download the files from python.org, and make Python from
it. This step is too easy to overlook, and on several occasions
we’ve had useless release files. Once a general server problem
caused mysterious corruption of all files; once the source tarball
got built incorrectly; more than once the file upload process on
SF truncated files; and so on.
What Next?
Rejoice. Drink. Be Merry. Write a PEP like this one. Or be
like unto Guido and take A Vacation.
You’ve just made a Python release!
Actually, there is one more step. You should turn over ownership
of the branch to Jack Jansen. All this means is that now he will
be responsible for making commits to the branch. He’s going to
use this to build the MacOS versions. He may send you information
about the Mac release that should be merged into the informational
pages on www.python.org. When he’s done, he’ll tag the branch
something like “rX.YaZ-mac”. He’ll also be responsible for
merging any Mac-related changes back into the trunk.
Final Release Notes
The Final release of any major release, e.g. Python 2.2 final, has
special requirements, specifically because it will be one of the
longest lived releases (i.e. betas don’t last more than a couple
of weeks, but final releases can last for years!).
For this reason we want to have a higher coordination between the
three major releases: Windows, Mac, and source. The Windows and
source releases benefit from the close proximity of the respective
release-bots. But the Mac-bot, Jack Jansen, is 6 hours away. So
we add this extra step to the release process for a final
release:
Hold up the final release until Jack approves, or until we
lose patience <wink>.
The python.org site also needs some tweaking when a new bugfix release
is issued.
The documentation should be installed at doc/<version>/.
Add a link from doc/<previous-minor-release>/index.ht to the
documentation for the new version.
All older doc/<old-release>/index.ht files should be updated to
point to the documentation for the new version.
/robots.txt should be modified to prevent the old version’s
documentation from being crawled by search engines.
Windows Notes
Windows has a GUI installer, various flavors of Windows have
“special limitations”, and the Windows installer also packs
precompiled “foreign” binaries (Tcl/Tk, expat, etc). So Windows
testing is tiresome but very necessary.
Concurrent with uploading the installer, Thomas installs Python
from it twice: once into the default directory suggested by the
installer, and later into a directory with embedded spaces in its
name. For each installation, he runs the full regression suite
from a DOS box, both with and without -O.
He also tries every shortcut created under Start -> Menu -> the
Python group. When trying IDLE this way, you need to verify that
Help -> Python Documentation works. When trying pydoc this way
(the “Module Docs” Start menu entry), make sure the “Start
Browser” button works, and make sure you can search for a random
module (Thomas uses “random” <wink>) and then that the “go to
selected” button works.
It’s amazing how much can go wrong here – and even more amazing
how often last-second checkins break one of these things. If
you’re “the Windows geek”, keep in mind that you’re likely the
only person routinely testing on Windows, and that Windows is
simply a mess.
Repeat all of the above on at least one flavor of Win9x, and one
of NT/2000/XP. On NT/2000/XP, try both an Admin and a plain User
(not Power User) account.
WRT Step 5 above (verify the release media), since by the time
release files are ready to download Thomas has generally run many
Windows tests on the installer he uploaded, he usually doesn’t do
anything for Step 5 except a full byte-comparison (“fc /b” if
using a Windows shell) of the downloaded file against the file he
uploaded.
Copyright
This document has been placed in the public domain.
| Superseded | PEP 102 – Doing Python Micro Releases | Informational | Making a Python release is an arduous process that takes a
minimum of half a day’s work even for an experienced releaser.
Until recently, most – if not all – of that burden was borne by
Guido himself. But several recent releases have been performed by
other folks, so this PEP attempts to collect, in one place, all
the steps needed to make a Python bugfix release. |
PEP 103 – Collecting information about git
Author:
Oleg Broytman <phd at phdru.name>
Status:
Withdrawn
Type:
Informational
Created:
01-Jun-2015
Post-History:
12-Sep-2015
Table of Contents
Withdrawal
Abstract
Documentation
Documentation for starters
Advanced documentation
Offline documentation
Quick start
Download and installation
Initial configuration
Examples in this PEP
Branches and branches
Remote repositories and remote branches
Updating local and remote-tracking branches
Fetch and pull
Push
Tags
Private information
Commit editing and caveats
Undo
git checkout: restore file’s content
git reset: remove (non-pushed) commits
Unstaging
git reflog: reference log
git revert: revert a commit
One thing that cannot be undone
Merge or rebase?
Null-merges
Branching models
Advanced configuration
Line endings
Useful assets
Advanced topics
Staging area
Root
ReReRe
Database maintenance
Tips and tricks
Command-line options and arguments
bash/zsh completion
bash/zsh prompt
SSH connection sharing
git on server
From Mercurial to git
Git and GitHub
Copyright
Withdrawal
This PEP was withdrawn as it’s too generic and doesn’t really deal
with Python development. It is no longer updated.
The content was moved to Python Wiki. Make further updates in the
wiki.
Abstract
This Informational PEP collects information about git. There is, of
course, a lot of documentation for git, so the PEP concentrates on
more complex (and more related to Python development) issues,
scenarios and examples.
The plan is to extend the PEP in the future collecting information
about equivalence of Mercurial and git scenarios to help migrating
Python development from Mercurial to git.
The author of the PEP doesn’t currently plan to write a Process PEP on
migrating Python development from Mercurial to git.
Documentation
Git is accompanied with a lot of documentation, both online and
offline.
Documentation for starters
Git Tutorial: part 1,
part 2.
Git User’s manual.
Everyday GIT With 20 Commands Or So.
Git workflows.
Advanced documentation
Git Magic,
with a number of translations.
Pro Git. The Book about git. Buy it at
Amazon or download in PDF, mobi, or ePub form. It has translations to
many different languages. Download Russian translation from GArik.
Git Wiki.
Git Buch (German).
Offline documentation
Git has builtin help: run git help $TOPIC. For example, run
git help git or git help help.
Quick start
Download and installation
Unix users: download and install using your package manager.
Microsoft Windows: download git-for-windows.
MacOS X: use git installed with XCode or download from MacPorts or
git-osx-installer or
install git with Homebrew: brew install git.
git-cola (repository) is a Git GUI written in
Python and GPL licensed. Linux, Windows, MacOS X.
TortoiseGit is a Windows Shell Interface
to Git based on TortoiseSVN; open source.
Initial configuration
This simple code often appears in documentation, but it is
important, so let’s repeat it here. Git stores author and committer
names/emails in every commit, so configure your real name and
preferred email:
$ git config --global user.name "User Name"
$ git config --global user.email [email protected]
Examples in this PEP
Examples of git commands in this PEP use the following approach. It is
supposed that you, the user, work with a local repository named
python that has an upstream remote repo named origin. Your
local repo has two branches v1 and master. For most examples
the currently checked out branch is master. That is, it’s assumed
you have done something like this:
$ git clone https://git.python.org/python.git
$ cd python
$ git branch v1 origin/v1
The first command clones remote repository into local directory
python, creates a new local branch master, sets
remotes/origin/master as its upstream remote-tracking branch and
checks it out into the working directory.
The last command creates a new local branch v1 and sets
remotes/origin/v1 as its upstream remote-tracking branch.
The same result can be achieved with commands:
$ git clone -b v1 https://git.python.org/python.git
$ cd python
$ git checkout --track origin/master
The last command creates a new local branch master, sets
remotes/origin/master as its upstream remote-tracking branch and
checks it out into the working directory.
Branches and branches
Git terminology can be a bit misleading. Take, for example, the term
“branch”. In git it has two meanings. A branch is a directed line of
commits (possibly with merges). And a branch is a label or a pointer
assigned to a line of commits. It is important to distinguish when you
talk about commits and when about their labels. Lines of commits are
themselves unnamed and usually only lengthen and merge.
Labels, on the other hand, can be created, moved, renamed and deleted
freely.
Remote repositories and remote branches
Remote-tracking branches are branches (pointers to commits) in your
local repository. They are there for git (and for you) to remember
what branches and commits have been pulled from and pushed to what
remote repos (you can pull from and push to many remotes).
Remote-tracking branches live under remotes/$REMOTE namespaces,
e.g. remotes/origin/master.
To see the status of remote-tracking branches run:
$ git branch -rv
To see local and remote-tracking branches (and tags) pointing to
commits:
$ git log --decorate
You never do your own development on remote-tracking branches. You
create a local branch that has a remote branch as upstream and do
development on that local branch. On push git pushes commits to the
remote repo and updates remote-tracking branches, on pull git fetches
commits from the remote repo, updates remote-tracking branches and
fast-forwards, merges or rebases local branches.
When you do an initial clone like this:
$ git clone -b v1 https://git.python.org/python.git
git clones remote repository https://git.python.org/python.git to
directory python, creates a remote named origin, creates
remote-tracking branches, creates a local branch v1, configures it
to track the upstream remotes/origin/v1 branch and checks out v1 into
the working directory.
Some commands, like git status --branch and git branch --verbose,
report the difference between local and remote branches.
Please remember they only do comparison with remote-tracking branches
in your local repository, and the state of those remote-tracking
branches can be outdated. To update remote-tracking branches you
either fetch and merge (or rebase) commits from the remote repository
or update remote-tracking branches without updating local branches.
Updating local and remote-tracking branches
To update remote-tracking branches without updating local branches run
git remote update [$REMOTE...]. For example:
$ git remote update
$ git remote update origin
Fetch and pull
There is a major difference between
$ git fetch $REMOTE $BRANCH
and
$ git fetch $REMOTE $BRANCH:$BRANCH
The first command fetches commits from the named $BRANCH in the
$REMOTE repository that are not in your repository, updates
remote-tracking branch and leaves the id (the hash) of the head commit
in file .git/FETCH_HEAD.
The second command fetches commits from the named $BRANCH in the
$REMOTE repository that are not in your repository and updates both
the local branch $BRANCH and its upstream remote-tracking branch. But
it refuses to update branches in case of non-fast-forward. And it
refuses to update the current branch (currently checked out branch,
where HEAD is pointing to).
The first command is used internally by git pull.
$ git pull $REMOTE $BRANCH
is equivalent to
$ git fetch $REMOTE $BRANCH
$ git merge FETCH_HEAD
Certainly, $BRANCH in that case should be your current branch. If you
want to merge a different branch into your current branch first update
that non-current branch and then merge:
$ git fetch origin v1:v1 # Update v1
$ git pull --rebase origin master # Update the current branch master
# using rebase instead of merge
$ git merge v1
If you have not yet pushed commits on v1, though, the scenario has
to become a bit more complex. Git refuses to update
non-fast-forwardable branch, and you don’t want to do force-pull
because that would remove your non-pushed commits and you would need
to recover. So you want to rebase v1 but you cannot rebase
non-current branch. Hence, checkout v1 and rebase it before
merging:
$ git checkout v1
$ git pull --rebase origin v1
$ git checkout master
$ git pull --rebase origin master
$ git merge v1
It is possible to configure git to make it fetch/pull a few branches
or all branches at once, so you can simply run
$ git pull origin
or even
$ git pull
Default remote repository for fetching/pulling is origin. Default
set of references to fetch is calculated using matching algorithm: git
fetches all branches having the same name on both ends.
Push
Pushing is a bit simpler. There is only one command push. When you
run
$ git push origin v1 master
git pushes local v1 to remote v1 and local master to remote master.
The same as:
$ git push origin v1:v1 master:master
Git pushes commits to the remote repo and updates remote-tracking
branches. Git refuses to push commits that aren’t fast-forwardable.
You can force-push anyway, but please remember - you can force-push to
your own repositories but don’t force-push to public or shared repos.
If you find git refuses to push commits that aren’t fast-forwardable,
better fetch and merge commits from the remote repo (or rebase your
commits on top of the fetched commits), then push. Only force-push if
you know what you do and why you do it. See the section Commit
editing and caveats below.
It is possible to configure git to make it push a few branches or all
branches at once, so you can simply run
$ git push origin
or even
$ git push
Default remote repository for pushing is origin. Default set of
references to push in git before 2.0 is calculated using matching
algorithm: git pushes all branches having the same name on both ends.
Default set of references to push in git 2.0+ is calculated using
simple algorithm: git pushes the current branch back to its
@{upstream}.
To configure git before 2.0 to the new behaviour run:
$ git config push.default simple
To configure git 2.0+ to the old behaviour run:
$ git config push.default matching
Git doesn’t allow pushing a branch if it’s the current branch in the
remote non-bare repository: git refuses to update remote working
directory. You really should push only to bare repositories. For
non-bare repositories git prefers pull-based workflow.
When you want to deploy code on a remote host and can only use push
(because your workstation is behind a firewall and you cannot pull
from it) you do that in two steps using two repositories: you push
from the workstation to a bare repo on the remote host, ssh to the
remote host and pull from the bare repo to a non-bare deployment repo.
That changed in git 2.3, but see the blog post
for caveats; in 2.4 the push-to-deploy feature was further improved.
Tags
Git automatically fetches tags that point to commits being fetched
during fetch/pull. To fetch all tags (and commits they point to) run
git fetch --tags origin. To fetch some specific tags fetch them
explicitly:
$ git fetch origin tag $TAG1 tag $TAG2...
For example:
$ git fetch origin tag 1.4.2
$ git fetch origin v1:v1 tag 2.1.7
Git doesn’t automatically push tags. That allows you to have private
tags. To push tags list them explicitly:
$ git push origin tag 1.4.2
$ git push origin v1 master tag 2.1.7
Or push all tags at once:
$ git push --tags origin
Don’t move tags with git tag -f or remove tags with git tag -d
after they have been published.
Private information
When cloning/fetching/pulling/pushing git copies only database objects
(commits, trees, files and tags) and symbolic references (branches and
lightweight tags). Everything else is private to the repository and
never cloned, updated or pushed. It’s your config, your hooks, your
private exclude file.
If you want to distribute hooks, copy them to the working tree, add,
commit, push and instruct the team to update and install the hooks
manually.
Commit editing and caveats
A warning not to edit published (pushed) commits also appears in
documentation but it’s repeated here anyway as it’s very important.
It is possible to recover from a forced push but it’s a PITA for the
entire team. Please avoid it.
To see what commits have not been published yet compare the head of the
branch with its upstream remote-tracking branch:
$ git log origin/master.. # from origin/master to HEAD (of master)
$ git log origin/v1..v1 # from origin/v1 to the head of v1
For every branch that has an upstream remote-tracking branch git
maintains an alias @{upstream} (short version @{u}), so the commands
above can be given as:
$ git log @{u}..
$ git log v1@{u}..v1
To see the status of all branches:
$ git branch -avv
To compare the status of local branches with a remote repo:
$ git remote show origin
Read how to recover from upstream rebase.
It is in git help rebase.
On the other hand, don’t be too afraid about commit editing. You can
safely edit, reorder, remove, combine and split commits that haven’t
been pushed yet. You can even push commits to your own (backup) repo,
edit them later and force-push edited commits to replace what has
already been pushed. Not a problem until commits are in a public
or shared repository.
Undo
Whatever you do, don’t panic. Almost anything in git can be undone.
git checkout: restore file’s content
git checkout, for example, can be used to restore the content of
file(s) to that one of a commit. Like this:
git checkout HEAD~ README
The command restores the contents of the README file to that of the
last but one commit in the current branch. By default the commit ID is simply HEAD;
i.e. git checkout README restores README to the latest commit.
(Do not use git checkout to view the content of a file in a commit,
use git cat-file -p; e.g. git cat-file -p HEAD~:path/to/README).
git reset: remove (non-pushed) commits
git reset moves the head of the current branch. The head can be
moved to point to any commit but it’s often used to remove a commit or
a few (preferably, non-pushed ones) from the top of the branch - that
is, to move the branch backward in order to undo a few (non-pushed)
commits.
git reset has three modes of operation - soft, hard and mixed.
Default is mixed. ProGit explains the
difference very clearly. Bare repositories don’t have indices or
working trees so in a bare repo only soft reset is possible.
Unstaging
Mixed mode reset with a path or paths can be used to unstage changes -
that is, to remove from index changes added with git add for
committing. See The Book for details
about unstaging and other undo tricks.
git reflog: reference log
Removing commits with git reset or moving the head of a branch
sounds dangerous and it is. But there is a way to undo: another
reset back to the original commit. Git doesn’t remove commits
immediately; unreferenced commits (in git terminology they are called
“dangling commits”) stay in the database for some time (default is two
weeks) so you can reset back to it or create a new branch pointing to
the original commit.
For every move of a branch’s head - with git commit, git
checkout, git fetch, git pull, git rebase, git reset
and so on - git stores a reference log (reflog for short). For every
move git stores where the head was. Command git reflog can be used
to view (and manipulate) the log.
In addition to the moves of the head of every branch git stores the
moves of the HEAD - a symbolic reference that (usually) names the
current branch. HEAD is changed with git checkout $BRANCH.
By default git reflog shows the moves of the HEAD, i.e. the
command is equivalent to git reflog HEAD. To show the moves of the
head of a branch use the command git reflog $BRANCH.
So to undo a git reset lookup the original commit in git
reflog, verify it with git show or git log and run git
reset $COMMIT_ID. Git stores the move of the branch’s head in
reflog, so you can undo that undo later again.
In a more complex situation you’d want to move some commits along with
resetting the head of the branch. Cherry-pick them to the new branch.
For example, if you want to reset the branch master back to the
original commit but preserve two commits created in the current branch
do something like:
$ git branch save-master # create a new branch saving master
$ git reflog # find the original place of master
$ git reset $COMMIT_ID
$ git cherry-pick save-master~ save-master
$ git branch -D save-master # remove temporary branch
git revert: revert a commit
git revert reverts a commit or commits, that is, it creates a new
commit or commits that revert(s) the effects of the given commits.
It’s the only way to undo published commits (git commit --amend,
git rebase and git reset change the branch in
non-fast-forwardable ways so they should only be used for non-pushed
commits.)
There is a problem with reverting a merge commit. git revert can
undo the code created by the merge commit but it cannot undo the fact
of merge. See the discussion How to revert a faulty merge.
One thing that cannot be undone
Whatever you undo, there is one thing that cannot be undone -
overwritten uncommitted changes. Uncommitted changes don’t belong to
git so git cannot help preserving them.
Most of the time git warns you when you’re going to execute a command
that overwrites uncommitted changes. Git doesn’t allow you to switch
branches with git checkout. It stops you when you’re going to
rebase with non-clean working tree. It refuses to pull new commits
over non-committed files.
But there are commands that do exactly that - overwrite files in the
working tree. Commands like git checkout $PATHs or git reset
--hard silently overwrite files including your uncommitted changes.
With that in mind you can understand the stance “commit early, commit
often”. Commit as often as possible. Commit on every save in your
editor or IDE. You can edit your commits before pushing - edit commit
messages, change commits, reorder, combine, split, remove. But save
your changes in git database, either commit changes or at least stash
them with git stash.
Merge or rebase?
Internet is full of heated discussions on the topic: “merge or
rebase?” Most of them are meaningless. When a DVCS is being used in a
big team with a big and complex project with many branches there is
simply no way to avoid merges. So the question’s diminished to
“whether to use rebase, and if yes - when to use rebase?” Considering
that it is very much recommended not to rebase published commits the
question’s diminished even further: “whether to use rebase on
non-pushed commits?”
That small question is for the team to decide. To preserve the beauty
of linear history it’s recommended to use rebase when pulling, i.e. do
git pull --rebase or even configure automatic setup of rebase for
every new branch:
$ git config branch.autosetuprebase always
and configure rebase for existing branches:
$ git config branch.$NAME.rebase true
For example:
$ git config branch.v1.rebase true
$ git config branch.master.rebase true
After that git pull origin master becomes equivalent to git pull
--rebase origin master.
It is recommended to create new commits in a separate feature or topic
branch while using rebase to update the mainline branch. When the
topic branch is ready merge it into mainline. To avoid a tedious task
of resolving large number of conflicts at once you can merge the topic
branch to the mainline from time to time and switch back to the topic
branch to continue working on it. The entire workflow would be
something like:
$ git checkout -b issue-42 # create a new issue branch and switch to it
...edit/test/commit...
$ git checkout master
$ git pull --rebase origin master # update master from the upstream
$ git merge issue-42
$ git branch -d issue-42 # delete the topic branch
$ git push origin master
When the topic branch is deleted only the label is removed; the
commits stay in the database, now merged into master:
o--o--o--o--o--M--< master - the mainline branch
\ /
--*--*--* - the topic branch, now unnamed
The topic branch is deleted to avoid cluttering branch namespace with
small topic branches. Information on what issue was fixed or what
feature was implemented should be in the commit messages.
But even that small amount of rebasing could be too big in case of
long-lived merged branches. Imagine you’re doing work in both v1
and master branches, regularly merging v1 into master.
After some time you will have a lot of merge and non-merge commits in
master. Then you want to push your finished work to a shared
repository and find someone has pushed a few commits to v1. Now
you have a choice of two equally bad alternatives: either you fetch
and rebase v1 and then have to recreate all your work in master
(reset master to the origin, merge v1 and cherry-pick all
non-merge commits from the old master); or merge the new v1 and
lose the beauty of linear history.
Null-merges
Git has a builtin merge strategy for what Python core developers call
“null-merge”:
$ git merge -s ours v1 # null-merge v1 into master
Branching models
Git doesn’t assume any particular development model regarding
branching and merging. Some projects prefer to graduate patches from
the oldest branch to the newest, some prefer to cherry-pick commits
backwards, some use squashing (combining a number of commits into
one). Anything is possible.
There are a few examples to start with. git help workflows
describes how the very git authors develop git.
ProGit book has a few chapters devoted to branch management in
different projects: Git Branching - Branching Workflows and
Distributed Git - Contributing to a Project.
There is also a well-known article A successful Git branching model by Vincent
Driessen. It recommends a set of very detailed rules on creating and
managing mainline, topic and bugfix branches. To support the model the
author implemented git flow
extension.
Advanced configuration
Line endings
Git has builtin mechanisms to handle line endings between platforms
with different end-of-line styles. To allow git to do CRLF conversion
assign text attribute to files using .gitattributes.
For files that have to have specific line endings assign eol
attribute. For binary files the attribute is, naturally, binary.
For example:
$ cat .gitattributes
*.py text
*.txt text
*.png binary
/readme.txt eol=crlf
To check what attributes git uses for files use git check-attr
command. For example:
$ git check-attr -a -- \*.py
Useful assets
GitAlias (repository) is a big collection of
aliases. A careful selection of aliases for frequently used commands
could save you a lot of keystrokes!
GitIgnore and
https://github.com/github/gitignore are collections of .gitignore
files for all kinds of IDEs and programming languages. Python
included!
pre-commit (repositories) is a framework for managing and
maintaining multi-language pre-commit hooks. The framework is written
in Python and has a lot of plugins for many programming languages.
Advanced topics
Staging area
Staging area aka index aka cache is a distinguishing feature of git.
Staging area is where git collects patches before committing them.
Separation between collecting patches and commit phases provides a
very useful feature of git: you can review collected patches before
commit and even edit them - remove some hunks, add new hunks and
review again.
To add files to the index use git add. Collecting patches before
committing means you need to do that for every change, not only to add
new (untracked) files. To simplify committing in case you just want to
commit everything without reviewing run git commit --all (or just
-a) - the command adds every changed tracked file to the index and
then commits. To commit a file or files regardless of patches collected
in the index run git commit [--only|-o] -- $FILE....
To add hunks of patches to the index use git add --patch (or just
-p). To remove collected files from the index use git reset HEAD
-- $FILE... To add/inspect/remove collected hunks use git add
--interactive (-i).
To see the diff between the index and the last commit (i.e., collected
patches) use git diff --cached. To see the diff between the
working tree and the index (i.e., uncollected patches) use just git
diff. To see the diff between the working tree and the last commit
(i.e., both collected and uncollected patches) run git diff HEAD.
See WhatIsTheIndex and
IndexCommandQuickref in Git
Wiki.
Root
Git switches to the root (top-level directory of the project where
.git subdirectory exists) before running any command. Git
remembers, though, the directory that was current before the switch.
Some programs take into account the current directory. E.g., git
status shows file paths of changed and unknown files relative to the
current directory; git grep searches below the current directory;
git apply applies only those hunks from the patch that touch files
below the current directory.
But most commands run from the root and ignore the current directory.
Imagine, for example, that you have two work trees, one for the branch
v1 and the other for master. If you want to merge v1 from
a subdirectory inside the second work tree you must write commands as
if you’re in the top-level dir. Let’s take two work trees,
project-v1 and project, for example:
$ cd project/subdirectory
$ git fetch ../project-v1 v1:v1
$ git merge v1
Please note the path in git fetch ../project-v1 v1:v1 is
../project-v1 and not ../../project-v1 despite the fact that
we run the commands from a subdirectory, not from the root.
ReReRe
Rerere is a mechanism that helps to resolve repeated merge conflicts.
The most frequent source of recurring merge conflicts are topic
branches that are merged into mainline and then the merge commits are
removed; that’s often performed to test the topic branches and train
rerere; merge commits are removed to have clean linear history and
finish the topic branch with only one last merge commit.
Rerere works by remembering the states of tree before and after a
successful commit. That way rerere can automatically resolve conflicts
if they appear in the same files.
Rerere can be used manually with git rerere command but most often
it’s used automatically. Enable rerere with these commands in a
working tree:
$ git config rerere.enabled true
$ git config rerere.autoupdate true
You don’t need to turn rerere on globally - you don’t want rerere in
bare repositories or single-branch repositories; you only need rerere
in repos where you often perform merges and resolve merge conflicts.
See Rerere in The
Book.
Database maintenance
Git object database and other files/directories under .git require
periodic maintenance and cleanup. For example, commit editing leaves
unreferenced objects (dangling objects, in git terminology) and these
objects should be pruned to avoid collecting cruft in the DB. The
command git gc is used for maintenance. Git automatically runs
git gc --auto as a part of some commands to do quick maintenance.
Users are recommended to run git gc --aggressive from time to
time; git help gc recommends running it every few hundred
changesets; for more intensive projects it should be something like
once a week, and less frequently (biweekly or monthly) for less
active projects.
git gc --aggressive not only removes dangling objects, it also
repacks object database into indexed and better optimized pack(s); it
also packs symbolic references (branches and tags). Another way to do
it is to run git repack.
There is a well-known message from Linus
Torvalds regarding “stupidity” of git gc --aggressive. The message
can safely be ignored now. It is old and outdated; git gc
--aggressive has become much better since that time.
For those who still prefer git repack over git gc --aggressive
the recommended parameters are git repack -a -d -f --depth=20
--window=250. See this detailed experiment
for explanation of the effects of these parameters.
From time to time run git fsck [--strict] to verify integrity of
the database. git fsck may produce a list of dangling objects;
that’s not an error, just a reminder to perform regular maintenance.
Tips and tricks
Command-line options and arguments
git help cli
recommends not combining short options/flags. Most of the time
combining works: git commit -av works perfectly, but there are
situations when it doesn’t. E.g., git log -p -5 cannot be combined
as git log -p5.
Some options have arguments, some even have default arguments. In that
case the argument for such option must be spelled in a sticky way:
-Oarg, never -O arg because for an option that has a default
argument the latter means “use default value for option -O and
pass arg further to the option parser”. For example, git grep
has an option -O that passes a list of names of the found files to
a program; default program for -O is a pager (usually less),
but you can use your editor:
$ git grep -Ovim # but not -O vim
BTW, if git is instructed to use less as the pager (i.e., if pager
is not configured in git at all it uses less by default, or if it
gets less from GIT_PAGER or PAGER environment variables, or if it
was configured with git config [--global] core.pager less, or
less is used in the command git grep -Oless) git grep
passes +/$pattern option to less which is quite convenient.
Unfortunately, git grep doesn’t pass the pattern if the pager is
not exactly less, even if it’s less with parameters (something
like git config [--global] core.pager less -FRSXgimq); fortunately,
git grep -Oless always passes the pattern.
bash/zsh completion
It’s a bit hard to type git rebase --interactive --preserve-merges
HEAD~5 manually, even for those who are happy to use the command line,
and this is where shell completion is of great help. Bash/zsh come
with programmable completion, often automatically installed and
enabled, so if you have bash/zsh and git installed, chances are you
are already done - just go and use it at the command-line.
If you don’t have necessary bits installed, install and enable
bash_completion package. If you want to upgrade your git completion to
the latest and greatest download necessary file from git contrib.
Git-for-windows comes with git-bash for which bash completion is
installed and enabled.
bash/zsh prompt
For command-line lovers shell prompt can carry a lot of useful
information. To include git information in the prompt use
git-prompt.sh.
Read the detailed instructions in the file.
Search the Net for “git prompt” to find other prompt variants.
SSH connection sharing
SSH connection sharing is a feature of OpenSSH and perhaps derivatives
like PuTTY. SSH connection sharing is a way to decrease ssh client
startup time by establishing one connection and reusing it for all
subsequent clients connecting to the same server. SSH connection
sharing can be used to speed up a lot of short ssh sessions like scp,
sftp, rsync and of course git over ssh. If you regularly
fetch/pull/push from/to remote repositories accessible over ssh then
using ssh connection sharing is recommended.
To turn on ssh connection sharing add something like this to your
~/.ssh/config:
Host *
ControlMaster auto
ControlPath ~/.ssh/mux-%r@%h:%p
ControlPersist 600
See OpenSSH wikibook and
search for
more information.
SSH connection sharing can be used at GitHub, GitLab and SourceForge
repositories, but please be advised that BitBucket doesn’t allow it
and forcibly closes the master connection after a short inactivity period
so you will see errors like this from ssh: “Connection to bitbucket.org
closed by remote host.”
git on server
The simplest way to publish a repository or a group of repositories is
git daemon. The daemon provides anonymous access, by default it is
read-only. The repositories are accessible by git protocol (git://
URLs). Write access can be enabled but the protocol lacks any
authentication means, so it should be enabled only within a trusted
LAN. See git help daemon for details.
Git over ssh provides authentication and repo-level authorisation as
repositories can be made user- or group-writeable (see parameter
core.sharedRepository in git help config). If that’s too
permissive or too restrictive for some project’s needs there is a
wrapper gitolite that can
be configured to allow access with great granularity; gitolite is
written in Perl and has a lot of documentation.
Web interface to browse repositories can be created using gitweb or cgit. Both are CGI scripts (written in
Perl and C). In addition to web interface both provide read-only dumb
http access for git (http(s):// URLs). Klaus is a small and simple WSGI web
server that implements both web interface and git smart HTTP
transport; supports Python 2 and Python 3, performs syntax
highlighting.
There are also more advanced web-based development environments that
include ability to manage users, groups and projects; private,
group-accessible and public repositories; they often include issue
trackers, wiki pages, pull requests and other tools for development
and communication. Among these environments are Kallithea and pagure,
both are written in Python; pagure was written by Fedora developers
and is being used to develop some Fedora projects. GitPrep is yet another GitHub clone,
written in Perl. Gogs is written in Go.
GitBucket is
written in Scala.
And last but not least, GitLab. It’s
perhaps the most advanced web-based development environment for git.
Written in Ruby, community edition is free and open source (MIT
license).
From Mercurial to git
There are many tools to convert Mercurial repositories to git. The
most famous are, probably, hg-git and
fast-export (many years ago
it was known under the name hg2git).
But a better tool, perhaps the best, is git-remote-hg. It provides transparent
bidirectional (pull and push) access to Mercurial repositories from
git. Its author wrote a comparison of alternatives
that seems to be mostly objective.
To use git-remote-hg, install or clone it, add to your PATH (or copy
script git-remote-hg to a directory that’s already in PATH) and
prepend hg:: to Mercurial URLs. For example:
$ git clone https://github.com/felipec/git-remote-hg.git
$ PATH=$PATH:"`pwd`"/git-remote-hg
$ git clone hg::https://hg.python.org/peps/ PEPs
To work with the repository just use regular git commands including
git fetch/pull/push.
To start converting your Mercurial habits to git see the page
Mercurial for Git users at Mercurial wiki.
At the second half of the page there is a table that lists
corresponding Mercurial and git commands. Should work perfectly in
both directions.
Python Developer’s Guide also has a chapter Mercurial for git
developers that
documents a few differences between git and hg.
Git and GitHub
gitsome - Git/GitHub
command line interface (CLI). Written in Python, works on MacOS, Unix,
Windows. Git/GitHub CLI with autocomplete, includes many GitHub
integrated commands that work with all shells, builtin xonsh with
Python REPL to run Python commands alongside shell commands, command
history, customizable highlighting, thoroughly documented.
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 103 – Collecting information about git | Informational | This Informational PEP collects information about git. There is, of
course, a lot of documentation for git, so the PEP concentrates on
more complex (and more related to Python development) issues,
scenarios and examples. |
PEP 207 – Rich Comparisons
Author:
Guido van Rossum <guido at python.org>, David Ascher <DavidA at ActiveState.com>
Status:
Final
Type:
Standards Track
Created:
25-Jul-2000
Python-Version:
2.1
Post-History:
Table of Contents
Abstract
Motivation
Previous Work
Concerns
Proposed Resolutions
Implementation Proposal
C API
Changes to the interpreter
Classes
Copyright
Appendix
Abstract
Motivation
Current State of Affairs
Proposed Mechanism
Chained Comparisons
Problem
Solution
Abstract
This PEP proposes several new features for comparisons:
Allow separate overloading of <, >, <=, >=, ==, !=, both in
classes and in C extensions.
Allow any of those overloaded operators to return something else
besides a Boolean result.
Motivation
The main motivation comes from NumPy, whose users agree that A<B
should return an array of elementwise comparison outcomes; they
currently have to spell this as less(A,B) because A<B can only
return a Boolean result or raise an exception.
An additional motivation is that frequently, types don’t have a
natural ordering, but still need to be compared for equality.
Currently such a type must implement comparison and thus define
an arbitrary ordering, just so that equality can be tested.
Also, for some object types an equality test can be implemented
much more efficiently than an ordering test; for example, lists
and dictionaries that differ in length are unequal, but the
ordering requires inspecting some (potentially all) items.
Previous Work
Rich Comparisons have been proposed before; in particular by David
Ascher, after experience with Numerical Python:
http://starship.python.net/crew/da/proposals/richcmp.html
It is also included below as an Appendix. Most of the material in
this PEP is derived from David’s proposal.
Concerns
Backwards compatibility, both at the Python level (classes using
__cmp__ need not be changed) and at the C level (extensions
defining tp_compare need not be changed, code using
PyObject_Compare() must work even if the compared objects use
the new rich comparison scheme).
When A<B returns a matrix of elementwise comparisons, an easy
mistake to make is to use this expression in a Boolean context.
Without special precautions, it would always be true. This use
should raise an exception instead.
If a class overrides x==y but nothing else, should x!=y be
computed as not(x==y), or fail? What about the similar
relationship between < and >=, or between > and <=?
Similarly, should we allow x<y to be calculated from y>x? And
x<=y from not(x>y)? And x==y from y==x, or x!=y from y!=x?
When comparison operators return elementwise comparisons, what
to do about shortcut operators like A<B<C, A<B and C<D,
A<B or C<D?
What to do about min() and max(), the ‘in’ and ‘not in’
operators, list.sort(), dictionary key comparison, and other
uses of comparisons by built-in operations?
Proposed Resolutions
Full backwards compatibility can be achieved as follows. When
an object defines tp_compare() but not tp_richcompare(), and a
rich comparison is requested, the outcome of tp_compare() is
used in the obvious way. E.g. if “<” is requested, the outcome is an
exception if tp_compare() raises an exception, 1 if
tp_compare() is negative, and 0 if it is zero or positive. Etc.
Full forward compatibility can be achieved as follows. When a
classic comparison is requested on an object that implements
tp_richcompare(), up to three comparisons are used: first == is
tried, and if it returns true, 0 is returned; next, < is tried
and if it returns true, -1 is returned; next, > is tried and if
it returns true, +1 is returned. If any operator tried returns
a non-Boolean value (see below), the exception raised by
conversion to Boolean is passed through. If none of the
operators tried returns true, the classic comparison fallbacks
are tried next.
(I thought long and hard about the order in which the three
comparisons should be tried. At one point I had a convincing
argument for doing it in this order, based on the behavior of
comparisons for cyclical data structures. But since that code
has changed again, I’m not so sure that it makes a difference
any more.)
Any type that returns a collection of Booleans instead of a
single boolean should define nb_nonzero() to raise an exception.
Such a type is considered a non-Boolean (see the sketch at the end
of this section).
The == and != operators are not assumed to be each other’s
complement (e.g. IEEE 754 floating point numbers do not satisfy
this). It is up to the type to implement this if desired.
Similar for < and >=, or > and <=; there are lots of examples
where these assumptions aren’t true (e.g. tabnanny).
The reflexivity rules are assumed by Python. Thus, the
interpreter may swap y>x with x<y, y>=x with x<=y, and may swap
the arguments of x==y and x!=y. (Note: Python currently assumes
that x==x is always true and x!=x is never true; this should not
be assumed.)
In the current proposal, when A<B returns an array of
elementwise comparisons, this outcome is considered non-Boolean,
and its interpretation as Boolean by the shortcut operators
raises an exception. David Ascher’s proposal tries to deal
with this; I don’t think this is worth the additional complexity
in the code generator. Instead of A<B<C, you can write
(A<B)&(B<C).
The min() and list.sort() operations will only use the
< operator; max() will only use the > operator. The ‘in’ and
‘not in’ operators and dictionary lookup will only use the ==
operator.
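To make the non-Boolean convention above concrete, here is a minimal,
hypothetical Python-level sketch (modern syntax; the class and
attribute names are invented, and it only mimics the spirit of the
nb_nonzero rule rather than the C slot itself):
class ElementwiseResult:
    # Holds elementwise comparison outcomes; it deliberately has no
    # single truth value, the Python-level analogue of an nb_nonzero
    # slot that raises an exception.
    def __init__(self, flags):
        self.flags = list(flags)
    def __bool__(self):
        raise TypeError("elementwise comparison has no single truth value")

class Vector:
    def __init__(self, items):
        self.items = list(items)
    def __lt__(self, other):
        return ElementwiseResult(a < b for a, b in zip(self.items, other.items))

# Vector([1, 5]) < Vector([3, 2]) yields flags [True, False], while
# using the result directly in "if ...:" raises TypeError as intended.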
Implementation Proposal
This closely follows David Ascher’s proposal.
C API
New functions:
PyObject *PyObject_RichCompare(PyObject *, PyObject *, int)
This performs the requested rich comparison, returning a Python
object or raising an exception. The 3rd argument must be one of
Py_LT, Py_LE, Py_EQ, Py_NE, Py_GT or Py_GE.
int PyObject_RichCompareBool(PyObject *, PyObject *, int)
This performs the requested rich comparison, returning a
Boolean: -1 for exception, 0 for false, 1 for true. The 3rd
argument must be one of Py_LT, Py_LE, Py_EQ, Py_NE, Py_GT or
Py_GE. Note that when PyObject_RichCompare() returns a
non-Boolean object, PyObject_RichCompareBool() will raise an
exception.
New typedef:
typedef PyObject *(*richcmpfunc) (PyObject *, PyObject *, int);
New slot in type object, replacing spare tp_xxx7:
richcmpfunc tp_richcompare;
This should be a function with the same signature as
PyObject_RichCompare(), and performing the same comparison.
At least one of the arguments is of the type whose
tp_richcompare slot is being used, but the other may have a
different type. If the function cannot compare the particular
combination of objects, it should return a new reference to
Py_NotImplemented.
PyObject_Compare() is changed to try rich comparisons if they
are defined (but only if classic comparisons aren’t defined).
Changes to the interpreter
Whenever PyObject_Compare() is called with the intent of getting
the outcome of a particular comparison (e.g. in list.sort(), and
of course for the comparison operators in ceval.c), the code is
changed to call PyObject_RichCompare() or
PyObject_RichCompareBool() instead; if the C code needs to know
the outcome of the comparison, PyObject_IsTrue() is called on
the result (which may raise an exception).
Most built-in types that currently define a comparison will be
modified to define a rich comparison instead. (This is
optional; I’ve converted lists, tuples, complex numbers, and
arrays so far, and am not sure whether I will convert others.)
Classes
Classes can define new special methods __lt__, __le__, __eq__,
__ne__, __gt__, __ge__ to override the corresponding operators.
(I.e., <, <=, ==, !=, >, >=. You gotta love the Fortran
heritage.) If a class defines __cmp__ as well, it is only used
when __lt__ etc. have been tried and return NotImplemented.
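As a rough illustration of the behaviour described above (the class is
hypothetical and uses present-day spelling), a class can override only
the operators it supports and return NotImplemented for operands it
cannot handle:
class Money:
    def __init__(self, amount, currency):
        self.amount = amount
        self.currency = currency
    def __eq__(self, other):
        # Only compare against Money in the same currency; returning
        # NotImplemented lets the interpreter try the reflected
        # operation or fall back as described above.
        if not isinstance(other, Money) or other.currency != self.currency:
            return NotImplemented
        return self.amount == other.amount
    def __lt__(self, other):
        if not isinstance(other, Money) or other.currency != self.currency:
            return NotImplemented
        return self.amount < other.amount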
Copyright
This document has been placed in the public domain.
Appendix
Here is most of David Ascher’s original proposal (version 0.2.1,
dated Wed Jul 22 16:49:28 1998; I’ve left the Contents, History
and Patches sections out). It addresses almost all concerns
above.
Abstract
A new mechanism allowing comparisons of Python objects to return
values other than -1, 0, or 1 (or raise exceptions) is
proposed. This mechanism is entirely backwards compatible, and can
be controlled at the level of the C PyObject type or of the Python
class definition. There are three cooperating parts to the
proposed mechanism:
the use of the last slot in the type object structure to store a
pointer to a rich comparison function
the addition of special methods for classes
the addition of an optional argument to the builtin cmp()
function.
Motivation
The current comparison protocol for Python objects assumes that
any two Python objects can be compared (as of Python 1.5, object
comparisons can raise exceptions), and that the return value for
any comparison should be -1, 0 or 1. -1 indicates that the first
argument to the comparison function is less than the right one, +1
indicating the contrapositive, and 0 indicating that the two
objects are equal. While this mechanism allows the establishment
of an order relationship (e.g. for use by the sort() method of list
objects), it has proven to be limited in the context of Numeric
Python (NumPy).
Specifically, NumPy allows the creation of multidimensional
arrays, which support most of the numeric operators. Thus:
x = array((1,2,3,4))
y = array((2,2,4,4))
are two NumPy arrays. While they can be added elementwise:
z = x + y # z == array((3,4,7,8))
they cannot be compared in the current framework - the released
version of NumPy compares the pointers, (thus yielding junk
information) which was the only solution before the recent
addition of the ability (in 1.5) to raise exceptions in comparison
functions.
Even with the ability to raise exceptions, the current protocol
makes array comparisons useless. To deal with this fact, NumPy
includes several functions which perform the comparisons: less(),
less_equal(), greater(), greater_equal(), equal(),
not_equal(). These functions return arrays with the same shape as
their arguments (modulo broadcasting), filled with 0’s and 1’s
depending on whether the comparison is true or not for each
element pair. Thus, for example, using the arrays x and y defined
above:
less(x,y)
would be an array containing the numbers (1,0,0,0).
The current proposal is to modify the Python object interface to
allow the NumPy package to make it so that x < y returns the same
thing as less(x,y). The exact return value is up to the NumPy
package – what this proposal really asks for is changing the
Python core so that extension objects have the ability to return
something other than -1, 0, 1, should their authors choose to do
so.
Current State of Affairs
The current protocol is, at the C level, that each object type
defines a tp_compare slot, which is a pointer to a function which
takes two PyObject* references and returns -1, 0, or 1. This
function is called by the PyObject_Compare() function defined in
the C API. PyObject_Compare() is also called by the builtin
function cmp() which takes two arguments.
Proposed Mechanism
Changes to the C structure for type objects
The last available slot in the PyTypeObject, reserved up to now
for future expansion, is used to optionally store a pointer to a
new comparison function, of type richcmpfunc defined by:
typedef PyObject *(*richcmpfunc)
Py_PROTO((PyObject *, PyObject *, int));
This function takes three arguments. The first two are the objects
to be compared, and the third is an integer corresponding to an
opcode (one of LT, LE, EQ, NE, GT, GE). If this slot is left NULL,
then rich comparison for that object type is not supported (except
for class instances whose class provide the special methods
described below).
The above opcodes need to be added to the published Python/C API
(probably under the names Py_LT, Py_LE, etc.)
Additions of special methods for classes
Classes wishing to support the rich comparison mechanisms must add
one or more of the following new special methods:
def __lt__(self, other):
...
def __le__(self, other):
...
def __gt__(self, other):
...
def __ge__(self, other):
...
def __eq__(self, other):
...
def __ne__(self, other):
...
Each of these is called when the class instance is on the
left-hand-side of the corresponding operators (<, <=, >, >=, ==,
and != or <>). The argument other is set to the object on the
right side of the operator. The return value of these methods is
up to the class implementor (after all, that’s the entire point of
the proposal).
If the object on the left side of the operator does not define an
appropriate rich comparison operator (either at the C level or
with one of the special methods), then the comparison is reversed,
and the right hand operator is called with the opposite operator,
and the two objects are swapped. This assumes that a < b and b > a
are equivalent, as are a <= b and b >= a, and that == and != are
commutative (e.g. a == b if and only if b == a).
For example, if obj1 is an object which supports the rich
comparison protocol and x and y are objects which do not support
the rich comparison protocol, then obj1 < x will call the __lt__
method of obj1 with x as the second argument. x < obj1 will call
obj1’s __gt__ method with x as a second argument, and x < y will
just use the existing (non-rich) comparison mechanism.
The above mechanism is such that classes can get away with not
implementing either __lt__ and __le__ or __gt__ and
__ge__. Further smarts could have been added to the comparison
mechanism, but this limited set of allowed “swaps” was chosen
because it doesn’t require the infrastructure to do any processing
(negation) of return values. The choice of six special methods was
made over a single (e.g. __richcmp__) method to allow the
dispatching on the opcode to be performed at the level of the C
implementation rather than the user-defined method.
Addition of an optional argument to the builtin cmp()
The builtin cmp() is still used for simple comparisons. For rich
comparisons, it is called with a third argument, one of “<”, “<=”,
“>”, “>=”, “==”, “!=”, “<>” (the last two have the same
meaning). When called with one of these strings as the third
argument, cmp() can return any Python object. Otherwise, it can
only return -1, 0 or 1 as before.
Chained Comparisons
Problem
It would be nice to allow objects for which the comparison returns
something other than -1, 0, or 1 to be used in chained
comparisons, such as:
x < y < z
Currently, this is interpreted by Python as:
temp1 = x < y
if temp1:
return y < z
else:
return temp1
Note that this requires testing the truth value of the result of
comparisons, with potential “shortcutting” of the right-side
comparison testing. In other words, the truth value of the result
of the comparison determines the result of a chained
operation. This is problematic in the case of arrays, since if x,
y and z are three arrays, then the user expects:
x < y < z
to be an array of 0’s and 1’s where 1’s are in the locations
corresponding to the elements of y which are between the
corresponding elements in x and z. In other words, the right-hand
side must be evaluated regardless of the result of x < y, which is
incompatible with the mechanism currently in use by the parser.
Solution
Guido mentioned that one possible way out would be to change the
code generated by chained comparisons to allow arrays to be
chained-compared intelligently. What follows is a mixture of his
idea and my suggestions. The code generated for x < y < z would be
equivalent to:
temp1 = x < y
if temp1:
    temp2 = y < z
    return boolean_combine(temp1, temp2)
else:
    return temp1
where boolean_combine is a new function which does something like
the following:
def boolean_combine(a, b):
    if hasattr(a, '__boolean_and__') or \
       hasattr(b, '__boolean_and__'):
        try:
            return a.__boolean_and__(b)
        except:
            return b.__boolean_and__(a)
    else: # standard behavior
        if a:
            return b
        else:
            return 0
where the __boolean_and__ special method is implemented for
C-level types by another value of the third argument to the
richcmp function. This method would perform a boolean comparison
of the arrays (currently implemented in the umath module as the
logical_and ufunc).
Thus, objects returned by rich comparisons should always test
true, but should define another special method which creates
boolean combinations of them and their argument.
This solution has the advantage of allowing chained comparisons to
work for arrays, but the disadvantage that it requires comparison
arrays to always return true (in an ideal world, I’d have them
always raise an exception on truth testing, since the meaning of
testing “if a>b:” is massively ambiguous).
The inlining already present which deals with integer comparisons
would still apply, resulting in no performance cost for the most
common cases.
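The __boolean_and__ protocol was never adopted, but a toy illustration of the
intended behaviour can be written in ordinary Python. BoolArray below is
purely hypothetical; it always tests true, so a boolean_combine() along the
lines sketched above reaches the elementwise combination:
class BoolArray:
    # Toy stand-in for an elementwise comparison result.
    def __init__(self, flags):
        self.flags = [bool(f) for f in flags]
    def __bool__(self):
        return True                        # always true, as the text suggests
    def __boolean_and__(self, other):
        return BoolArray(a and b for a, b in zip(self.flags, other.flags))
def boolean_combine(a, b):
    if hasattr(a, '__boolean_and__') or hasattr(b, '__boolean_and__'):
        try:
            return a.__boolean_and__(b)
        except AttributeError:
            return b.__boolean_and__(a)
    if a:                                  # standard truth-value behaviour
        return b
    return 0
left = BoolArray([1, 0, 1])                # stand-in for the result of x < y
right = BoolArray([1, 1, 0])               # stand-in for the result of y < z
print(boolean_combine(left, right).flags)  # [True, False, False]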
| Final | PEP 207 – Rich Comparisons | Standards Track | This PEP proposes several new features for comparisons: |
PEP 208 – Reworking the Coercion Model
Author:
Neil Schemenauer <nas at arctrix.com>, Marc-André Lemburg <mal at lemburg.com>
Status:
Final
Type:
Standards Track
Created:
04-Dec-2000
Python-Version:
2.1
Post-History:
Table of Contents
Abstract
Rationale
Specification
Reference Implementation
Credits
Copyright
References
Abstract
Many Python types implement numeric operations. When the arguments of
a numeric operation are of different types, the interpreter tries to
coerce the arguments into a common type. The numeric operation is
then performed using this common type. This PEP proposes a new type
flag to indicate that arguments to a type’s numeric operations should
not be coerced. Operations that do not support the supplied types
indicate it by returning a new singleton object. Types which do not
set the type flag are handled in a backwards compatible manner.
Allowing operations to handle different types is often simpler, more
flexible, and faster than having the interpreter do coercion.
Rationale
When implementing numeric or other related operations, it is often
desirable to provide not only operations between operands of one type
only, e.g. integer + integer, but to generalize the idea behind the
operation to other type combinations as well, e.g. integer + float.
A common approach to this mixed type situation is to provide a method
of “lifting” the operands to a common type (coercion) and then use
that type’s operand method as execution mechanism. Yet, this strategy
has a few drawbacks:
the “lifting” process creates at least one new (temporary)
operand object,
since the coercion method is not being told about the operation
that is to follow, it is not possible to implement operation
specific coercion of types,
there is no elegant way to solve situations where a common type
is not at hand, and
the coercion method will always have to be called prior to the
operation’s method itself.
A fix for this situation is obviously needed, since these drawbacks
make implementations of types needing these features very cumbersome,
if not impossible. As an example, have a look at the DateTime and
DateTimeDelta [1] types, the first being absolute, the second
relative. You can always add a relative value to an absolute one,
giving a new absolute value. Yet, there is no common type which the
existing coercion mechanism could use to implement that operation.
Currently, PyInstance types are treated specially by the interpreter
in that their numeric methods are passed arguments of different types.
Removing this special case simplifies the interpreter and allows other
types to implement numeric methods that behave like instance types.
This is especially useful for extension types like ExtensionClass.
Specification
Instead of using a central coercion method, the process of handling
different operand types is simply left to the operation. If the
operation finds that it cannot handle the given operand type
combination, it may return a special singleton as indicator.
Note that “numbers” (anything that implements the number protocol, or
part of it) written in Python already use the first part of this
strategy - it is the C level API that we focus on here.
To maintain nearly 100% backward compatibility we have to be very
careful to make numbers that don’t know anything about the new
strategy (old style numbers) work just as well as those that expect
the new scheme (new style numbers). Furthermore, binary compatibility
is a must, meaning that the interpreter may only access and use new
style operations if the number indicates the availability of these.
A new style number is considered by the interpreter as such if and
only if it sets the type flag Py_TPFLAGS_CHECKTYPES. The main
difference between an old style number and a new style one is that the
numeric slot functions can no longer assume to be passed arguments of
identical type. New style slots must check all arguments for proper
type and implement the necessary conversions themselves. This may seem
to cause more work on the behalf of the type implementor, but is in
fact no more difficult than writing the same kind of routines for an
old style coercion slot.
If a new style slot finds that it cannot handle the passed argument
type combination, it may return a new reference of the special
singleton Py_NotImplemented to the caller. This will cause the caller
to try the other operand's operation slots until it finds a slot that
does implement the operation for the specific type combination. If
none of the possible slots succeed, it raises a TypeError.
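The same protocol is visible at the Python level in today's interpreter,
where NotImplemented plays the role of the Py_NotImplemented singleton. A
minimal sketch (the Seconds class is made up for illustration):
class Seconds:
    def __init__(self, n):
        self.n = n
    def __add__(self, other):
        # Handle the type combinations we understand ourselves ...
        if isinstance(other, Seconds):
            return Seconds(self.n + other.n)
        if isinstance(other, int):
            return Seconds(self.n + other)
        # ... and signal "not handled" so the interpreter tries the other
        # operand's slot instead of raising immediately.
        return NotImplemented
    __radd__ = __add__      # right-hand slot, tried after the left one fails
print((5 + Seconds(10)).n)  # int.__add__ returns NotImplemented; __radd__ gives 15
If neither operand's slot succeeds, the interpreter raises TypeError, exactly
as described above for the C-level scheme.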
To make the implementation easy to understand (the whole topic is
esoteric enough), a new layer in the handling of numeric operations is
introduced. This layer takes care of all the different cases that need
to be taken into account when dealing with all the possible
combinations of old and new style numbers. It is implemented by the
two static functions binary_op() and ternary_op(), which are both
internal functions that only the functions in Objects/abstract.c
have access to. The numeric API (PyNumber_*) is easy to adapt to
this new layer.
As a side-effect all numeric slots can be NULL-checked (this has to be
done anyway, so the added feature comes at no extra cost).
The scheme used by the layer to execute a binary operation is as
follows:
v     w     Action taken
new   new   v.op(v,w), w.op(v,w)
new   old   v.op(v,w), coerce(v,w), v.op(v,w)
old   new   w.op(v,w), coerce(v,w), v.op(v,w)
old   old   coerce(v,w), v.op(v,w)
The indicated action sequence is executed from left to right until
either the operation succeeds and a valid result (!=
Py_NotImplemented) is returned or an exception is raised. Exceptions
are returned to the calling function as-is. If a slot returns
Py_NotImplemented, the next item in the sequence is executed.
Note that coerce(v,w) will use the old style nb_coerce slot methods
via a call to PyNumber_Coerce().
Ternary operations have a few more cases to handle:
v     w     z     Action taken
new   new   new   v.op(v,w,z), w.op(v,w,z), z.op(v,w,z)
new   old   new   v.op(v,w,z), z.op(v,w,z), coerce(v,w,z), v.op(v,w,z)
old   new   new   w.op(v,w,z), z.op(v,w,z), coerce(v,w,z), v.op(v,w,z)
old   old   new   z.op(v,w,z), coerce(v,w,z), v.op(v,w,z)
new   new   old   v.op(v,w,z), w.op(v,w,z), coerce(v,w,z), v.op(v,w,z)
new   old   old   v.op(v,w,z), coerce(v,w,z), v.op(v,w,z)
old   new   old   w.op(v,w,z), coerce(v,w,z), v.op(v,w,z)
old   old   old   coerce(v,w,z), v.op(v,w,z)
The same notes as above, except that coerce(v,w,z) actually does:
if z != Py_None:
    coerce(v,w), coerce(v,z), coerce(w,z)
else:
    # treat z as absent variable
    coerce(v,w)
The current implementation uses this scheme already (there’s only one
ternary slot: nb_pow(a,b,c)).
Note that the numeric protocol is also used for some other related
tasks, e.g. sequence concatenation. These can also benefit from the
new mechanism by implementing right-hand operations for type
combinations that would otherwise fail to work. As an example, take
string concatenation: currently you can only do string + string. With
the new mechanism, a new string-like type could implement new_type +
string and string + new_type, even though strings don’t know anything
about new_type.
Since comparisons also rely on coercion (every time you compare an
integer to a float, the integer is first converted to float and then
compared…), a new slot to handle numeric comparisons is needed:
PyObject *nb_cmp(PyObject *v, PyObject *w)
This slot should compare the two objects and return an integer object
stating the result. Currently, this result integer may only be -1, 0, 1.
If the slot cannot handle the type combination, it may return a
reference to Py_NotImplemented. [XXX Note that this slot is still
in flux since it should take into account rich comparisons
(i.e. PEP 207).]
Numeric comparisons are handled by a new numeric protocol API:
PyObject *PyNumber_Compare(PyObject *v, PyObject *w)
This function compares the two objects as “numbers” and returns an
integer object stating the result. Currently, this result integer may
only be -1, 0, 1. In case the operation cannot be handled by the given
objects, a TypeError is raised.
The PyObject_Compare() API needs to be adjusted accordingly to make use
of this new API.
Other changes include adapting some of the built-in functions (e.g.
cmp()) to use this API as well. Also, PyNumber_CoerceEx() will need to
check for new style numbers before calling the nb_coerce slot. New
style numbers don’t provide a coercion slot and thus cannot be
explicitly coerced.
Reference Implementation
A preliminary patch for the CVS version of Python is available through
the Source Forge patch manager [2].
Credits
This PEP and the patch are heavily based on work done by Marc-André
Lemburg [3].
Copyright
This document has been placed in the public domain.
References
[1]
http://www.lemburg.com/files/python/mxDateTime.html
[2]
http://sourceforge.net/patch/?func=detailpatch&patch_id=102652&group_id=5470
[3]
http://www.lemburg.com/files/python/CoercionProposal.html
| Final | PEP 208 – Reworking the Coercion Model | Standards Track | Many Python types implement numeric operations. When the arguments of
a numeric operation are of different types, the interpreter tries to
coerce the arguments into a common type. The numeric operation is
then performed using this common type. This PEP proposes a new type
flag to indicate that arguments to a type’s numeric operations should
not be coerced. Operations that do not support the supplied types
indicate it by returning a new singleton object. Types which do not
set the type flag are handled in a backwards compatible manner.
Allowing operations handle different types is often simpler, more
flexible, and faster than having the interpreter do coercion. |
PEP 209 – Multi-dimensional Arrays
Author:
Paul Barrett <barrett at stsci.edu>, Travis Oliphant <oliphant at ee.byu.edu>
Status:
Withdrawn
Type:
Standards Track
Created:
03-Jan-2001
Python-Version:
2.2
Post-History:
Table of Contents
Abstract
Motivation
Proposal
Design and Implementation
Open Issues
Implementation Steps
Incompatibilities
Appendices
Copyright
Related PEPs
References
Abstract
This PEP proposes a redesign and re-implementation of the
multi-dimensional array module, Numeric, to make it easier to add
new features and functionality to the module. Aspects of Numeric 2
that will receive special attention are efficient access to arrays
exceeding a gigabyte in size and composed of inhomogeneous data
structures or records. The proposed design uses four Python
classes: ArrayType, UFunc, Array, and ArrayView; and a low-level
C-extension module, _ufunc, to handle the array operations
efficiently. In addition, each array type has its own C-extension
module which defines the coercion rules, operations, and methods
for that type. This design enables new types, features, and
functionality to be added in a modular fashion. The new version
will introduce some incompatibilities with the current Numeric.
Motivation
Multi-dimensional arrays are commonly used to store and manipulate
data in science, engineering, and computing. Python currently has
an extension module, named Numeric (henceforth called Numeric 1),
which provides a satisfactory set of functionality for users
manipulating homogeneous arrays of data of moderate size (of order
10 MB). For access to larger arrays (of order 100 MB or more) of
possibly inhomogeneous data, the implementation of Numeric 1 is
inefficient and cumbersome. In the future, requests by the
Numerical Python community for additional functionality are also
likely, as PEP 211 (Adding New Linear Operators to Python) and
PEP 225 (Elementwise/Objectwise Operators) illustrate.
Proposal
This proposal recommends a re-design and re-implementation of
Numeric 1, henceforth called Numeric 2, which will enable new
types, features, and functionality to be added in an easy and
modular manner. The initial design of Numeric 2 should focus on
providing a generic framework for manipulating arrays of various
types and should enable a straightforward mechanism for adding new
array types and UFuncs. Functional methods that are more specific
to various disciplines can then be layered on top of this core.
This new module will still be called Numeric and most of the
behavior found in Numeric 1 will be preserved.
The proposed design uses four Python classes: ArrayType, UFunc,
Array, and ArrayView; and a low-level C-extension module to handle
the array operations efficiently. In addition, each array type
has its own C-extension module which defines the coercion rules,
operations, and methods for that type. At a later date, when core
functionality is stable, some Python classes can be converted to
C-extension types.
Some planned features are:
Improved memory usage
This feature is particularly important when handling large arrays
and can produce significant improvements in performance as well as
memory usage. We have identified several areas where memory usage
can be improved:
Use a local coercion model
Instead of using Python’s global coercion model which creates
temporary arrays, Numeric 2, like Numeric 1, will implement a
local coercion model as described in PEP 208 which defers the
responsibility of coercion to the operator. By using internal
buffers, a coercion operation can be done for each array
(including output arrays), if necessary, at the time of the
operation. Benchmarks [1] have shown that performance is at
most degraded only slightly and is improved in cases where the
internal buffers are less than the L2 cache size and the
processor is under load. To avoid array coercion altogether,
C functions having arguments of mixed type are allowed in
Numeric 2.
Avoid creation of temporary arrays
In complex array expressions (i.e. having more than one
operation), each operation will create a temporary array which
will be used and then deleted by the succeeding operation. A
better approach would be to identify these temporary arrays
and reuse their data buffers when possible, namely when the
array shape and type are the same as the temporary array being
created. This can be done by checking the temporary array’s
reference count. If it is 1, then it will be deleted once the
operation is done and is a candidate for reuse.
Optional use of memory-mapped files
Numeric users sometimes need to access data from very large
files or to handle data that is greater than the available
memory. Memory-mapped arrays provide a mechanism to do this
by storing the data on disk while making it appear to be in
memory. Memory-mapped arrays should improve access to all
files by eliminating one of two copy steps during a file
access. Numeric should be able to access in-memory and
memory-mapped arrays transparently.
Record access
In some fields of science, data is stored in files as binary
records. For example, in astronomy, photon data is stored as a
1 dimensional list of photons in order of arrival time. These
records or C-like structures contain information about the
detected photon, such as its arrival time, its position on the
detector, and its energy. Each field may be of a different
type, such as char, int, or float. Such arrays introduce new
issues that must be dealt with, in particular byte alignment
or byte swapping may need to be performed for the numeric
values to be properly accessed (though byte swapping is also
an issue for memory mapped data). Numeric 2 is designed to
automatically handle alignment and representational issues
when data is accessed or operated on. There are two
approaches to implementing records; as either a derived array
class or a special array type, depending on your point-of-view.
We defer this discussion to the Open Issues section.
Additional array types
Numeric 1 has 11 defined types: char, ubyte, sbyte, short, int,
long, float, double, cfloat, cdouble, and object. There are no
ushort, uint, or ulong types, nor are there more complex types
such as a bit type which is of use to some fields of science and
possibly for implementing masked-arrays. The design of Numeric 1
makes the addition of these and other types a difficult and
error-prone process. To enable the easy addition (and deletion)
of new array types such as a bit type described below, a re-design
of Numeric is necessary.
Bit type
The result of a rich comparison between arrays is an array of
boolean values. The result can be stored in an array of type
char, but this is an unnecessary waste of memory. A better
implementation would use a bit or boolean type, compressing
the array size by a factor of eight. This is currently being
implemented for Numeric 1 (by Travis Oliphant) and should be
included in Numeric 2.
Enhanced array indexing syntax
The extended slicing syntax was added to Python to provide greater
flexibility when manipulating Numeric arrays by allowing
step-sizes greater than 1. This syntax works well as a shorthand
for a list of regularly spaced indices. For those situations
where a list of irregularly spaced indices are needed, an enhanced
array indexing syntax would allow 1-D arrays to be arguments.
Rich comparisons
The implementation of PEP 207: Rich Comparisons in Python 2.1
provides additional flexibility when manipulating arrays. We
intend to implement this feature in Numeric 2.
Array broadcasting rules
When an operation between a scalar and an array is done, the
implied behavior is to create a new array having the same shape as
the array operand containing the scalar value. This is called
array broadcasting. It also works with arrays of lesser rank,
such as vectors. This implicit behavior is implemented in Numeric
1 and will also be implemented in Numeric 2.
Design and Implementation
The design of Numeric 2 has four primary classes:
ArrayType:
This is a simple class that describes the fundamental properties
of an array-type, e.g. its name, its size in bytes, its coercion
relations with respect to other types, etc., e.g.
Int32 = ArrayType('Int32', 4, 'doc-string')
Its relation to the other types is defined when the C-extension
module for that type is imported. The corresponding Python code
is:
Int32.astype[Real64] = Real64
This says that the Real64 array-type has higher priority than the
Int32 array-type.
The following attributes and methods are proposed for the core
implementation. Additional attributes can be added on an
individual basis, e.g. .bitsize or .bitstrides for the bit type.
Attributes:
.name: e.g. "Int32", "Float64", etc.
.typecode: e.g. 'i', 'f', etc.
(for backward compatibility)
.size (in bytes): e.g. 4, 8, etc.
.array_rules (mapping): rules between array types
.pyobj_rules (mapping): rules between array and python types
.doc: documentation string
Methods:
__init__(): initialization
__del__(): destruction
__repr__(): representation
C-API: This still needs to be fleshed-out.
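A rough, purely illustrative Python sketch of such a descriptor and its
coercion lookup (the names and the helper common_type() are hypothetical;
the proposal was never implemented in this form):
class ArrayType:
    # Toy version of the proposed type descriptor.  The 'astype' mapping
    # holds the coercion rules that each type module would register on import.
    def __init__(self, name, size, doc=''):
        self.name = name
        self.size = size          # element size in bytes
        self.doc = doc
        self.astype = {}
Int32  = ArrayType('Int32', 4, 'doc-string')
Real64 = ArrayType('Real64', 8, 'doc-string')
Int32.astype[Real64] = Real64     # Real64 has higher priority than Int32
def common_type(a, b):
    # Look up the registered rule, falling back to the left operand's type.
    return a.astype.get(b, a)
print(common_type(Int32, Real64).name)   # Real64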
UFunc:
This class is the heart of Numeric 2. Its design is similar to
that of ArrayType in that the UFunc creates a singleton callable
object whose attributes are name, total and input number of
arguments, a document string, and an empty CFunc dictionary; e.g.
add = UFunc('add', 3, 2, 'doc-string')
When defined the add instance has no C functions associated with
it and therefore can do no work. The CFunc dictionary is
populated or registered later when the C-extension module for an
array-type is imported. The arguments of the register method are:
function name, function descriptor, and the CUFunc object. The
corresponding Python code is
add.register('add', (Int32, Int32, Int32), cfunc-add)
In the initialization function of an array type module, e.g.
Int32, there are two C API functions: one to initialize the
coercion rules and the other to register the CFunc objects.
When an operation is applied to some arrays, the __call__ method
is invoked. It gets the type of each array (if the output array
is not given, it is created from the coercion rules) and checks
the CFunc dictionary for a key that matches the argument types.
If it exists the operation is performed immediately, otherwise the
coercion rules are used to search for a related operation and set
of conversion functions. The __call__ method then invokes a
compute method written in C to iterate over slices of each array,
namely:
_ufunc.compute(slice, data, func, swap, conv)
The ‘func’ argument is a CFuncObject, while the ‘swap’ and ‘conv’
arguments are lists of CFuncObjects for those arrays needing pre- or
post-processing, otherwise None is used. The data argument is
a list of buffer objects, and the slice argument gives the number
of iterations for each dimension along with the buffer offset and
step size for each array and each dimension.
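A stripped-down sketch of this registration-and-dispatch idea, ignoring
output arrays, coercion, buffers, and the C compute loop (all names are
illustrative only, and plain Python types stand in for ArrayTypes):
class UFunc:
    # Toy dispatcher: concrete functions are registered per type signature
    # and looked up again when the ufunc is called.
    def __init__(self, name, nargs, iargs, doc=''):
        self.name, self.nargs, self.iargs, self.doc = name, nargs, iargs, doc
        self.cfuncs = {}
    def register(self, name, signature, func):
        self.cfuncs[signature] = func
    def __call__(self, *args):
        signature = tuple(type(a).__name__ for a in args)
        try:
            func = self.cfuncs[signature]
        except KeyError:
            raise TypeError('no registered function for %r' % (signature,))
        return func(*args)
add = UFunc('add', 3, 2, 'doc-string')
add.register('add', ('int', 'int'), lambda a, b: a + b)
print(add(2, 3))        # dispatches on ('int', 'int') -> 5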
We have predefined several UFuncs for use by the __call__ method:
cast, swap, getobj, and setobj. The cast and swap functions do
coercion and byte-swapping, respectively and the getobj and setobj
functions do coercion between Numeric arrays and Python sequences.
The following attributes and methods are proposed for the core
implementation.
Attributes:
.name: e.g. "add", "subtract", etc.
.nargs: number of total arguments
.iargs: number of input arguments
.cfuncs (mapping): the set C functions
.doc: documentation string
Methods:
__init__(): initialization
__del__(): destruction
__repr__(): representation
__call__(): look-up and dispatch method
initrule(): initialize coercion rule
uninitrule(): uninitialize coercion rule
register(): register a CUFunc
unregister(): unregister a CUFunc
C-API: This still needs to be fleshed-out.
Array:
This class contains information about the array, such as shape,
type, endian-ness of the data, etc. Its operators, ‘+’, ‘-‘,
etc. just invoke the corresponding UFunc function, e.g.
def __add__(self, other):
    return ufunc.add(self, other)
The following attributes, methods, and functions are proposed for
the core implementation.
Attributes:
.shape: shape of the array
.format: type of the array
.real (only complex): real part of a complex array
.imag (only complex): imaginary part of a complex array
Methods:
__init__(): initialization
__del__(): destruction
__repr__(): representation
__str__(): pretty representation
__cmp__(): rich comparison
__len__():
__getitem__():
__setitem__():
__getslice__():
__setslice__():
numeric methods:
copy(): copy of array
aslist(): create list from array
asstring(): create string from array
Functions:
fromlist(): create array from sequence
fromstring(): create array from string
array(): create array with shape and value
concat(): concatenate two arrays
resize(): resize array
C-API: This still needs to be fleshed-out.
ArrayView
This class is similar to the Array class except that the reshape
and flat methods will raise exceptions, since non-contiguous
arrays cannot be reshaped or flattened using just pointer and
step-size information.
C-API: This still needs to be fleshed-out.
C-extension modules:
Numeric 2 will have several C-extension modules.
_ufunc:
The primary module of this set is the _ufuncmodule.c. The
intention of this module is to do the bare minimum,
i.e. iterate over arrays using a specified C function. The
interface of these functions is the same as Numeric 1, i.e.
int (*CFunc)(char *data, int *steps, int repeat, void *func);
and their functionality is expected to be the same, i.e. they
iterate over the inner-most dimension.
The following attributes and methods are proposed for the core
implementation.
Attributes:
Methods:
compute():
C-API: This still needs to be fleshed-out.
_int32, _real64, etc.:
There will also be C-extension modules for each array type,
e.g. _int32module.c, _real64module.c, etc. As mentioned
previously, when these modules are imported by the UFunc
module, they will automatically register their functions and
coercion rules. New or improved versions of these modules can
be easily implemented and used without affecting the rest of
Numeric 2.
Open Issues
Does slicing syntax default to copy or view behavior?
The default behavior of Python is to return a copy of a sub-list
or tuple when slicing syntax is used, whereas Numeric 1 returns a
view into the array. The choice made for Numeric 1 is apparently
for reasons of performance: the developers wish to avoid the
penalty of allocating and copying the data buffer during each
array operation, and feel that the need for a deep copy of an array
is rare. Yet, some have argued that Numeric’s slice notation
should also have copy behavior to be consistent with Python lists.
In this case the performance penalty associated with copy behavior
can be minimized by implementing copy-on-write. This scheme has
both arrays sharing one data buffer (as in view behavior) until
either array is assigned new data at which point a copy of the
data buffer is made. View behavior would then be implemented by
an ArrayView class, whose behavior would be similar to Numeric 1 arrays,
i.e. .shape is not settable for non-contiguous arrays. The use of
an ArrayView class also makes explicit what type of data the array
contains.
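A toy illustration of the copy-on-write scheme mentioned above (the COWArray
class is hypothetical and not the proposed API; it only shows the sharing
and detach-on-first-write behaviour):
class COWArray:
    # Views share the underlying buffer; the first write through either
    # object gives the writer its own private copy.
    def __init__(self, data, shared=None):
        self._data = data
        self._shared = shared if shared is not None else [False]
    def view(self):
        self._shared[0] = True
        return COWArray(self._data, self._shared)
    def __getitem__(self, i):
        return self._data[i]
    def __setitem__(self, i, value):
        if self._shared[0]:
            self._data = list(self._data)    # detach: copy only on first write
            self._shared = [False]
        self._data[i] = value
a = COWArray([0, 1, 2, 3])
b = a.view()            # no copy is made here
b[0] = 99               # b copies its buffer; a is untouched
print(a[0], b[0])       # 0 99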
Does item syntax default to copy or view behavior?
A similar question arises with the item syntax. For example, if
a = [[0,1,2], [3,4,5]] and b = a[0], then changing b[0] also changes
a[0][0], because a[0] is a reference or view of the first row of a.
Therefore, if c is a 2-d array, it would appear that c[i]
should return a 1-d array which is a view into, instead of a copy
of, c for consistency. Yet, c[i] can be considered just a
shorthand for c[i,:] which would imply copy behavior assuming
slicing syntax returns a copy. Should Numeric 2 behave the same
way as lists and return a view, or should it return a copy?
How is scalar coercion implemented?
Python has fewer numeric types than Numeric which can cause
coercion problems. For example, when multiplying a Python scalar
of type float and a Numeric array of type float, the Numeric array
is converted to a double, since the Python float type is actually
a double. This is often not the desired behavior, since the
Numeric array will be doubled in size which is likely to be
annoying, particularly for very large arrays. We prefer that the
array type trumps the python type for the same type class, namely
integer, float, and complex. Therefore, an operation between a
Python integer and an Int16 (short) array will return an Int16
array. Whereas an operation between a Python float and an Int16
array would return a Float64 (double) array. Operations between
two arrays use normal coercion rules.
How is integer division handled?
In a future version of Python, the behavior of integer division
will change. The operands will be converted to floats, so the
result will be a float. If we implement the proposed scalar
coercion rules where arrays have precedence over Python scalars,
then dividing an array by an integer will return an integer array
and will not be consistent with a future version of Python which
would return an array of type double. Scientific programmers are
familiar with the distinction between integer and floating-point
division, so should Numeric 2 continue with this behavior?
How should records be implemented?
There are two approaches to implementing records depending on your
point-of-view. The first is to divide arrays into separate
classes depending on the behavior of their types. For example,
numeric arrays are one class, strings a second, and records a
third, because the range and type of operations of each class
differ. As such, a record array is not a new type, but a
mechanism for a more flexible form of array. To easily access and
manipulate such complex data, the class is comprised of numeric
arrays having different byte offsets into the data buffer. For
example, one might have a table consisting of an array of Int16,
Real32 values. Two numeric arrays, one with an offset of 0 bytes
and a stride of 6 bytes to be interpreted as Int16, and one with an
offset of 2 bytes and a stride of 6 bytes to be interpreted as
Real32 would represent the record array. Both numeric arrays
would refer to the same data buffer, but have different offset and
stride attributes, and a different numeric type.
The second approach is to consider a record as one of many array
types, albeit with fewer, and possibly different, array operations
than for numeric arrays. This approach considers an array type to
be a mapping of a fixed-length string. The mapping can either be
simple, like integer and floating-point numbers, or complex, like
a complex number, a byte string, and a C-structure. The record
type effectively merges the struct and Numeric modules into a
multi-dimensional struct array. This approach implies certain
changes to the array interface. For example, the ‘typecode’
keyword argument should probably be changed to the more
descriptive ‘format’ keyword.
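To make the first approach above (two strided numeric views over a single
data buffer) concrete, here is a small stand-alone sketch using only the
standard struct module; the layout (Int16 at offset 0, Real32 at offset 2,
record stride of 6 bytes) matches the example given, and everything else is
made up for illustration:
import struct
# Build a toy buffer of three (Int16, Real32) records, 6 bytes per record.
buf = b''.join(struct.pack('<hf', i, i * 1.5) for i in range(3))
stride = 6
int16_field  = [struct.unpack_from('<h', buf, stride * i + 0)[0] for i in range(3)]
real32_field = [struct.unpack_from('<f', buf, stride * i + 2)[0] for i in range(3)]
print(int16_field)    # [0, 1, 2]
print(real32_field)   # [0.0, 1.5, 3.0]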
How are record semantics defined and implemented?
Whichever implementation approach is taken for records, the
syntax and semantics of how they are to be accessed and
manipulated must be decided, if one wishes to have access to
sub-fields of records. In this case, the record type can
essentially be considered an inhomogeneous list, like a tuple
returned by the unpack method of the struct module; and a 1-d
array of records may be interpreted as a 2-d array with the
second dimension being the index into the list of fields.
This enhanced array semantics makes access to an array of one
or more of the fields easy and straightforward. It also
allows a user to do array operations on a field in a natural
and intuitive way. If we assume that records are implemented
as an array type, then the last dimension defaults to 0 and can
therefore be neglected for arrays comprised of simple types,
like numeric.
How are masked-arrays implemented?
Masked-arrays in Numeric 1 are implemented as a separate array
class. With the ability to add new array types to Numeric 2, it
is possible that masked-arrays in Numeric 2 could be implemented
as a new array type instead of an array class.
How are numerical errors handled (IEEE floating-point errors in
particular)?
It is not clear to the proposers (Paul Barrett and Travis
Oliphant) what is the best or preferred way of handling errors.
Most of the C functions that do the operation iterate over
the inner-most (last) dimension of the array. This dimension
could contain a thousand or more items having one or more errors
of differing type, such as divide-by-zero, underflow, and
overflow. Additionally, keeping track of these errors may come at
the expense of performance. Therefore, we suggest several
options:
Print a message of the most severe error, leaving it to
the user to locate the errors.
Print a message of all errors that occurred and the number
of occurrences, leaving it to the user to locate the errors.
Print a message of all errors that occurred and a list of
where they occurred.
Or use a hybrid approach, printing only the most severe
error, yet keeping track of what and where the errors
occurred. This would allow the user to locate the errors
while keeping the error message brief.
What features are needed to ease the integration of FORTRAN
libraries and code?
It would be a good idea at this stage to consider how to ease the
integration of FORTRAN libraries and user code in Numeric 2.
Implementation Steps
Implement basic UFunc capability
Minimal Array class:
Necessary class attributes and methods, e.g. .shape, .data,
.type, etc.
Minimal ArrayType class:
Int32, Real64, Complex64, Char, Object
Minimal UFunc class:
UFunc instantiation, CFunction registration, UFunc call for
1-D arrays including the rules for doing alignment,
byte-swapping, and coercion.
Minimal C-extension module:
_UFunc, which does the innermost array loop in C.
This step implements whatever is needed to do: ‘c = add(a, b)’
where a, b, and c are 1-D arrays. It teaches us how to add
new UFuncs, to coerce the arrays, to pass the necessary
information to a C iterator method and to do the actual
computation.
Continue enhancing the UFunc iterator and Array class
Implement some access methods for the Array class:
print, repr, getitem, setitem, etc.
Implement multidimensional arrays
Implement some of basic Array methods using UFuncs:
+, -, *, /, etc.
Enable UFuncs to use Python sequences.
Complete the standard UFunc and Array class behavior
Implement getslice and setslice behavior
Work on Array broadcasting rules
Implement Record type
Add additional functionality
Add more UFuncs
Implement buffer or mmap access
Incompatibilities
The following is a list of incompatibilities in behavior between
Numeric 1 and Numeric 2.
Scalar coercion rules
Numeric 1 has a single set of coercion rules for array and Python
numeric types. This can cause unexpected and annoying problems
during the calculation of an array expression. Numeric 2 intends
to overcome these problems by having two sets of coercion rules:
one for arrays and Python numeric types, and another just for
arrays.
No savespace attribute
The savespace attribute in Numeric 1 makes arrays with this
attribute set take precedence over those that do not have it set.
Numeric 2 will not have such an attribute and therefore normal
array coercion rules will be in effect.
Slicing syntax returns a copy
The slicing syntax in Numeric 1 returns a view into the original
array. The slicing behavior for Numeric 2 will be a copy. You
should use the ArrayView class to get a view into an array.
Boolean comparisons return a boolean array
A comparison between arrays in Numeric 1 results in a Boolean
scalar, because of current limitations in Python. The advent of
Rich Comparisons in Python 2.1 will allow an array of Booleans to
be returned.
Type characters are deprecated
Numeric 2 will have an ArrayType class composed of Type instances,
for example Int8, Int16, Int32, and Int for signed integers. The
typecode scheme in Numeric 1 will be available for backward
compatibility, but will be deprecated.
Appendices
Implicit sub-arrays iteration
A computer animation is composed of a number of 2-D images or
frames of identical shape. By stacking these images into a single
block of memory, a 3-D array is created. Yet the operations to be
performed are not meant for the entire 3-D array, but on the set
of 2-D sub-arrays. In most array languages, each frame has to be
extracted, operated on, and then reinserted into the output array
using a for-like loop. The J language allows the programmer to
perform such operations implicitly by having a rank for the frame
and array. By default these ranks will be the same during the
creation of the array. It was the intention of the Numeric 1
developers to implement this feature, since it is based on the
language J. The Numeric 1 code has the required variables for
implementing this behavior, but was never implemented. We intend
to implement implicit sub-array iteration in Numeric 2, if the
array broadcasting rules found in Numeric 1 do not fully support
this behavior.
Copyright
This document is placed in the public domain.
Related PEPs
PEP 207: Rich Comparisons
by Guido van Rossum and David Ascher
PEP 208: Reworking the Coercion Model
by Neil Schemenauer and Marc-Andre’ Lemburg
PEP 211: Adding New Linear Algebra Operators to Python
by Greg Wilson
PEP 225: Elementwise/Objectwise Operators
by Huaiyu Zhu
PEP 228: Reworking Python’s Numeric Model
by Moshe Zadka
References
[1]
Greenfield 2000. private communication.
| Withdrawn | PEP 209 – Multi-dimensional Arrays | Standards Track | This PEP proposes a redesign and re-implementation of the
multi-dimensional array module, Numeric, to make it easier to add
new features and functionality to the module. Aspects of Numeric 2
that will receive special attention are efficient access to arrays
exceeding a gigabyte in size and composed of inhomogeneous data
structures or records. The proposed design uses four Python
classes: ArrayType, UFunc, Array, and ArrayView; and a low-level
C-extension module, _ufunc, to handle the array operations
efficiently. In addition, each array type has its own C-extension
module which defines the coercion rules, operations, and methods
for that type. This design enables new types, features, and
functionality to be added in a modular fashion. The new version
will introduce some incompatibilities with the current Numeric. |
PEP 215 – String Interpolation
Author:
Ka-Ping Yee <ping at zesty.ca>
Status:
Superseded
Type:
Standards Track
Created:
24-Jul-2000
Python-Version:
2.1
Post-History:
Superseded-By:
292
Table of Contents
Abstract
Copyright
Specification
Examples
Discussion
Security Issues
Implementation
References
Abstract
This document proposes a string interpolation feature for Python
to allow easier string formatting. The suggested syntax change
is the introduction of a ‘$’ prefix that triggers the special
interpretation of the ‘$’ character within a string, in a manner
reminiscent of the variable interpolation found in Unix shells,
awk, Perl, or Tcl.
Copyright
This document is in the public domain.
Specification
Strings may be preceded with a ‘$’ prefix that comes before the
leading single or double quotation mark (or triplet) and before
any of the other string prefixes (‘r’ or ‘u’). Such a string is
processed for interpolation after the normal interpretation of
backslash-escapes in its contents. The processing occurs just
before the string is pushed onto the value stack, each time the
string is pushed. In short, Python behaves exactly as if ‘$’
were a unary operator applied to the string. The operation
performed is as follows:
The string is scanned from start to end for the ‘$’ character
(\x24 in 8-bit strings or \u0024 in Unicode strings). If there
are no ‘$’ characters present, the string is returned unchanged.
Any ‘$’ found in the string, followed by one of the two kinds of
expressions described below, is replaced with the value of the
expression as evaluated in the current namespaces. The value is
converted with str() if the containing string is an 8-bit string,
or with unicode() if it is a Unicode string.
A Python identifier optionally followed by any number of
trailers, where a trailer consists of:
- a dot and an identifier,
- an expression enclosed in square brackets, or
- an argument list enclosed in parentheses
(This is exactly the pattern expressed in the Python grammar
by “NAME trailer*”, using the definitions in Grammar/Grammar.)
Any complete Python expression enclosed in curly braces.
Two dollar-signs (“$$”) are replaced with a single “$”.
Examples
Here is an example of an interactive session exhibiting the
expected behaviour of this feature.
>>> a, b = 5, 6
>>> print $'a = $a, b = $b'
a = 5, b = 6
>>> $u'uni${a}ode'
u'uni5ode'
>>> print $'\$a'
5
>>> print $r'\$a'
\5
>>> print $'$$$a.$b'
$5.6
>>> print $'a + b = ${a + b}'
a + b = 11
>>> import sys
>>> print $'References to $a: $sys.getrefcount(a)'
References to 5: 15
>>> print $"sys = $sys, sys = $sys.modules['sys']"
sys = <module 'sys' (built-in)>, sys = <module 'sys' (built-in)>
>>> print $'BDFL = $sys.copyright.split()[4].upper()'
BDFL = GUIDO
Discussion
‘$’ is chosen as the interpolation character within the
string for the sake of familiarity, since it is already used
for this purpose in many other languages and contexts.
It is then natural to choose ‘$’ as a prefix, since it is a
mnemonic for the interpolation character.
Trailers are permitted to give this interpolation mechanism
even more power than the interpolation available in most other
languages, while the expression to be interpolated remains
clearly visible and free of curly braces.
‘$’ works like an operator and could be implemented as an
operator, but that prevents the compile-time optimization
and presents security issues. So, it is only allowed as a
string prefix.
Security Issues
“$” has the power to eval, but only to eval a literal. As
described here (a string prefix rather than an operator), it
introduces no new security issues since the expressions to be
evaluated must be literally present in the code.
Implementation
The Itpl module at [1] provides a
prototype of this feature. It uses the tokenize module to find
the end of an expression to be interpolated, then calls eval()
on the expression each time a value is needed. In the prototype,
the expression is parsed and compiled again each time it is
evaluated.
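As a rough illustration of the scan-and-eval mechanism (not the Itpl module
itself: it uses an explicit namespace instead of the caller's frames, a
regular expression instead of tokenize, and does not handle trailers such as
attribute access or calls), a minimal sketch might look like:
import re
def interpolate(template, namespace):
    # Replace '$name' and '${expression}' by evaluating them in 'namespace';
    # '$$' escapes to a single '$', roughly as the specification describes.
    def repl(match):
        expr = match.group('braced') or match.group('name')
        if expr is None:
            return '$'
        return str(eval(expr, namespace))
    pattern = r'\$(?:\$|\{(?P<braced>[^}]*)\}|(?P<name>[A-Za-z_]\w*))'
    return re.sub(pattern, repl, template)
a, b = 5, 6
print(interpolate('a = $a, b = $b, a + b = ${a + b}, cost: $$3',
                  {'a': a, 'b': b}))
# a = 5, b = 6, a + b = 11, cost: $3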
As an optimization, interpolated strings could be compiled
directly into the corresponding bytecode; that is,
$'a = $a, b = $b'
could be compiled as though it were the expression
('a = ' + str(a) + ', b = ' + str(b))
so that it only needs to be compiled once.
References
[1]
http://www.lfw.org/python/Itpl.py
| Superseded | PEP 215 – String Interpolation | Standards Track | This document proposes a string interpolation feature for Python
to allow easier string formatting. The suggested syntax change
is the introduction of a ‘$’ prefix that triggers the special
interpretation of the ‘$’ character within a string, in a manner
reminiscent to the variable interpolation found in Unix shells,
awk, Perl, or Tcl. |
PEP 216 – Docstring Format
Author:
Moshe Zadka <moshez at zadka.site.co.il>
Status:
Rejected
Type:
Informational
Created:
31-Jul-2000
Post-History:
Superseded-By:
287
Table of Contents
Notice
Abstract
Perl Documentation
Java Documentation
Python Docstring Goals
High Level Solutions
Docstring Format Goals
Docstring Contents
Docstring Basic Structure
Unresolved Issues
Rejected Suggestions
Notice
This PEP is rejected by the author. It has been superseded by PEP
287.
Abstract
Named Python objects, such as modules, classes and functions, have a
string attribute called __doc__. If the first expression inside
the definition is a literal string, that string is assigned
to the __doc__ attribute.
The __doc__ attribute is called a documentation string, or docstring.
It is often used to summarize the interface of the module, class or
function. However, since there is no common format for documentation
string, tools for extracting docstrings and transforming those into
documentation in a standard format (e.g., DocBook) have not sprang
up in abundance, and those that do exist are for the most part
unmaintained and unused.
Perl Documentation
In Perl, most modules are documented in a format called POD – Plain
Old Documentation. This is an easy-to-type, very low level format
which integrates well with the Perl parser. Many tools exist to turn
POD documentation into other formats: info, HTML and man pages, among
others. However, in Perl, the information is not available at run-time.
Java Documentation
In Java, special comments before classes and functions serve to
document the code. A program to extract these and turn them into
HTML documentation is called javadoc, and is part of the standard
Java distribution. However, the only output format that is supported
is HTML, and JavaDoc has a very intimate relationship with HTML.
Python Docstring Goals
Python documentation strings are easy to spot during parsing, and are
also available to the runtime interpreter. This double purpose is
a bit problematic, sometimes: for example, some are reluctant to have
too long docstrings, because they do not want to take much space in
the runtime. In addition, because of the current lack of tools, people
read objects’ docstrings by “print”ing them, so a tendency to make them
brief and free of markups has sprung up. This tendency hinders writing
better documentation-extraction tools, since it causes docstrings to
contain little information, which is hard to parse.
High Level Solutions
To counter the objection that the strings take up space in the running
program, it is suggested that documentation extraction tools will
concatenate a maximum prefix of string literals which appear in the
beginning of a definition. The first of these will also be available
in the interactive interpreter, so it should contain a few summary
lines.
Docstring Format Goals
These are the goals for the docstring format, as discussed ad nauseam
in the doc-sig.
It must be easy to type with any standard text editor.
It must be readable to the casual observer.
It must not contain information which can be deduced from parsing
the module.
It must contain sufficient information so it can be converted
to any reasonable markup format.
It must be possible to write a module’s entire documentation in
docstrings, without feeling hampered by the markup language.
Docstring Contents
For requirement 5 above, it is necessary to specify what must be
in docstrings.
At least the following must be available:
A tag that means “this is a Python something, guess what”
Example: In the sentence “The POP3 class”, we need to mark up “POP3”
as such. The parser will be able to guess it is a class from the contents
of the poplib module, but we need to make it guess.
Tags that mean “this is a Python class/module/class var/instance var…”
Example: The usual Python idiom for singleton class A is to have _A
as the class, and A a function which returns _A objects. It’s usual
to document the class, nonetheless, as being A. This requires the
strength to say “The class A” and have A hyperlinked and marked-up
as a class.
An easy way to include Python source code/Python interactive sessions
Emphasis/bold
List/tables
Docstring Basic Structure
The documentation strings will be in StructuredTextNG
(http://www.zope.org/Members/jim/StructuredTextWiki/StructuredTextNG)
Since StructuredText is not yet strong enough to handle (a) and (b)
above, we will need to extend it. I suggest using
[<optional description>:python identifier].
E.g.: [class:POP3], [:POP3.list], etc. If the description is missing,
a guess will be made from the text.
Unresolved Issues
Is there a way to escape characters in ST? If so, how?
(example: * at the beginning of a line without being bullet symbol)
Is my suggestion above for Python symbols compatible with ST-NG?
How hard would it be to extend ST-NG to support it?
How do we describe input and output types of functions?
What additional constraint do we enforce on each docstring?
(module/class/function)?
What are the guesser rules?
Rejected Suggestions
XML – it’s very hard to type, and too cluttered to read it comfortably.
| Rejected | PEP 216 – Docstring Format | Informational | Named Python objects, such as modules, classes and functions, have a
string attribute called __doc__. If the first expression inside
the definition is a literal string, that string is assigned
to the __doc__ attribute. |
PEP 217 – Display Hook for Interactive Use
Author:
Moshe Zadka <moshez at zadka.site.co.il>
Status:
Final
Type:
Standards Track
Created:
31-Jul-2000
Python-Version:
2.1
Post-History:
Table of Contents
Abstract
Interface
Solution
Jython Issues
Abstract
Python’s interactive mode is one of the implementation’s great
strengths – being able to write expressions on the command line
and get back a meaningful output. However, the output function
cannot be all things to all people, and the current output
function too often falls short of this goal. This PEP describes a
way to provide alternatives to the built-in display function in
Python, so users will have control over the output from the
interactive interpreter.
Interface
The current Python solution has worked for many users, and this
should not break it. Therefore, in the default configuration,
nothing will change in the REPL loop. To change the way the
interpreter prints interactively entered expressions, users
will have to rebind sys.displayhook to a callable object.
The result of calling this object with the result of the
interactively entered expression should be print-able,
and this is what will be printed on sys.stdout.
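For example (using present-day Python 3 spellings of the same hook, so
print() and the builtins module rather than the 2.1-era names; the hook name
my_displayhook is arbitrary):
import builtins
import sys
def my_displayhook(value):
    # Mirror the default hook: ignore None and remember the last result in '_'.
    if value is None:
        return
    builtins._ = None
    print('=>', repr(value))
    builtins._ = value
sys.displayhook = my_displayhook
# In an interactive session, entering the expression  1 + 1  would now print "=> 2".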
Solution
The bytecode PRINT_EXPR will call sys.displayhook(POP()).
A displayhook() will be added to the sys builtin module, which is
equivalent to:
import __builtin__
def displayhook(o):
    if o is None:
        return
    __builtin__._ = None
    print `o`
    __builtin__._ = o
Jython Issues
The method Py.printResult will be similarly changed.
| Final | PEP 217 – Display Hook for Interactive Use | Standards Track | Python’s interactive mode is one of the implementation’s great
strengths – being able to write expressions on the command line
and get back a meaningful output. However, the output function
cannot be all things to all people, and the current output
function too often falls short of this goal. This PEP describes a
way to provides alternatives to the built-in display function in
Python, so users will have control over the output from the
interactive interpreter. |
PEP 220 – Coroutines, Generators, Continuations
Author:
Gordon McMillan <gmcm at hypernet.com>
Status:
Rejected
Type:
Informational
Created:
14-Aug-2000
Post-History:
Table of Contents
Abstract
Abstract
Demonstrates why the changes described in the stackless PEP are
desirable. A low-level continuations module exists. With it,
coroutines and generators and “green” threads can be written. A
higher level module that makes coroutines and generators easy to
create is desirable (and being worked on). The focus of this PEP
is on showing how coroutines, generators, and green threads can
simplify common programming problems.
| Rejected | PEP 220 – Coroutines, Generators, Continuations | Informational | Demonstrates why the changes described in the stackless PEP are
desirable. A low-level continuations module exists. With it,
coroutines and generators and “green” threads can be written. A
higher level module that makes coroutines and generators easy to
create is desirable (and being worked on). The focus of this PEP
is on showing how coroutines, generators, and green threads can
simplify common programming problems. |
PEP 222 – Web Library Enhancements
Author:
A.M. Kuchling <amk at amk.ca>
Status:
Deferred
Type:
Standards Track
Created:
18-Aug-2000
Python-Version:
2.1
Post-History:
22-Dec-2000
Table of Contents
Abstract
Open Issues
New Modules
Major Changes to Existing Modules
Minor Changes to Existing Modules
Rejected Changes
Proposed Interface
Copyright
Abstract
This PEP proposes a set of enhancements to the CGI development
facilities in the Python standard library. Enhancements might be
new features, new modules for tasks such as cookie support, or
removal of obsolete code.
The original intent was to make improvements to Python 2.1.
However, there seemed little interest from the Python community,
and time was lacking, so this PEP has been deferred to some future
Python release.
Open Issues
This section lists changes that have been suggested, but about
which no firm decision has yet been made. In the final version of
this PEP, this section should be empty, as all the changes should
be classified as accepted or rejected.
cgi.py: We should not be told to create our own subclass just so
we can handle file uploads. As a practical matter, I have yet to
find the time to do this right, so I end up reading cgi.py’s temp
file into, at best, another file. Some of our legacy code actually
reads it into a second temp file, then into a final destination!
And even if we did, that would mean creating yet another object
with its __init__ call and associated overhead.
cgi.py: Currently, query data with no = are ignored. Even if
keep_blank_values is set, queries like ...?value=&... are
returned with blank values but queries like ...?value&... are
completely lost. It would be great if such data were made
available through the FieldStorage interface, either as entries
with None as values, or in a separate list.
Utility function: build a query string from a list of 2-tuples
Dictionary-related utility classes: NoKeyErrors (returns an empty
string, never a KeyError), PartialStringSubstitution (returns
the original key string, never a KeyError)
New Modules
This section lists details about entire new packages or modules
that should be added to the Python standard library.
fcgi.py : A new module adding support for the FastCGI protocol.
Robin Dunn’s code needs to be ported to Windows, though.
Major Changes to Existing Modules
This section lists details of major changes to existing modules,
whether in implementation or in interface. The changes in this
section therefore carry greater degrees of risk, either in
introducing bugs or a backward incompatibility.
The cgi.py module would be deprecated. (XXX A new module or
package name hasn’t been chosen yet: ‘web’? ‘cgilib’?)
Minor Changes to Existing Modules
This section lists details of minor changes to existing modules.
These changes should have relatively small implementations, and
have little risk of introducing incompatibilities with previous
versions.
Rejected Changes
The changes listed in this section were proposed for Python 2.1,
but were rejected as unsuitable. For each rejected change, a
rationale is given describing why the change was deemed
inappropriate.
An HTML generation module is not part of this PEP. Several such
modules exist, ranging from HTMLgen’s purely programming
interface to ASP-inspired simple templating to DTML’s complex
templating. There’s no indication of which templating module to
enshrine in the standard library, and that probably means that
no module should be so chosen.
cgi.py: Allowing a combination of query data and POST data.
This doesn’t seem to be standard at all, and therefore is
dubious practice.
Proposed Interface
XXX open issues: naming convention (studlycaps or
underline-separated?); need to look at the cgi.parse*() functions
and see if they can be simplified, too.
Parsing functions: carry over most of the parse* functions from
cgi.py
# The Response class borrows most of its methods from Zope's
# HTTPResponse class.
class Response:
"""
Attributes:
status: HTTP status code to return
headers: dictionary of response headers
body: string containing the body of the HTTP response
"""
def __init__(self, status=200, headers={}, body=""):
pass
def setStatus(self, status, reason=None):
"Set the numeric HTTP response code"
pass
def setHeader(self, name, value):
"Set an HTTP header"
pass
def setBody(self, body):
"Set the body of the response"
pass
def setCookie(self, name, value,
path = '/',
comment = None,
domain = None,
max_age = None,
expires = None,
secure = 0
):
"Set a cookie"
pass
def expireCookie(self, name):
"Remove a cookie from the user"
pass
def redirect(self, url):
"Redirect the browser to another URL"
pass
def __str__(self):
"Convert entire response to a string"
pass
def dump(self):
"Return a string representation useful for debugging"
pass
# XXX methods for specific classes of error:serverError,
# badRequest, etc.?
class Request:
"""
Attributes:
XXX should these be dictionaries, or dictionary-like objects?
.headers : dictionary containing HTTP headers
.cookies : dictionary of cookies
.fields : data from the form
.env : environment dictionary
"""
def __init__(self, environ=os.environ, stdin=sys.stdin,
keep_blank_values=1, strict_parsing=0):
"""Initialize the request object, using the provided environment
and standard input."""
pass
# Should people just use the dictionaries directly?
def getHeader(self, name, default=None):
pass
def getCookie(self, name, default=None):
pass
def getField(self, name, default=None):
"Return field's value as a string (even if it's an uploaded file)"
pass
def getUploadedFile(self, name):
"""Returns a file object that can be read to obtain the contents
of an uploaded file. XXX should this report an error if the
field isn't actually an uploaded file? Or should it wrap
a StringIO around simple fields for consistency?
"""
def getURL(self, n=0, query_string=0):
"""Return the URL of the current request, chopping off 'n' path
components from the right. Eg. if the URL is
"http://foo.com/bar/baz/quux", n=2 would return
"http://foo.com/bar". Does not include the query string (if
any)
"""
def getBaseURL(self, n=0):
"""Return the base URL of the current request, adding 'n' path
components to the end to recreate more of the whole URL.
Eg. if the request URL is
"http://foo.com/q/bar/baz/qux", n=0 would return
"http://foo.com/", and n=2 "http://foo.com/q/bar".
Returned URL does not include the query string, if any.
"""
def dump(self):
"String representation suitable for debugging output"
pass
# Possibilities? I don't know if these are worth doing in the
# basic objects.
def getBrowser(self):
"Returns Mozilla/IE/Lynx/Opera/whatever"
def isSecure(self):
"Return true if this is an SSLified request"
# Module-level function
def wrapper(func, logfile=sys.stderr):
"""
Calls the function 'func', passing it the arguments
(request, response, logfile). Exceptions are trapped and
sent to the file 'logfile'.
"""
# This wrapper will detect if it's being called from the command-line,
# and if so, it will run in a debugging mode; name=value pairs
# can be entered on standard input to set field values.
# (XXX how to do file uploads in this syntax?)
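To make the proposed flow concrete, here is a hypothetical CGI script
written against this interface. The module name "web" is only one of the
candidate names mentioned above, and the way wrapper() emits the response
is an assumption; this is a sketch of intended usage, not a settled design.

import sys
import web   # assumed name for the new module; not yet decided

def main(request, response, logfile):
    # Read a form field and build a reply using the proposed objects.
    name = request.getField("name", default="world")
    response.setHeader("Content-Type", "text/plain")
    response.setBody("Hello, %s!\n" % name)

if __name__ == "__main__":
    # wrapper() constructs the Request/Response pair, calls main(),
    # and traps exceptions, logging them to 'logfile'.
    web.wrapper(main, logfile=sys.stderr)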
Copyright
This document has been placed in the public domain.
| Deferred | PEP 222 – Web Library Enhancements | Standards Track | This PEP proposes a set of enhancements to the CGI development
facilities in the Python standard library. Enhancements might be
new features, new modules for tasks such as cookie support, or
removal of obsolete code. |
PEP 223 – Change the Meaning of \x Escapes
Author:
Tim Peters <tim.peters at gmail.com>
Status:
Final
Type:
Standards Track
Created:
20-Aug-2000
Python-Version:
2.0
Post-History:
23-Aug-2000
Table of Contents
Abstract
Syntax
Semantics
Example
History and Rationale
Development and Discussion
Backward Compatibility
Effects on Other Tools
Reference Implementation
BDFL Pronouncements
References
Copyright
Abstract
Change \x escapes, in both 8-bit and Unicode strings, to consume
exactly the two hex digits following. The proposal views this as
correcting an original design flaw, leading to clearer expression
in all flavors of string, a cleaner Unicode story, better
compatibility with Perl regular expressions, and with minimal risk
to existing code.
Syntax
The syntax of \x escapes, in all flavors of non-raw strings, becomes
\xhh
where h is a hex digit (0-9, a-f, A-F). The exact syntax in 1.5.2 is
not clearly specified in the Reference Manual; it says
\xhh...
implying “two or more” hex digits, but one-digit forms are also
accepted by the 1.5.2 compiler, and a plain \x is “expanded” to
itself (i.e., a backslash followed by the letter x). It’s unclear
whether the Reference Manual intended either of the 1-digit or
0-digit behaviors.
Semantics
In an 8-bit non-raw string,
\xij
expands to the character
chr(int(ij, 16))
Note that this is the same as in 1.6 and before.
In a Unicode string,
\xij
acts the same as
\u00ij
i.e. it expands to the obvious Latin-1 character from the initial
segment of the Unicode space.
An \x not followed by at least two hex digits is a compile-time error,
specifically ValueError in 8-bit strings, and UnicodeError (a subclass
of ValueError) in Unicode strings. Note that if an \x is followed by
more than two hex digits, only the first two are “consumed”. In 1.6
and before all but the last two were silently ignored.
Example
In 1.5.2:
>>> "\x123465" # same as "\x65"
'e'
>>> "\x65"
'e'
>>> "\x1"
'\001'
>>> "\x\x"
'\\x\\x'
>>>
In 2.0:
>>> "\x123465" # \x12 -> \022, "3456" left alone
'\0223456'
>>> "\x65"
'e'
>>> "\x1"
[ValueError is raised]
>>> "\x\x"
[ValueError is raised]
>>>
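The examples above use 8-bit strings only; under the proposed rule the
Unicode case behaves analogously (an illustrative sketch, not taken from
the original examples):
>>> u"\x41" == u"\u0041"   # \x consumes exactly two hex digits
1
>>> u"\x1"
[UnicodeError is raised]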
History and Rationale
\x escapes were introduced in C as a way to specify variable-width
character encodings. Exactly which encodings those were, and how many
hex digits they required, was left up to each implementation. The
language simply stated that \x “consumed” all hex digits following,
and left the meaning up to each implementation. So, in effect, \x in C
is a standard hook to supply platform-defined behavior.
Because Python explicitly aims at platform independence, the \x escape
in Python (up to and including 1.6) has been treated the same way
across all platforms: all except the last two hex digits were
silently ignored. So the only actual use for \x escapes in Python was
to specify a single byte using hex notation.
Larry Wall appears to have realized that this was the only real use for
\x escapes in a platform-independent language, as the proposed rule for
Python 2.0 is in fact what Perl has done from the start (although you
need to run in Perl -w mode to get warned about \x escapes with fewer
than 2 hex digits following – it’s clearly more Pythonic to insist on
2 all the time).
When Unicode strings were introduced to Python, \x was generalized so
as to ignore all but the last four hex digits in Unicode strings.
This caused a technical difficulty for the new regular expression engine:
SRE tries very hard to allow mixing 8-bit and Unicode patterns and
strings in intuitive ways, and it no longer had any way to guess what,
for example, r"\x123456" should mean as a pattern: is it asking to match
the 8-bit character \x56 or the Unicode character \u3456?
There are hacky ways to guess, but it doesn’t end there. The ISO C99
standard also introduces 8-digit \U12345678 escapes to cover the entire
ISO 10646 character space, and it’s also desired that Python 2 support
that from the start. But then what are \x escapes supposed to mean?
Do they ignore all but the last eight hex digits then? And if less
than 8 following in a Unicode string, all but the last 4? And if less
than 4, all but the last 2?
This was getting messier by the minute, and the proposal cuts the
Gordian knot by making \x simpler instead of more complicated. Note
that the 4-digit generalization to \xijkl in Unicode strings was also
redundant, because it meant exactly the same thing as \uijkl in Unicode
strings. It’s more Pythonic to have just one obvious way to specify a
Unicode character via hex notation.
Development and Discussion
The proposal was worked out among Guido van Rossum, Fredrik Lundh and
Tim Peters in email. It was subsequently explained and discussed on
Python-Dev under subject “Go x yourself” [1], starting 2000-08-03.
Response was overwhelmingly positive; no objections were raised.
Backward Compatibility
Changing the meaning of \x escapes does carry risk of breaking existing
code, although no instances of incompatibility have yet been discovered.
The risk is believed to be minimal.
Tim Peters verified that, except for pieces of the standard test suite
deliberately provoking end cases, there are no instances of \xabcdef...
with fewer or more than 2 hex digits following, in either the Python
CVS development tree, or in assorted Python packages sitting on his
machine.
It’s unlikely there are any with fewer than 2, because the Reference
Manual implied they weren’t legal (although this is debatable!). If
there are any with more than 2, Guido is ready to argue they were buggy
anyway <0.9 wink>.
Guido reported that the O’Reilly Python books already document that
Python works the proposed way, likely due to their Perl editing
heritage (as above, Perl worked (very close to) the proposed way from
its start).
Finn Bock reported that what JPython does with \x escapes is
unpredictable today. This proposal gives a clear meaning that can be
consistently and easily implemented across all Python implementations.
Effects on Other Tools
Believed to be none. The candidates for breakage would mostly be
parsing tools, but the author knows of none that worry about the
internal structure of Python strings beyond the approximation “when
there’s a backslash, swallow the next character”. Tim Peters checked
python-mode.el, the std tokenize.py and pyclbr.py, and the IDLE syntax
coloring subsystem, and believes there’s no need to change any of
them. Tools like tabnanny.py and checkappend.py inherit their immunity
from tokenize.py.
Reference Implementation
The code changes are so simple that a separate patch will not be produced.
Fredrik Lundh is writing the code, is an expert in the area, and will
simply check the changes in before 2.0b1 is released.
BDFL Pronouncements
Yes, ValueError, not SyntaxError. “Problems with literal interpretations
traditionally raise ‘runtime’ exceptions rather than syntax errors.”
References
[1]
Tim Peters, Go x yourself
https://mail.python.org/pipermail/python-dev/2000-August/007825.html
Copyright
This document has been placed in the public domain.
| Final | PEP 223 – Change the Meaning of \x Escapes | Standards Track | Change \x escapes, in both 8-bit and Unicode strings, to consume
exactly the two hex digits following. The proposal views this as
correcting an original design flaw, leading to clearer expression
in all flavors of string, a cleaner Unicode story, better
compatibility with Perl regular expressions, and with minimal risk
to existing code. |
PEP 226 – Python 2.1 Release Schedule
Author:
Jeremy Hylton <jeremy at alum.mit.edu>
Status:
Final
Type:
Informational
Topic:
Release
Created:
16-Oct-2000
Python-Version:
2.1
Post-History:
Table of Contents
Abstract
Release Schedule
Open issues for Python 2.0 beta 2
Guidelines for making changes for Python 2.1
General guidelines for submitting patches and making changes
Abstract
This document describes the post Python 2.0 development and
release schedule. According to this schedule, Python 2.1 will be
released in April of 2001. The schedule primarily concerns
itself with PEP-size items. Small bug fixes and changes will
occur up until the first beta release.
Release Schedule
Tentative future release dates
[bugfix release dates go here]
Past release dates:
17-Apr-2001: 2.1 final release
15-Apr-2001: 2.1 release candidate 2
13-Apr-2001: 2.1 release candidate 1
23-Mar-2001: Python 2.1 beta 2 release
02-Mar-2001: First 2.1 beta release
02-Feb-2001: Python 2.1 alpha 2 release
22-Jan-2001: Python 2.1 alpha 1 release
16-Oct-2000: Python 2.0 final release
Open issues for Python 2.0 beta 2
Add a default unit testing framework to the standard library.
Guidelines for making changes for Python 2.1
The guidelines and schedule will be revised based on discussion in
the python-dev@python.org mailing list.
The PEP system was instituted late in the Python 2.0 development
cycle and many changes did not follow the process described in PEP 1.
The development process for 2.1, however, will follow the PEP
process as documented.
The first eight weeks following 2.0 final will be the design and
review phase. By the end of this period, any PEP that is proposed
for 2.1 should be ready for review. This means that the PEP is
written and discussion has occurred on the python-dev@python.org
and python-list@python.org mailing lists.
The next six weeks will be spent reviewing the PEPs and
implementing and testing the accepted PEPs. When this period
stops, we will end consideration of any incomplete PEPs. Near the
end of this period, there will be a feature freeze where any small
features not worthy of a PEP will not be accepted.
Before the final release, we will have six weeks of beta testing
and a release candidate or two.
General guidelines for submitting patches and making changes
Use good sense when committing changes. You should know what we
mean by good sense or we wouldn’t have given you commit privileges
<0.5 wink>. Some specific examples of good sense include:
Do whatever the dictator tells you.
Discuss any controversial changes on python-dev first. If you
get a lot of +1 votes and no -1 votes, make the change. If you
get some -1 votes, think twice; consider asking Guido what he
thinks.
If the change is to code you contributed, it probably makes
sense for you to fix it.
If the change affects code someone else wrote, it probably makes
sense to ask him or her first.
You can use the SourceForge (SF) Patch Manager to submit a patch
and assign it to someone for review.
Any significant new feature must be described in a PEP and
approved before it is checked in.
Any significant code addition, such as a new module or large
patch, must include test cases for the regression test and
documentation. A patch should not be checked in until the tests
and documentation are ready.
If you fix a bug, you should write a test case that would have
caught the bug.
If you commit a patch from the SF Patch Manager or fix a bug from
the Jitterbug database, be sure to reference the patch/bug number
in the CVS log message. Also be sure to change the status in the
patch manager or bug database (if you have access to the bug
database).
It is not acceptable for any checked in code to cause the
regression test to fail. If a checkin causes a failure, it must
be fixed within 24 hours or it will be backed out.
All contributed C code must be ANSI C. If possible check it with
two different compilers, e.g. gcc and MSVC.
All contributed Python code must follow Guido’s Python style
guide. http://www.python.org/doc/essays/styleguide.html
It is understood that any code contributed will be released under
an Open Source license. Do not contribute code if it can’t be
released this way.
| Final | PEP 226 – Python 2.1 Release Schedule | Informational | This document describes the post Python 2.0 development and
release schedule. According to this schedule, Python 2.1 will be
released in April of 2001. The schedule primarily concerns
itself with PEP-size items. Small bug fixes and changes will
occur up until the first beta release. |
PEP 227 – Statically Nested Scopes
Author:
Jeremy Hylton <jeremy at alum.mit.edu>
Status:
Final
Type:
Standards Track
Created:
01-Nov-2000
Python-Version:
2.1
Post-History:
Table of Contents
Abstract
Introduction
Specification
Discussion
Examples
Backwards compatibility
Compatibility of C API
locals() / vars()
Warnings and Errors
import * used in function scope
bare exec in function scope
local shadows global
Rebinding names in enclosing scopes
Implementation
References
Copyright
Abstract
This PEP describes the addition of statically nested scoping
(lexical scoping) for Python 2.2, and as a source level option
for python 2.1. In addition, Python 2.1 will issue warnings about
constructs whose meaning may change when this feature is enabled.
The old language definition (2.0 and before) defines exactly three
namespaces that are used to resolve names – the local, global,
and built-in namespaces. The addition of nested scopes allows
resolution of unbound local names in enclosing functions’
namespaces.
The most visible consequence of this change is that lambdas (and
other nested functions) can reference variables defined in the
surrounding namespace. Currently, lambdas must often use default
arguments to explicitly create bindings in the lambda’s
namespace.
Introduction
This proposal changes the rules for resolving free variables in
Python functions. The new name resolution semantics will take
effect with Python 2.2. These semantics will also be available in
Python 2.1 by adding “from __future__ import nested_scopes” to the
top of a module. (See PEP 236.)
The Python 2.0 definition specifies exactly three namespaces to
check for each name – the local namespace, the global namespace,
and the builtin namespace. According to this definition, if a
function A is defined within a function B, the names bound in B
are not visible in A. The proposal changes the rules so that
names bound in B are visible in A (unless A contains a name
binding that hides the binding in B).
This specification introduces rules for lexical scoping that are
common in Algol-like languages. The combination of lexical
scoping and existing support for first-class functions is
reminiscent of Scheme.
The changed scoping rules address two problems – the limited
utility of lambda expressions (and nested functions in general),
and the frequent confusion of new users familiar with other
languages that support nested lexical scopes, e.g. the inability
to define recursive functions except at the module level.
The lambda expression yields an unnamed function that evaluates a
single expression. It is often used for callback functions. In
the example below (written using the Python 2.0 rules), any name
used in the body of the lambda must be explicitly passed as a
default argument to the lambda.
from Tkinter import *
root = Tk()
Button(root, text="Click here",
       command=lambda root=root: root.test.configure(text="..."))
This approach is cumbersome, particularly when there are several
names used in the body of the lambda. The long list of default
arguments obscures the purpose of the code. The proposed
solution, in crude terms, implements the default argument approach
automatically. The “root=root” argument can be omitted.
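A sketch of the example rewritten under the proposed rules follows; it is
wrapped in a function so that the name lookup goes through an enclosing
function scope rather than the module namespace (the sketch is
illustrative, not part of the original text):

from Tkinter import *

def make_button(root):
    # Under nested scopes the lambda body finds 'root' in the enclosing
    # function's namespace; the 'root=root' default argument is no
    # longer needed.
    return Button(root, text="Click here",
                  command=lambda: root.test.configure(text="..."))

root = Tk()
make_button(root)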
The new name resolution semantics will cause some programs to
behave differently than they did under Python 2.0. In some cases,
programs will fail to compile. In other cases, names that were
previously resolved using the global namespace will be resolved
using the local namespace of an enclosing function. In Python
2.1, warnings will be issued for all statements that will behave
differently.
Specification
Python is a statically scoped language with block structure, in
the tradition of Algol. A code block or region, such as a
module, class definition, or function body, is the basic unit of a
program.
Names refer to objects. Names are introduced by name binding
operations. Each occurrence of a name in the program text refers
to the binding of that name established in the innermost function
block containing the use.
The name binding operations are argument declaration, assignment,
class and function definition, import statements, for statements,
and except clauses. Each name binding occurs within a block
defined by a class or function definition or at the module level
(the top-level code block).
If a name is bound anywhere within a code block, all uses of the
name within the block are treated as references to the current
block. (Note: This can lead to errors when a name is used within
a block before it is bound.)
If the global statement occurs within a block, all uses of the
name specified in the statement refer to the binding of that name
in the top-level namespace. Names are resolved in the top-level
namespace by searching the global namespace, i.e. the namespace of
the module containing the code block, and in the builtin
namespace, i.e. the namespace of the __builtin__ module. The
global namespace is searched first. If the name is not found
there, the builtin namespace is searched. The global statement
must precede all uses of the name.
If a name is used within a code block, but it is not bound there
and is not declared global, the use is treated as a reference to
the nearest enclosing function region. (Note: If a region is
contained within a class definition, the name bindings that occur
in the class block are not visible to enclosed functions.)
A class definition is an executable statement that may contain
uses and definitions of names. These references follow the normal
rules for name resolution. The namespace of the class definition
becomes the attribute dictionary of the class.
The following operations are name binding operations. If they
occur within a block, they introduce new local names in the
current block unless there is also a global declaration.
Function definition: def name ...
Argument declaration: def f(...name...), lambda ...name...
Class definition: class name ...
Assignment statement: name = ...
Import statement: import name, import module as name,
from module import name
Implicit assignment: names are bound by for statements and except
clauses
There are several cases where Python statements are illegal when
used in conjunction with nested scopes that contain free
variables.
If a variable is referenced in an enclosed scope, it is an error
to delete the name. The compiler will raise a SyntaxError for
‘del name’.
If the wild card form of import (import *) is used in a function
and the function contains a nested block with free variables, the
compiler will raise a SyntaxError.
If exec is used in a function and the function contains a nested
block with free variables, the compiler will raise a SyntaxError
unless the exec explicitly specifies the local namespace for the
exec. (In other words, “exec obj” would be illegal, but
“exec obj in ns” would be legal.)
If a name bound in a function scope is also the name of a module
global name or a standard builtin name, and the function contains
a nested function scope that references the name, the compiler
will issue a warning. The name resolution rules will result in
different bindings under Python 2.0 than under Python 2.2. The
warning indicates that the program may not run correctly with all
versions of Python.
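The following sketch (not part of the original text) illustrates the first
two restrictions; both functions would be rejected at compile time under
the new rules:

def outer():
    x = 1
    def inner():
        return x            # 'x' is referenced from an enclosed scope ...
    del x                   # ... so 'del x' raises SyntaxError
    return inner

def runner(code, ns):
    def helper():
        return data         # free variable in a nested block
    exec code               # bare exec here raises SyntaxError
    exec code in ns         # naming the namespace explicitly is legal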
Discussion
The specified rules allow names defined in a function to be
referenced in any nested function defined with that function. The
name resolution rules are typical for statically scoped languages,
with three primary exceptions:
Names in class scope are not accessible.
The global statement short-circuits the normal rules.
Variables are not declared.
Names in class scope are not accessible. Names are resolved in
the innermost enclosing function scope. If a class definition
occurs in a chain of nested scopes, the resolution process skips
class definitions. This rule prevents odd interactions between
class attributes and local variable access. If a name binding
operation occurs in a class definition, it creates an attribute on
the resulting class object. To access this variable in a method,
or in a function nested within a method, an attribute reference
must be used, either via self or via the class name.
An alternative would have been to allow name binding in class
scope to behave exactly like name binding in function scope. This
rule would allow class attributes to be referenced either via
attribute reference or simple name. This option was ruled out
because it would have been inconsistent with all other forms of
class and instance attribute access, which always use attribute
references. Code that used simple names would have been obscure.
The global statement short-circuits the normal rules. Under the
proposal, the global statement has exactly the same effect that it
does for Python 2.0. It is also noteworthy because it allows name
binding operations performed in one block to change bindings in
another block (the module).
Variables are not declared. If a name binding operation occurs
anywhere in a function, then that name is treated as local to the
function and all references refer to the local binding. If a
reference occurs before the name is bound, a NameError is raised.
The only kind of declaration is the global statement, which allows
programs to be written using mutable global variables. As a
consequence, it is not possible to rebind a name defined in an
enclosing scope. An assignment operation can only bind a name in
the current scope or in the global scope. The lack of
declarations and the inability to rebind names in enclosing scopes
are unusual for lexically scoped languages; there is typically a
mechanism to create name bindings (e.g. lambda and let in Scheme)
and a mechanism to change the bindings (set! in Scheme).
Examples
A few examples are included to illustrate the way the rules work.
>>> def make_adder(base):
...     def adder(x):
...         return base + x
...     return adder
>>> add5 = make_adder(5)
>>> add5(6)
11

>>> def make_fact():
...     def fact(n):
...         if n == 1:
...             return 1L
...         else:
...             return n * fact(n - 1)
...     return fact
>>> fact = make_fact()
>>> fact(7)
5040L

>>> def make_wrapper(obj):
...     class Wrapper:
...         def __getattr__(self, attr):
...             if attr[0] != '_':
...                 return getattr(obj, attr)
...             else:
...                 raise AttributeError, attr
...     return Wrapper()
>>> class Test:
...     public = 2
...     _private = 3
>>> w = make_wrapper(Test())
>>> w.public
2
>>> w._private
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
AttributeError: _private
An example from Tim Peters demonstrates the potential pitfalls of
nested scopes in the absence of declarations:
i = 6
def f(x):
    def g():
        print i
    # ...
    # skip to the next page
    # ...
    for i in x:  # ah, i *is* local to f, so this is what g sees
        pass
    g()
The call to g() will refer to the variable i bound in f() by the for
loop. If g() is called before the loop is executed, a NameError will
be raised.
Backwards compatibility
There are two kinds of compatibility problems caused by nested
scopes. In one case, code that behaved one way in earlier
versions behaves differently because of nested scopes. In the
other cases, certain constructs interact badly with nested scopes
and will trigger SyntaxErrors at compile time.
The following example from Skip Montanaro illustrates the first
kind of problem:
x = 1
def f1():
    x = 2
    def inner():
        print x
    inner()
Under the Python 2.0 rules, the print statement inside inner()
refers to the global variable x and will print 1 if f1() is
called. Under the new rules, it refers to f1()’s namespace,
the nearest enclosing scope with a binding.
The problem occurs only when a global variable and a local
variable share the same name and a nested function uses that name
to refer to the global variable. This is poor programming
practice, because readers will easily confuse the two different
variables. One example of this problem was found in the Python
standard library during the implementation of nested scopes.
To address this problem, which is unlikely to occur often, the
Python 2.1 compiler (when nested scopes are not enabled) issues a
warning.
The other compatibility problem is caused by the use of import *
and ‘exec’ in a function body, when that function contains a
nested scope and the contained scope has free variables. For
example:
y = 1
def f():
    exec "y = 'gotcha'"  # or from module import *
    def g():
        return y
    ...
At compile-time, the compiler cannot tell whether an exec that
operates on the local namespace or an import * will introduce
name bindings that shadow the global y. Thus, it is not possible
to tell whether the reference to y in g() should refer to the
global or to a local name in f().
In discussion on the python-list, people argued for both possible
interpretations. On the one hand, some thought that the reference
in g() should be bound to a local y if one exists. One problem
with this interpretation is that it is impossible for a human
reader of the code to determine the binding of y by local
inspection. It seems likely to introduce subtle bugs. The other
interpretation is to treat exec and import * as dynamic features
that do not affect static scoping. Under this interpretation, the
exec and import * would introduce local names, but those names
would never be visible to nested scopes. In the specific example
above, the code would behave exactly as it did in earlier versions
of Python.
Since each interpretation is problematic and the exact meaning
ambiguous, the compiler raises an exception. The Python 2.1
compiler issues a warning when nested scopes are not enabled.
A brief review of three Python projects (the standard library,
Zope, and a beta version of PyXPCOM) found four backwards
compatibility issues in approximately 200,000 lines of code.
There was one example of case #1 (subtle behavior change) and two
examples of import * problems in the standard library.
(The interpretation of the import * and exec restriction that was
implemented in Python 2.1a2 was much more restrictive, based on
language in the reference manual that had never been enforced.
These restrictions were relaxed following the release.)
Compatibility of C API
The implementation causes several Python C API functions to
change, including PyCode_New(). As a result, C extensions may
need to be updated to work correctly with Python 2.1.
locals() / vars()
These functions return a dictionary containing the current scope’s
local variables. Modifications to the dictionary do not affect
the values of variables. Under the current rules, the use of
locals() and globals() allows the program to gain access to all
the namespaces in which names are resolved.
An analogous function will not be provided for nested scopes.
Under this proposal, it will not be possible to gain
dictionary-style access to all visible scopes.
Warnings and Errors
The compiler will issue warnings in Python 2.1 to help identify
programs that may not compile or run correctly under future
versions of Python. Under Python 2.2, or under Python 2.1 when the
nested_scopes future statement is used (collectively referred to as
“future semantics” in this section), the compiler will issue
SyntaxErrors in some cases.
The warnings typically apply to a function that contains a
nested function that has free variables. For example, if function
F contains a function G and G uses the builtin len(), then F is a
function that contains a nested function (G) with a free variable
(len). The label “free-in-nested” will be used to describe these
functions.
import * used in function scope
The language reference specifies that import * may only occur
in a module scope. (Sec. 6.11) The implementation of C
Python has supported import * at the function scope.
If import * is used in the body of a free-in-nested function,
the compiler will issue a warning. Under future semantics,
the compiler will raise a SyntaxError.
bare exec in function scope
The exec statement allows two optional expressions following
the keyword “in” that specify the namespaces used for locals
and globals. An exec statement that omits both of these
namespaces is a bare exec.
If a bare exec is used in the body of a free-in-nested
function, the compiler will issue a warning. Under future
semantics, the compiler will raise a SyntaxError.
local shadows global
If a free-in-nested function has a binding for a local
variable that (1) is used in a nested function and (2) is the
same as a global variable, the compiler will issue a warning.
Rebinding names in enclosing scopes
There are technical issues that make it difficult to support
rebinding of names in enclosing scopes, but the primary reason
that it is not allowed in the current proposal is that Guido is
opposed to it. His motivation: it is difficult to support,
because it would require a new mechanism that would allow the
programmer to specify that an assignment in a block is supposed to
rebind the name in an enclosing block; presumably a keyword or
special syntax (x := 3) would make this possible. Given that this
would encourage the use of local variables to hold state that is
better stored in a class instance, it’s not worth adding new
syntax to make this possible (in Guido’s opinion).
The proposed rules allow programmers to achieve the effect of
rebinding, albeit awkwardly. The name that will be effectively
rebound by enclosed functions is bound to a container object. In
place of assignment, the program uses modification of the
container to achieve the desired effect:
def bank_account(initial_balance):
    balance = [initial_balance]
    def deposit(amount):
        balance[0] = balance[0] + amount
        return balance
    def withdraw(amount):
        balance[0] = balance[0] - amount
        return balance
    return deposit, withdraw
Support for rebinding in nested scopes would make this code
clearer. A class that defines deposit() and withdraw() methods
and the balance as an instance variable would be clearer still.
Since classes seem to achieve the same effect in a more
straightforward manner, they are preferred.
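For comparison, a sketch of that class-based version (illustrative, not
part of the original text):

class BankAccount:
    def __init__(self, initial_balance):
        self.balance = initial_balance
    def deposit(self, amount):
        # Plain attribute assignment replaces mutation of a container.
        self.balance = self.balance + amount
        return self.balance
    def withdraw(self, amount):
        self.balance = self.balance - amount
        return self.balance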
Implementation
The implementation for C Python uses flat closures [1]. Each def
or lambda expression that is executed will create a closure if the
body of the function or any contained function has free
variables. Using flat closures, the creation of closures is
somewhat expensive but lookup is cheap.
The implementation adds several new opcodes and two new kinds of
names in code objects. A variable can be either a cell variable
or a free variable for a particular code object. A cell variable
is referenced by containing scopes; as a result, the function
where it is defined must allocate separate storage for it on each
invocation. A free variable is referenced via a function’s
closure.
The choice of flat closures was made based on three factors.
First, nested functions are presumed to be used infrequently,
deeply nested (several levels of nesting) still less frequently.
Second, lookup of names in a nested scope should be fast.
Third, the use of nested scopes, particularly where a function
that accesses an enclosing scope is returned, should not prevent
unreferenced objects from being reclaimed by the garbage
collector.
References
[1]
Luca Cardelli. Compiling a functional language. In Proc. of
the 1984 ACM Conference on Lisp and Functional Programming,
pp. 208-217, Aug. 1984
https://dl.acm.org/doi/10.1145/800055.802037
Copyright
| Final | PEP 227 – Statically Nested Scopes | Standards Track | This PEP describes the addition of statically nested scoping
(lexical scoping) for Python 2.2, and as a source level option
for python 2.1. In addition, Python 2.1 will issue warnings about
constructs whose meaning may change when this feature is enabled. |
PEP 228 – Reworking Python’s Numeric Model
Author:
Moshe Zadka <moshez at zadka.site.co.il>, Guido van Rossum <guido at python.org>
Status:
Withdrawn
Type:
Standards Track
Created:
04-Nov-2000
Post-History:
Table of Contents
Withdrawal
Abstract
Rationale
Other Numerical Models
Suggested Interface For Python’s Numerical Model
Coercion
Inexact Operations
Numerical Python Issues
Unresolved Issues
Copyright
Withdrawal
This PEP has been withdrawn in favor of PEP 3141.
Abstract
Today, Python’s numerical model is similar to the C numeric model:
there are several unrelated numerical types, and when operations
between numerical types are requested, coercions happen. While
the C rationale for the numerical model is that it is very similar
to what happens at the hardware level, that rationale does not
apply to Python. So, while it is acceptable to C programmers that
2/3 == 0, it is surprising to many Python programmers.
NOTE: in the light of recent discussions in the newsgroup, the
motivation in this PEP (and details) need to be extended.
Rationale
In usability studies, one of the least usable aspects of Python was
the fact that integer division returns the floor of the division.
This makes it hard to program correctly, requiring casts to
float() in various parts through the code. Python’s numerical
model stems from C, while a model that might be easier to work with
can be based on the mathematical understanding of numbers.
Other Numerical Models
Perl’s numerical model is that there is one type of numbers –
floating point numbers. While it is consistent and superficially
non-surprising, it tends to have subtle gotchas. One of these is
that printing numbers is very tricky, and requires correct
rounding. In Perl, there is also a mode where all numbers are
integers. This mode also has its share of problems, which arise
from the fact that there is not even an approximate way of
dividing numbers and getting meaningful answers.
Suggested Interface For Python’s Numerical Model
While coercion rules will remain for add-on types and classes, the
built in type system will have exactly one Python type – a
number. There are several things which can be considered “number
methods”:
isnatural()
isintegral()
isrational()
isreal()
iscomplex()
isexact()
Obviously, a number which answers true to a question from 1 to 5, will
also answer true to any following question. If isexact() is not true,
then any answer might be wrong.
(But not horribly wrong: it’s close to the truth.)
Now, there are two things the model promises for the field operations
(+, -, /, *):
If both operands satisfy isexact(), the result satisfies
isexact().
All field rules are true, except that for not-isexact() numbers,
they might be only approximately true.
One consequence of these two rules is that all exact calculations
are done as (complex) rationals: since the field laws must hold,
then
(a/b)*b == a
must hold.
There is a built-in function, inexact(), which takes a number
and returns an inexact number which is a good approximation.
Inexact numbers must be at least as accurate as if they were
using IEEE-754.
Several of the classical Python functions will return exact numbers
even when given inexact numbers: e.g., int().
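To make the promised behaviour concrete, a purely hypothetical session
under the proposed model might look like this; none of these methods or
guarantees exist in current Python, the names simply follow this PEP:

x = 2 / 3                  # an exact (rational) number under the model
assert x.isexact() and x.isrational()
assert (x * 3) == 2        # the field law (a/b)*b == a holds exactly
y = inexact(x)             # proposed built-in: a good approximation
assert not y.isexact()
assert int(y) == 0         # classical functions return exact results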
Coercion
The number type does not define nb_coerce.
Any numeric operation slot, when receiving something other than PyNumber,
refuses to implement it.
Inexact Operations
The functions in the math module will be allowed to return
inexact results for exact values. However, they will never return
a non-real number. The functions in the cmath module are also
allowed to return an inexact result for an exact argument, and are
furthermore allowed to return a complex result for a real
argument.
Numerical Python Issues
People who use Numerical Python do so for high-performance vector
operations. Therefore, NumPy should keep its hardware based
numeric model.
Unresolved Issues
Which number literals will be exact, and which inexact?
How do we deal with IEEE 754 operations? (probably, isnan/isinf should
be methods)
On 64-bit machines, comparisons between ints and floats may be
broken when the comparison involves conversion to float. Ditto
for comparisons between longs and floats. This can be dealt with
by avoiding the conversion to float. (Due to Andrew Koenig.)
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 228 – Reworking Python’s Numeric Model | Standards Track | Today, Python’s numerical model is similar to the C numeric model:
there are several unrelated numerical types, and when operations
between numerical types are requested, coercions happen. While
the C rationale for the numerical model is that it is very similar
to what happens at the hardware level, that rationale does not
apply to Python. So, while it is acceptable to C programmers that
2/3 == 0, it is surprising to many Python programmers. |
PEP 230 – Warning Framework
Author:
Guido van Rossum <guido at python.org>
Status:
Final
Type:
Standards Track
Created:
28-Nov-2000
Python-Version:
2.1
Post-History:
05-Nov-2000
Table of Contents
Abstract
Motivation
APIs For Issuing Warnings
Warnings Categories
The Warnings Filter
Warnings Output And Formatting Hooks
API For Manipulating Warning Filters
Command Line Syntax
Open Issues
Rejected Concerns
Implementation
Abstract
This PEP proposes a C and Python level API, as well as command
line flags, to issue warning messages and control what happens to
them. This is mostly based on GvR’s proposal posted to python-dev
on 05-Nov-2000, with some ideas (such as using classes to
categorize warnings) merged in from Paul Prescod’s
counter-proposal posted on the same date. Also, an attempt to
implement the proposal caused several small tweaks.
Motivation
With Python 3000 looming, it is necessary to start issuing
warnings about the use of obsolete or deprecated features, in
addition to errors. There are also lots of other reasons to be
able to issue warnings, both from C and from Python code, both at
compile time and at run time.
Warnings aren’t fatal, and thus it’s possible that a program
triggers the same warning many times during a single execution.
It would be annoying if a program emitted an endless stream of
identical warnings. Therefore, a mechanism is needed that
suppresses multiple identical warnings.
It is also desirable to have user control over which warnings are
printed. While in general it is useful to see all warnings all
the time, there may be times where it is impractical to fix the
code right away in a production program. In this case, there
should be a way to suppress warnings.
It is also useful to be able to suppress specific warnings during
program development, e.g. when a warning is generated by a piece
of 3rd party code that cannot be fixed right away, or when there
is no way to fix the code (possibly a warning message is generated
for a perfectly fine piece of code). It would be unwise to offer
to suppress all warnings in such cases: the developer would miss
warnings about the rest of the code.
On the other hand, there are also situations conceivable where
some or all warnings are better treated as errors. For example,
it may be a local coding standard that a particular deprecated
feature should not be used. In order to enforce this, it is
useful to be able to turn the warning about this particular
feature into an error, raising an exception (without necessarily
turning all warnings into errors).
Therefore, I propose to introduce a flexible “warning filter”
which can filter out warnings or change them into exceptions,
based on:
Where in the code they are generated (per package, module, or
function)
The warning category (warning categories are discussed below)
A specific warning message
The warning filter must be controllable both from the command line
and from Python code.
APIs For Issuing Warnings
To issue a warning from Python:

    import warnings
    warnings.warn(message[, category[, stacklevel]])
The category argument, if given, must be a warning category
class (see below); it defaults to warnings.UserWarning. This
may raise an exception if the particular warning issued is
changed into an error by the warnings filter. The stacklevel
can be used by wrapper functions written in Python, like this:
    def deprecation(message):
        warn(message, DeprecationWarning, stacklevel=2)
This makes the warning refer to the deprecation()’s caller,
rather than to the source of deprecation() itself (since the
latter would defeat the purpose of the warning message).
To issue a warning from C:

    int PyErr_Warn(PyObject *category, char *message);
Return 0 normally, 1 if an exception is raised (either because
the warning was transformed into an exception, or because of a
malfunction in the implementation, such as running out of
memory). The category argument must be a warning category class
(see below) or NULL, in which case it defaults to
PyExc_RuntimeWarning. When the PyErr_Warn() function returns 1, the
caller should do normal exception handling.
The current C implementation of PyErr_Warn() imports the
warnings module (implemented in Python) and calls its warn()
function. This minimizes the amount of C code that needs to be
added to implement the warning feature.
[XXX Open Issue: what about issuing warnings during lexing or
parsing, which don’t have the exception machinery available?]
Warnings Categories
There are a number of built-in exceptions that represent warning
categories. This categorization is useful to be able to filter
out groups of warnings. The following warnings category classes
are currently defined:
Warning – this is the base class of all warning category
classes and is itself a subclass of Exception
UserWarning – the default category for warnings.warn()
DeprecationWarning – base category for warnings about deprecated
features
SyntaxWarning – base category for warnings about dubious
syntactic features
RuntimeWarning – base category for warnings about dubious
runtime features
[XXX: Other warning categories may be proposed during the review
period for this PEP.]
These standard warning categories are available from C as
PyExc_Warning, PyExc_UserWarning, etc. From Python, they are
available in the __builtin__ module, so no import is necessary.
User code can define additional warning categories by subclassing
one of the standard warning categories. A warning category must
always be a subclass of the Warning class.
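For instance, a project could define its own category along these lines
(an illustrative sketch; the class and function names are made up):

import warnings

class ObsoleteAPIWarning(DeprecationWarning):
    "Hypothetical project-specific warning category."
    pass

def old_api():
    # Warnings issued with this category can be filtered separately
    # from other DeprecationWarnings.
    warnings.warn("old_api() is obsolete", ObsoleteAPIWarning)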
The Warnings Filter
The warnings filter controls whether warnings are ignored,
displayed, or turned into errors (raising an exception).
There are three sides to the warnings filter:
The data structures used to efficiently determine the
disposition of a particular warnings.warn() or PyErr_Warn()
call.
The API to control the filter from Python source code.
The command line switches to control the filter.
The warnings filter works in several stages. It is optimized for
the (expected to be common) case where the same warning is issued
from the same place in the code over and over.
First, the warning filter collects the module and line number
where the warning is issued; this information is readily available
through sys._getframe().
Conceptually, the warnings filter maintains an ordered list of
filter specifications; any specific warning is matched against
each filter specification in the list in turn until a match is
found; the match determines the disposition of the match. Each
entry is a tuple as follows:
(category, message, module, lineno, action)
category is a class (a subclass of warnings.Warning) of which
the warning category must be a subclass in order to match
message is a compiled regular expression that the warning
message must match (the match is case-insensitive)
module is a compiled regular expression that the module name
must match
lineno is an integer that the line number where the warning
occurred must match, or 0 to match all line numbers
action is one of the following strings:
“error” – turn matching warnings into exceptions
“ignore” – never print matching warnings
“always” – always print matching warnings
“default” – print the first occurrence of matching warnings
for each location where the warning is issued
“module” – print the first occurrence of matching warnings
for each module where the warning is issued
“once” – print only the first occurrence of matching
warnings
Since the Warning class is derived from the built-in Exception
class, to turn a warning into an error we simply raise
category(message).
Warnings Output And Formatting Hooks
When the warnings filter decides to issue a warning (but not when
it decides to raise an exception), it passes the information to
the function warnings.showwarning(message, category, filename, lineno).
The default implementation of this function writes the warning text
to sys.stderr, and shows the source line of the filename. It has
an optional 5th argument which can be used to specify a different
file than sys.stderr.
The formatting of warnings is done by a separate function,
warnings.formatwarning(message, category, filename, lineno). This
returns a string (that may contain newlines and ends in a newline)
that can be printed to get the identical effect of the
showwarning() function.
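As an illustrative sketch, an application could route warnings to a GUI by
overriding the output hook described above; the mygui module used here is
hypothetical:

import warnings

def gui_showwarning(message, category, filename, lineno, file=None):
    # Reuse the standard formatting, then hand the text to the
    # (hypothetical) GUI instead of writing it to sys.stderr.
    text = warnings.formatwarning(message, category, filename, lineno)
    mygui.popupWarningsDialog(text)

warnings.showwarning = gui_showwarning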
API For Manipulating Warning Filters
warnings.filterwarnings(message, category, module, lineno, action)
This checks the types of the arguments, compiles the message and
module regular expressions, and inserts them as a tuple in front
of the warnings filter.
warnings.resetwarnings()
Reset the warnings filter to empty.
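A brief sketch of how these calls might be used, following the argument
order given above (the warnings module as eventually released may use a
different signature); "spam" is a hypothetical module name:

import warnings

# Turn every DeprecationWarning raised in module 'spam' into an error.
warnings.filterwarnings("", DeprecationWarning, "spam", 0, "error")

# Ignore warnings whose message starts with "hello", wherever they occur.
warnings.filterwarnings("hello", Warning, "", 0, "ignore")

# Drop all installed filters and start again from an empty list.
warnings.resetwarnings()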
Command Line Syntax
There should be command line options to specify the most common
filtering actions, which I expect to include at least:
suppress all warnings
suppress a particular warning message everywhere
suppress all warnings in a particular module
turn all warnings into exceptions
I propose the following command line option syntax:
-Waction[:message[:category[:module[:lineno]]]]
Where:
‘action’ is an abbreviation of one of the allowed actions
(“error”, “default”, “ignore”, “always”, “once”, or “module”)
‘message’ is a message string; matches warnings whose message
text is an initial substring of ‘message’ (matching is
case-insensitive)
‘category’ is an abbreviation of a standard warning category
class name or a fully-qualified name for a user-defined
warning category class of the form [package.]module.classname
‘module’ is a module name (possibly package.module)
‘lineno’ is an integral line number
All parts except ‘action’ may be omitted, where an empty value
after stripping whitespace is the same as an omitted value.
The C code that parses the Python command line saves the body of
all -W options in a list of strings, which is made available to
the warnings module as sys.warnoptions. The warnings module
parses these when it is first imported. Errors detected during
the parsing of sys.warnoptions are not fatal; a message is written
to sys.stderr and processing continues with the option.
Examples:
-Werror
    Turn all warnings into errors
-Wall
    Show all warnings
-Wignore
    Ignore all warnings
-Wi:hello
    Ignore warnings whose message text starts with “hello”
-We::Deprecation
    Turn deprecation warnings into errors
-Wi:::spam:10
    Ignore all warnings on line 10 of module spam
-Wi:::spam -Wd:::spam:10
    Ignore all warnings in module spam except on line 10
-We::Deprecation -Wd::Deprecation:spam
    Turn deprecation warnings into errors except in module spam
Open Issues
Some open issues off the top of my head:
What about issuing warnings during lexing or parsing, which
don’t have the exception machinery available?
The proposed command line syntax is a bit ugly (although the
simple cases aren’t so bad: -Werror, -Wignore, etc.). Anybody
got a better idea?
I’m a bit worried that the filter specifications are too
complex. Perhaps filtering only on category and module (not on
message text and line number) would be enough?
There’s a bit of confusion between module names and file names.
The reporting uses file names, but the filter specification uses
module names. Maybe it should allow filenames as well?
I’m not at all convinced that packages are handled right.
Do we need more standard warning categories? Fewer?
In order to minimize the start-up overhead, the warnings module
is imported by the first call to PyErr_Warn(). It does the
command line parsing for -W options upon import. Therefore, it
is possible that warning-free programs will not complain about
invalid -W options.
Rejected Concerns
Paul Prescod, Barry Warsaw and Fred Drake have brought up several
additional concerns that I feel aren’t critical. I address them
here (the concerns are paraphrased, not exactly their words):
Paul: warn() should be a built-in or a statement to make it easily
available.
Response: “from warnings import warn” is easy enough.
Paul: What if I have a speed-critical module that triggers
warnings in an inner loop. It should be possible to disable the
overhead for detecting the warning (not just suppress the
warning).
Response: rewrite the inner loop to avoid triggering the
warning.
Paul: What if I want to see the full context of a warning?
Response: use -Werror to turn it into an exception.
Paul: I prefer “:*:*:” to “:::” for leaving parts of the warning
spec out.
Response: I don’t.
Barry: It would be nice if lineno can be a range specification.
Response: Too much complexity already.
Barry: I’d like to add my own warning action. Maybe if ‘action’
could be a callable as well as a string. Then in my IDE, I
could set that to “mygui.popupWarningsDialog”.
Response: For that purpose you would override
warnings.showwarning().
Fred: why do the Warning category classes have to be in
__builtin__?
Response: that’s the simplest implementation, given that the
warning categories must be available in C before the first
PyErr_Warn() call, which imports the warnings module. I see no
problem with making them available as built-ins.
Implementation
Here’s a prototype implementation:
http://sourceforge.net/patch/?func=detailpatch&patch_id=102715&group_id=5470
| Final | PEP 230 – Warning Framework | Standards Track | This PEP proposes a C and Python level API, as well as command
line flags, to issue warning messages and control what happens to
them. This is mostly based on GvR’s proposal posted to python-dev
on 05-Nov-2000, with some ideas (such as using classes to
categorize warnings) merged in from Paul Prescod’s
counter-proposal posted on the same date. Also, an attempt to
implement the proposal caused several small tweaks. |
PEP 233 – Python Online Help
Author:
Paul Prescod <paul at prescod.net>
Status:
Deferred
Type:
Standards Track
Created:
11-Dec-2000
Python-Version:
2.1
Post-History:
Table of Contents
Abstract
Interactive use
Implementation
Built-in Topics
Security Issues
Abstract
This PEP describes a command-line driven online help facility for
Python. The facility should be able to build on existing
documentation facilities such as the Python documentation and
docstrings. It should also be extensible for new types and
modules.
Interactive use
Simply typing help describes the help function (through repr()
overloading).
help can also be used as a function.
The function takes the following forms of input:
help( "string" ) – built-in topic or global
help( <ob> ) – docstring from object or type
help( "doc:filename" ) – filename from Python documentation
If you ask for a global, it can be a fully-qualified name, such as:
help("xml.dom")
You can also use the facility from a command-line:
python --help if
In either situation, the output does paging similar to the more
command.
Implementation
The help function is implemented in an onlinehelp module which is
demand-loaded.
There should be options for fetching help information from
environments other than the command line through the onlinehelp
module:
onlinehelp.gethelp(object_or_string) -> string
It should also be possible to override the help display function
by assigning to onlinehelp.displayhelp(object_or_string).
The module should be able to extract module information from
either the HTML or LaTeX versions of the Python documentation.
Links should be accommodated in a “lynx-like” manner.
Over time, it should also be able to recognize when docstrings are
in “special” syntaxes like structured text, HTML and LaTeX and
decode them appropriately.
A prototype implementation is available with the Python source
distribution as nondist/sandbox/doctools/onlinehelp.py.
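A small sketch of how the module described above might be driven from
code; the names follow this PEP rather than any shipped implementation,
and the browser-based viewer is hypothetical:

import onlinehelp

# Fetch help text for a fully-qualified name as a string.
text = onlinehelp.gethelp("xml.dom")
print text

# Route help output somewhere other than the console by overriding
# the display hook.
def displayhelp(object_or_string):
    show_in_browser(onlinehelp.gethelp(object_or_string))  # hypothetical viewer

onlinehelp.displayhelp = displayhelp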
Built-in Topics
help( "intro" ) – What is Python? Read this first!
help( "keywords" ) – What are the keywords?
help( "syntax" ) – What is the overall syntax?
help( "operators" ) – What operators are available?
help( "builtins" ) – What functions, types, etc. are built-in?
help( "modules" ) – What modules are in the standard library?
help( "copyright" ) – Who owns Python?
help( "moreinfo" ) – Where is there more information?
help( "changes" ) – What changed in Python 2.0?
help( "extensions" ) – What extensions are installed?
help( "faq" ) – What questions are frequently asked?
help( "ack" ) – Who has done work on Python lately?
Security Issues
This module will attempt to import modules with the same names as
requested topics. Don’t use the modules if you are not confident
that everything in your PYTHONPATH is from a trusted source.
| Deferred | PEP 233 – Python Online Help | Standards Track | This PEP describes a command-line driven online help facility for
Python. The facility should be able to build on existing
documentation facilities such as the Python documentation and
docstrings. It should also be extensible for new types and
modules. |
PEP 234 – Iterators
Author:
Ka-Ping Yee <ping at zesty.ca>, Guido van Rossum <guido at python.org>
Status:
Final
Type:
Standards Track
Created:
30-Jan-2001
Python-Version:
2.1
Post-History:
30-Apr-2001
Table of Contents
Abstract
C API Specification
Python API Specification
Dictionary Iterators
File Iterators
Rationale
Resolved Issues
Mailing Lists
Copyright
Abstract
This document proposes an iteration interface that objects can provide to
control the behaviour of for loops. Looping is customized by providing a
method that produces an iterator object. The iterator provides a get next
value operation that produces the next item in the sequence each time it is
called, raising an exception when no more items are available.
In addition, specific iterators over the keys of a dictionary and over the
lines of a file are proposed, and a proposal is made to allow spelling
dict.has_key(key) as key in dict.
Note: this is an almost complete rewrite of this PEP by the second author,
describing the actual implementation checked into the trunk of the Python 2.2
CVS tree. It is still open for discussion. Some of the more esoteric
proposals in the original version of this PEP have been withdrawn for now;
these may be the subject of a separate PEP in the future.
C API Specification
A new exception is defined, StopIteration, which can be used to signal the
end of an iteration.
A new slot named tp_iter for requesting an iterator is added to the type
object structure. This should be a function of one PyObject * argument
returning a PyObject *, or NULL. To use this slot, a new C API
function PyObject_GetIter() is added, with the same signature as the
tp_iter slot function.
Another new slot, named tp_iternext, is added to the type structure, for
obtaining the next value in the iteration. To use this slot, a new C API
function PyIter_Next() is added. The signature for both the slot and the
API function is as follows, although the NULL return conditions differ:
the argument is a PyObject * and so is the return value. When the return
value is non-NULL, it is the next value in the iteration. When it is
NULL, then for the tp_iternext slot there are three possibilities:
No exception is set; this implies the end of the iteration.
The StopIteration exception (or a derived exception class) is set; this
implies the end of the iteration.
Some other exception is set; this means that an error occurred that should be
propagated normally.
The higher-level PyIter_Next() function clears the StopIteration
exception (or derived exception) when it occurs, so its NULL return
conditions are simpler:
No exception is set; this means iteration has ended.
Some exception is set; this means an error occurred, and should be propagated
normally.
Iterators implemented in C should not implement a next() method with
similar semantics as the tp_iternext slot! When the type’s dictionary is
initialized (by PyType_Ready()), the presence of a tp_iternext slot
causes a method next() wrapping that slot to be added to the type’s
tp_dict. (Exception: if the type doesn’t use PyObject_GenericGetAttr()
to access instance attributes, the next() method in the type’s tp_dict
may not be seen.) (Due to a misunderstanding in the original text of this PEP,
in Python 2.2, all iterator types implemented a next() method that was
overridden by the wrapper; this has been fixed in Python 2.3.)
To ensure binary backwards compatibility, a new flag Py_TPFLAGS_HAVE_ITER
is added to the set of flags in the tp_flags field, and to the default
flags macro. This flag must be tested before accessing the tp_iter or
tp_iternext slots. The macro PyIter_Check() tests whether an object
has the appropriate flag set and has a non-NULL tp_iternext slot.
There is no such macro for the tp_iter slot (since the only place where
this slot is referenced should be PyObject_GetIter(), and this can check
for the Py_TPFLAGS_HAVE_ITER flag directly).
(Note: the tp_iter slot can be present on any object; the tp_iternext
slot should only be present on objects that act as iterators.)
For backwards compatibility, the PyObject_GetIter() function implements
fallback semantics when its argument is a sequence that does not implement a
tp_iter function: a lightweight sequence iterator object is constructed in
that case which iterates over the items of the sequence in the natural order.
The Python bytecode generated for for loops is changed to use new opcodes,
GET_ITER and FOR_ITER, that use the iterator protocol rather than the
sequence protocol to get the next value for the loop variable. This makes it
possible to use a for loop to loop over non-sequence objects that support
the tp_iter slot. Other places where the interpreter loops over the values
of a sequence should also be changed to use iterators.
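To make this concrete, here is a rough Python-level sketch (an illustration only, not the actual bytecode or C implementation) of what a for loop does under the iterator protocol, written in the Python 2.x spelling this PEP targets:
obj = [1, 2, 3]
_it = iter(obj)             # GET_ITER: obtain an iterator via the tp_iter slot
while 1:
    try:
        x = _it.next()      # FOR_ITER: advance via the tp_iternext slot
    except StopIteration:   # exhaustion ends the loop
        break
    print x                 # the loop body runs once per value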
Iterators ought to implement the tp_iter slot as returning a reference to
themselves; this is needed to make it possible to use an iterator (as opposed
to a sequence) in a for loop.
Iterator implementations (in C or in Python) should guarantee that once the
iterator has signalled its exhaustion, subsequent calls to tp_iternext or
to the next() method will continue to do so. It is not specified whether
an iterator should enter the exhausted state when an exception (other than
StopIteration) is raised. Note that Python cannot guarantee that
user-defined or 3rd party iterators implement this requirement correctly.
Python API Specification
The StopIteration exception is made visible as one of the standard
exceptions. It is derived from Exception.
A new built-in function is defined, iter(), which can be called in two
ways:
iter(obj) calls PyObject_GetIter(obj).
iter(callable, sentinel) returns a special kind of iterator that calls
the callable to produce a new value, and compares the return value to the
sentinel value. If the return value equals the sentinel, this signals the
end of the iteration and StopIteration is raised rather than returning
normal; if the return value does not equal the sentinel, it is returned as
the next value from the iterator. If the callable raises an exception, this
is propagated normally; in particular, the function is allowed to raise
StopIteration as an alternative way to end the iteration. (This
functionality is available from the C API as
PyCallIter_New(callable, sentinel).)
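As a small illustration of the two-argument form (a sketch; the file name is an arbitrary example), the following reads a file in fixed-size blocks until read() returns the empty-string sentinel at end of file:
f = open("data.bin", "rb")                    # "data.bin" is just an example name
nbytes = 0
for block in iter(lambda: f.read(1024), ""):  # "" is the sentinel marking EOF
    nbytes += len(block)
f.close()
print nbytes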
Iterator objects returned by either form of iter() have a next()
method. This method either returns the next value in the iteration, or raises
StopIteration (or a derived exception class) to signal the end of the
iteration. Any other exception should be considered to signify an error and
should be propagated normally, not taken to mean the end of the iteration.
Classes can define how they are iterated over by defining an __iter__()
method; this should take no additional arguments and return a valid iterator
object. A class that wants to be an iterator should implement two methods: a
next() method that behaves as described above, and an __iter__() method
that returns self.
The two methods correspond to two distinct protocols:
An object can be iterated over with for if it implements __iter__()
or __getitem__().
An object can function as an iterator if it implements next().
Container-like objects usually support protocol 1. Iterators are currently
required to support both protocols. The semantics of iteration come only from
protocol 2; protocol 1 is present to make iterators behave like sequences; in
particular so that code receiving an iterator can use a for-loop over the
iterator.
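For example, a minimal class following both protocols might look like this (a sketch, using the Python 2.x spelling with a next() method):
class Countdown:
    def __init__(self, start):
        self.remaining = start
    def __iter__(self):          # protocol 1: usable in a for loop
        return self              # an iterator returns itself
    def next(self):              # protocol 2: produce the next value
        if self.remaining <= 0:
            raise StopIteration  # signal exhaustion
        self.remaining -= 1
        return self.remaining + 1

for n in Countdown(3):
    print n                      # prints 3, then 2, then 1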
Dictionary Iterators
Dictionaries implement a sq_contains slot that implements the same test
as the has_key() method. This means that we can write
if k in dict: ...
which is equivalent to
if dict.has_key(k): ...
Dictionaries implement a tp_iter slot that returns an efficient iterator
that iterates over the keys of the dictionary. During such an iteration, the
dictionary should not be modified, except that setting the value for an
existing key is allowed (deletions or additions are not, nor is the
update() method). This means that we can write
for k in dict: ...
which is equivalent to, but much faster than
for k in dict.keys(): ...
as long as the restrictions on modifications to the dictionary (either by the
loop or by another thread) are not violated.
Add methods to dictionaries that return different kinds of iterators
explicitly:
for key in dict.iterkeys(): ...
for value in dict.itervalues(): ...
for key, value in dict.iteritems(): ...
This means that for x in dict is shorthand for
for x in dict.iterkeys().
Other mappings, if they support iterators at all, should also iterate over the
keys. However, this should not be taken as an absolute rule; specific
applications may have different requirements.
File Iterators
The following proposal is useful because it provides us with a good answer to
the complaint that the common idiom to iterate over the lines of a file is ugly
and slow.
Files implement a tp_iter slot that is equivalent to
iter(f.readline, ""). This means that we can write
for line in file:
    ...
as a shorthand for
for line in iter(file.readline, ""):
    ...
which is equivalent to, but faster than
while 1:
    line = file.readline()
    if not line:
        break
    ...
This also shows that some iterators are destructive: they consume all the
values and a second iterator cannot easily be created that iterates
independently over the same values. You could open the file for a second time,
or seek() to the beginning, but these solutions don’t work for all file
types, e.g. they don’t work when the open file object really represents a pipe
or a stream socket.
Because the file iterator uses an internal buffer, mixing this with other file
operations (e.g. file.readline()) doesn’t work right. Also, the following
code:
for line in file:
    if line == "\n":
        break
for line in file:
    print line,
doesn’t work as you might expect, because the iterator created by the second
for-loop doesn’t take the buffer read-ahead by the first for-loop into account.
A correct way to write this is:
it = iter(file)
for line in it:
    if line == "\n":
        break
for line in it:
    print line,
(The rationale for these restrictions is that for line in file ought to
become the recommended, standard way to iterate over the lines of a file, and
this should be as fast as can be. The iterator version is considerably faster
than calling readline(), due to the internal buffer in the iterator.)
Rationale
If all the parts of the proposal are included, this addresses many concerns in
a consistent and flexible fashion. Among its chief virtues are the following
four – no, five – no, six – points:
It provides an extensible iterator interface.
It allows performance enhancements to list iteration.
It allows big performance enhancements to dictionary iteration.
It allows one to provide an interface for just iteration without pretending
to provide random access to elements.
It is backward-compatible with all existing user-defined classes and
extension objects that emulate sequences and mappings, even mappings that
only implement a subset of {__getitem__, keys, values,
items}.
It makes code iterating over non-sequence collections more concise and
readable.
Resolved Issues
The following topics have been decided by consensus or BDFL pronouncement.
Two alternative spellings for next() have been proposed but rejected:
__next__(), because it corresponds to a type object slot
(tp_iternext); and __call__(), because this is the only operation.
Arguments against __next__(): while many iterators are used in for loops,
it is expected that user code will also call next() directly, so having
to write __next__() is ugly; also, a possible extension of the protocol
would be to allow for prev(), current() and reset() operations;
surely we don’t want to use __prev__(), __current__(),
__reset__().
Arguments against __call__() (the original proposal): taken out of
context, x() is not very readable, while x.next() is clear; there’s a
danger that every special-purpose object wants to use __call__() for its
most common operation, causing more confusion than clarity.
(In retrospect, it might have been better to go for __next__() and have a
new built-in, next(it), which calls it.__next__(). But alas, it’s too
late; this has been deployed in Python 2.2 since December 2001.)
Some folks have requested the ability to restart an iterator. This should be
dealt with by calling iter() on a sequence repeatedly, not by the
iterator protocol itself. (See also requested extensions below.)
It has been questioned whether an exception to signal the end of the
iteration isn’t too expensive. Several alternatives for the
StopIteration exception have been proposed: a special value End to
signal the end, a function end() to test whether the iterator is
finished, even reusing the IndexError exception.
A special value has the problem that if a sequence ever contains that
special value, a loop over that sequence will end prematurely without any
warning. If the experience with null-terminated C strings hasn’t taught us
the problems this can cause, imagine the trouble a Python introspection
tool would have iterating over a list of all built-in names, assuming that
the special End value was a built-in name!
Calling an end() function would require two calls per iteration. Two
calls is much more expensive than one call plus a test for an exception.
The time-critical for loop, especially, can test very cheaply for an
exception.
Reusing IndexError can cause confusion because it can be a genuine
error, which would be masked by ending the loop prematurely.
Some have asked for a standard iterator type. Presumably all iterators would
have to be derived from this type. But this is not the Python way:
dictionaries are mappings because they support __getitem__() and a
handful other operations, not because they are derived from an abstract
mapping type.
Regarding if key in dict: there is no doubt that the dict.has_key(x)
interpretation of x in dict is by far the most useful interpretation,
probably the only useful one. There has been resistance against this because
x in list checks whether x is present among the values, while the
proposal makes x in dict check whether x is present among the keys.
Given that the symmetry between lists and dictionaries is very weak, this
argument does not have much weight.
The name iter() is an abbreviation. Alternatives proposed include
iterate(), traverse(), but these appear too long. Python has a
history of using abbrs for common builtins, e.g. repr(), str(),
len().
Resolution: iter() it is.
Using the same name for two different operations (getting an iterator from an
object and making an iterator for a function with a sentinel value) is
somewhat ugly. I haven’t seen a better name for the second operation though,
and since they both return an iterator, it’s easy to remember.
Resolution: the builtin iter() takes an optional argument, which is the
sentinel to look for.
Once a particular iterator object has raised StopIteration, will it also
raise StopIteration on all subsequent next() calls? Some say that it
would be useful to require this, others say that it is useful to leave this
open to individual iterators. Note that this may require an additional state
bit for some iterator implementations (e.g. function-wrapping iterators).
Resolution: once StopIteration is raised, calling it.next() continues
to raise StopIteration.
Note: this was in fact not implemented in Python 2.2; there are many cases
where an iterator’s next() method can raise StopIteration on one call
but not on the next. This has been remedied in Python 2.3.
It has been proposed that a file object should be its own iterator, with a
next() method returning the next line. This has certain advantages, and
makes it even clearer that this iterator is destructive. The disadvantage is
that this would make it even more painful to implement the “sticky
StopIteration” feature proposed in the previous bullet.
Resolution: tentatively rejected (though there are still people arguing for
this).
Some folks have requested extensions of the iterator protocol, e.g.
prev() to get the previous item, current() to get the current item
again, finished() to test whether the iterator is finished, and maybe
even others, like rewind(), __len__(), position().
While some of these are useful, many of these cannot easily be implemented
for all iterator types without adding arbitrary buffering, and sometimes they
can’t be implemented at all (or not reasonably). E.g. anything to do with
reversing directions can’t be done when iterating over a file or function.
Maybe a separate PEP can be drafted to standardize the names for such
operations when they are implementable.
Resolution: rejected.
There has been a long discussion about whether
for x in dict: ...
should assign x the successive keys, values, or items of the dictionary.
The symmetry between if x in y and for x in y suggests that it should
iterate over keys. This symmetry has been observed by many independently and
has even been used to “explain” one using the other. This is because for
sequences, if x in y iterates over y comparing the iterated values to
x. If we adopt both of the above proposals, this will also hold for
dictionaries.
The argument against making for x in dict iterate over the keys comes
mostly from a practicality point of view: scans of the standard library show
that there are about as many uses of for x in dict.items() as there are
of for x in dict.keys(), with the items() version having a small
majority. Presumably many of the loops using keys() use the
corresponding value anyway, by writing dict[x], so (the argument goes) by
making both the key and value available, we could support the largest number
of cases. While this is true, I (Guido) find the correspondence between
for x in dict and if x in dict too compelling to break, and there’s
not much overhead in having to write dict[x] to explicitly get the value.
For fast iteration over items, use for key, value in dict.iteritems().
I’ve timed the difference between
for key in dict: dict[key]
and
for key, value in dict.iteritems(): pass
and found that the latter is only about 7% faster.
Resolution: By BDFL pronouncement, for x in dict iterates over the keys,
and dictionaries have iteritems(), iterkeys(), and itervalues()
to return the different flavors of dictionary iterators.
Mailing Lists
The iterator protocol has been discussed extensively in a mailing list on
SourceForge:
http://lists.sourceforge.net/lists/listinfo/python-iterators
Initially, some of the discussion was carried out at Yahoo; archives are still
accessible:
http://groups.yahoo.com/group/python-iter
Copyright
This document is in the public domain.
| Final | PEP 234 – Iterators | Standards Track | This document proposes an iteration interface that objects can provide to
control the behaviour of for loops. Looping is customized by providing a
method that produces an iterator object. The iterator provides a get next
value operation that produces the next item in the sequence each time it is
called, raising an exception when no more items are available. |
PEP 237 – Unifying Long Integers and Integers
Author:
Moshe Zadka, Guido van Rossum
Status:
Final
Type:
Standards Track
Created:
11-Mar-2001
Python-Version:
2.2
Post-History:
16-Mar-2001, 14-Aug-2001, 23-Aug-2001
Table of Contents
Abstract
Rationale
Implementation
Incompatibilities
Literals
Built-in Functions
C API
Transition
OverflowWarning
Example
Resolved Issues
Implementation
Copyright
Abstract
Python currently distinguishes between two kinds of integers (ints): regular
or short ints, limited by the size of a C long (typically 32 or 64 bits), and
long ints, which are limited only by available memory. When operations on
short ints yield results that don’t fit in a C long, they raise an error.
There are some other distinctions too. This PEP proposes to do away with most
of the differences in semantics, unifying the two types from the perspective
of the Python user.
Rationale
Many programs find a need to deal with larger numbers after the fact, and
changing the algorithms later is bothersome. Switching to long ints up front
hinders performance in the normal case, when all arithmetic is performed
using long ints whether or not they are needed.
Having the machine word size exposed to the language hinders portability. For
example, Python source files and .pyc’s are not portable between 32-bit and
64-bit machines because of this.
There is also the general desire to hide unnecessary details from the Python
user when they are irrelevant for most applications. An example is memory
allocation, which is explicit in C but automatic in Python, giving us the
convenience of unlimited sizes on strings, lists, etc. It makes sense to
extend this convenience to numbers.
It will give new Python programmers (whether they are new to programming in
general or not) one less thing to learn before they can start using the
language.
Implementation
Initially, two alternative implementations were proposed (one by each author):
The PyInt type’s slot for a C long will be turned into a:
union {
    long i;
    struct {
        unsigned long length;
        digit digits[1];
    } bignum;
};
Only the n-1 lower bits of the long have any meaning; the top bit
is always set. This distinguishes the union. All PyInt functions
will check this bit before deciding which types of operations to use.
The existing short and long int types remain, but operations return
a long int instead of raising OverflowError when a result cannot be
represented as a short int. A new type, integer, may be introduced
that is an abstract base type of which both the int and long
implementation types are subclassed. This is useful so that programs can
check integer-ness with a single test:
if isinstance(i, integer): ...
After some consideration, the second implementation plan was selected, since
it is far easier to implement, is backwards compatible at the C API level, and
in addition can be implemented partially as a transitional measure.
Incompatibilities
The following operations have (usually subtly) different semantics for short
and for long integers, and one or the other will have to be changed somehow.
This is intended to be an exhaustive list. If you know of any other operation
that differs in outcome depending on whether a short or a long int with the same
value is passed, please write the second author.
Currently, all arithmetic operators on short ints except << raise
OverflowError if the result cannot be represented as a short int. This
will be changed to return a long int instead. The following operators can
currently raise OverflowError: x+y, x-y, x*y, x**y,
divmod(x, y), x/y, x%y, and -x. (The last four can only
overflow when the value -sys.maxint-1 is involved.)
Currently, x<<n can lose bits for short ints. This will be changed to
return a long int containing all the shifted-out bits, if returning a short
int would lose bits (where changing sign is considered a special case of
losing bits).
Currently, hex and oct literals for short ints may specify negative values;
for example 0xffffffff == -1 on a 32-bit machine. This will be changed
to equal 0xffffffffL (2**32-1).
Currently, the %u, %x, %X and %o string formatting operators
and the hex() and oct() built-in functions behave differently for
negative numbers: negative short ints are formatted as unsigned C long,
while negative long ints are formatted with a minus sign. This will be
changed to use the long int semantics in all cases (but without the trailing
L that currently distinguishes the output of hex() and oct() for
long ints). Note that this means that %u becomes an alias for %d.
It will eventually be removed.
Currently, repr() of a long int returns a string ending in L while
repr() of a short int doesn’t. The L will be dropped; but not before
Python 3.0.
Currently, an operation with long operands will never return a short int.
This may change, since it allows some optimization. (No changes have been
made in this area yet, and none are planned.)
The expression type(x).__name__ depends on whether x is a short or a
long int. Since implementation alternative 2 is chosen, this difference
will remain. (In Python 3.0, we may be able to deploy a trick to hide the
difference, because it is annoying to reveal the difference to user code,
and more so as the difference between the two types is less visible.)
Long and short ints are handled differently by the marshal module, and by
the pickle and cPickle modules. This difference will remain (at
least until Python 3.0).
Short ints with small values (typically between -1 and 99 inclusive) are
interned – whenever a result has such a value, an existing short int with
the same value is returned. This is not done for long ints with the same
values. This difference will remain. (Since there is no guarantee of this
interning, it is debatable whether this is a semantic difference – but code
may exist that uses is for comparisons of short ints and happens to work
because of this interning. Such code may fail if used with long ints.)
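To illustrate a few of the differences listed above, here is a sketch of the historical behaviour on a 32-bit Python 2.x interpreter before unification (not something to rely on in later versions):
print 0xffffffff         # -1 before the change; 4294967295L under the new rules
print hex(-1), hex(-1L)  # '0xffffffff' vs '-0x1L' before unification
print repr(1), repr(1L)  # '1' vs '1L'; the trailing L disappears only in 3.0
a, b = 100, 100
print a is b             # small short ints are interned (a CPython detail) ...
a, b = 100L, 100L
print a is b             # ... equal long ints need not be the same object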
Literals
A trailing L at the end of an integer literal will stop having any
meaning, and will eventually become illegal. The compiler will choose the
appropriate type solely based on the value. (Until Python 3.0, it will force
the literal to be a long; but literals without a trailing L may also be
long, if they are not representable as short ints.)
Built-in Functions
The function int() will return a short or a long int depending on the
argument value. In Python 3.0, the function long() will call the function
int(); before then, it will continue to force the result to be a long int,
but otherwise work the same way as int(). The built-in name long will
remain in the language to represent the long implementation type (unless it is
completely eradicated in Python 3.0), but using the int() function is
still recommended, since it will automatically return a long when needed.
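A small sketch of the behaviour described above, assuming a Python version where phase 1 of the transition is in place (2.2 or later):
import sys
print type(sys.maxint)       # <type 'int'>
print type(sys.maxint + 1)   # <type 'long'>: overflow now yields a long (phase 1)
print type(int(7L))          # <type 'int'>: int() returns the type that fits the value
print type(long(7))          # <type 'long'>: long() still forces the long type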
C API
The C API remains unchanged; C code will still need to be aware of the
difference between short and long ints. (The Python 3.0 C API will probably
be completely incompatible.)
The PyArg_Parse*() APIs already accept long ints, as long as they are
within the range representable by C ints or longs, so that functions taking C
int or long arguments won’t have to worry about dealing with Python longs.
Transition
There are three major phases to the transition:
Phase 1. Short int operations that currently raise OverflowError return a long
int value instead. This is the only change in this phase. Literals will
still distinguish between short and long ints. The other semantic
differences listed above (including the behavior of <<) will remain.
Because this phase only changes situations that currently raise
OverflowError, it is assumed that this won’t break existing code.
(Code that depends on this exception would have to be too convoluted to be
concerned about it.) For those concerned about extreme backwards
compatibility, a command line option (or a call to the warnings module)
will allow a warning or an error to be issued at this point, but this is
off by default.
Phase 2. The remaining semantic differences are addressed. In all cases the long
int semantics will prevail. Since this will introduce backwards
incompatibilities which will break some old code, this phase may require a
future statement and/or warnings, and a prolonged transition phase. The
trailing L will continue to be used for longs as input and by
repr().
Phase 2A. Warnings are enabled about operations that will change their numeric
outcome in stage 2B, in particular hex() and oct(), %u,
%x, %X and %o, hex and oct literals in the
(inclusive) range [sys.maxint+1, sys.maxint*2+1], and left shifts
losing bits.
Phase 2B. The new semantics for these operations are implemented. Operations that
give different results than before will not issue a warning.
Phase 3. The trailing L is dropped from repr(), and made illegal on input.
(If possible, the long type completely disappears.) The trailing L
is also dropped from hex() and oct().
Phase 1 will be implemented in Python 2.2.
Phase 2 will be implemented gradually, with 2A in Python 2.3 and 2B in
Python 2.4.
Phase 3 will be implemented in Python 3.0 (at least two years after Python 2.4
is released).
OverflowWarning
Here are the rules that guide warnings generated in situations that currently
raise OverflowError. This applies to transition phase 1. Historical
note: although phase 1 was completed in Python 2.2, and phase 2A in Python
2.3, nobody noticed that OverflowWarning was still generated in Python 2.3.
It was finally disabled in Python 2.4. The Python builtin
OverflowWarning, and the corresponding C API PyExc_OverflowWarning,
are no longer generated or used in Python 2.4, but will remain for the
(unlikely) case of user code until Python 2.5.
A new warning category is introduced, OverflowWarning. This is a
built-in name.
If an int result overflows, an OverflowWarning warning is issued, with a
message argument indicating the operation, e.g. “integer addition”. This
may or may not cause a warning message to be displayed on sys.stderr, or
may cause an exception to be raised, all under control of the -W command
line and the warnings module.
The OverflowWarning warning is ignored by default.
The OverflowWarning warning can be controlled like all warnings, via the
-W command line option or via the warnings.filterwarnings() call.
For example:
python -Wdefault::OverflowWarning
causes the OverflowWarning to be displayed the first time it occurs at a
particular source line, and:
python -Werror::OverflowWarning
causes the OverflowWarning to be turned into an exception whenever it
happens. The following code enables the warning from inside the program:
import warnings
warnings.filterwarnings("default", "", OverflowWarning)
See the python man page for the -W option and the warnings
module documentation for filterwarnings().
If the OverflowWarning warning is turned into an error,
OverflowError is substituted. This is needed for backwards
compatibility.
Unless the warning is turned into an exception, the result of the operation
(e.g., x+y) is recomputed after converting the arguments to long ints.
Example
If you pass a long int to a C function or built-in operation that takes an
integer, it will be treated the same as a short int as long as the value fits
(by virtue of how PyArg_ParseTuple() is implemented). If the long value
doesn’t fit, it will still raise an OverflowError. For example:
def fact(n):
    if n <= 1:
        return 1
    return n*fact(n-1)
A = "ABCDEFGHIJKLMNOPQ"
n = input("Gimme an int: ")
print A[fact(n)%17]
For n >= 13, this currently raises OverflowError (unless the user
enters a trailing L as part of their input), even though the calculated
index would always be in range(17). With the new approach this code will
do the right thing: the index will be calculated as a long int, but its value
will be in range.
Resolved Issues
These issues, previously open, have been resolved.
hex() and oct() applied to longs will continue to produce a trailing
L until Python 3000. The original text above wasn’t clear about this,
but since it didn’t happen in Python 2.4 it was thought better to leave it
alone. BDFL pronouncement here:
https://mail.python.org/pipermail/python-dev/2006-June/065918.html
What to do about sys.maxint? Leave it in, since it is still relevant
whenever the distinction between short and long ints is still relevant (e.g.
when inspecting the type of a value).
Should we remove %u completely? Remove it.
Should we warn about << not truncating integers? Yes.
Should the overflow warning be on a portable maximum size? No.
Implementation
The implementation work for the Python 2.x line is completed; phase 1 was
released with Python 2.2, phase 2A with Python 2.3, and phase 2B will be
released with Python 2.4 (and is already in CVS).
Copyright
This document has been placed in the public domain.
| Final | PEP 237 – Unifying Long Integers and Integers | Standards Track | Python currently distinguishes between two kinds of integers (ints): regular
or short ints, limited by the size of a C long (typically 32 or 64 bits), and
long ints, which are limited only by available memory. When operations on
short ints yield results that don’t fit in a C long, they raise an error.
There are some other distinctions too. This PEP proposes to do away with most
of the differences in semantics, unifying the two types from the perspective
of the Python user. |
PEP 238 – Changing the Division Operator
Author:
Moshe Zadka <moshez at zadka.site.co.il>,
Guido van Rossum <guido at python.org>
Status:
Final
Type:
Standards Track
Created:
11-Mar-2001
Python-Version:
2.2
Post-History:
16-Mar-2001, 26-Jul-2001, 27-Jul-2001
Table of Contents
Abstract
Motivation
Variations
Alternatives
API Changes
Command Line Option
Semantics of Floor Division
Semantics of True Division
The Future Division Statement
Open Issues
Resolved Issues
FAQ
When will Python 3.0 be released?
Why isn’t true division called float division?
Why is there a need for __truediv__ and __itruediv__?
How do I write code that works under the classic rules as well as under the new rules without using // or a future division statement?
How do I specify the division semantics for input(), compile(), execfile(), eval() and exec?
What about code compiled by the codeop module?
Will there be conversion tools or aids?
Why is my question not answered here?
Implementation
Copyright
Abstract
The current division (/) operator has an ambiguous meaning for numerical
arguments: it returns the floor of the mathematical result of division if the
arguments are ints or longs, but it returns a reasonable approximation of the
division result if the arguments are floats or complex. This makes
expressions expecting float or complex results error-prone when integers are
not expected but possible as inputs.
We propose to fix this by introducing different operators for different
operations: x/y to return a reasonable approximation of the mathematical
result of the division (“true division”), x//y to return the floor
(“floor division”). We call the current, mixed meaning of x/y
“classic division”.
Because of severe backwards compatibility issues, not to mention a major
flamewar on c.l.py, we propose the following transitional measures (starting
with Python 2.2):
Classic division will remain the default in the Python 2.x series; true
division will be standard in Python 3.0.
The // operator will be available to request floor division
unambiguously.
The future division statement, spelled from __future__ import division,
will change the / operator to mean true division throughout the module.
A command line option will enable run-time warnings for classic division
applied to int or long arguments; another command line option will make true
division the default.
The standard library will use the future division statement and the //
operator when appropriate, so as to completely avoid classic division.
Motivation
The classic division operator makes it hard to write numerical expressions
that are supposed to give correct results from arbitrary numerical inputs.
For all other operators, one can write down a formula such as x*y**2 + z,
and the calculated result will be close to the mathematical result (within the
limits of numerical accuracy, of course) for any numerical input type (int,
long, float, or complex). But division poses a problem: if the expressions
for both arguments happen to have an integral type, it implements floor
division rather than true division.
The problem is unique to dynamically typed languages: in a statically typed
language like C, the inputs, typically function arguments, would be declared
as double or float, and when a call passes an integer argument, it is
converted to double or float at the time of the call. Python doesn’t have
argument type declarations, so integer arguments can easily find their way
into an expression.
The problem is particularly pernicious since ints are perfect substitutes for
floats in all other circumstances: math.sqrt(2) returns the same value as
math.sqrt(2.0), 3.14*100 and 3.14*100.0 return the same value, and
so on. Thus, the author of a numerical routine may only use floating point
numbers to test his code, and believe that it works correctly, and a user may
accidentally pass in an integer input value and get incorrect results.
Another way to look at this is that classic division makes it difficult to
write polymorphic functions that work well with either float or int arguments;
all other operators already do the right thing. No algorithm that works for
both ints and floats has a need for truncating division in one case and true
division in the other.
The correct work-around is subtle: casting an argument to float() is wrong if
it could be a complex number; adding 0.0 to an argument doesn’t preserve the
sign of the argument if it was minus zero. The only solution without either
downside is multiplying an argument (typically the first) by 1.0. This leaves
the value and sign unchanged for float and complex, and turns int and long
into a float with the corresponding value.
It is the opinion of the authors that this is a real design bug in Python, and
that it should be fixed sooner rather than later. Assuming Python usage will
continue to grow, the cost of leaving this bug in the language will eventually
outweigh the cost of fixing old code – there is an upper bound to the amount
of code to be fixed, but the amount of code that might be affected by the bug
in the future is unbounded.
Another reason for this change is the desire to ultimately unify Python’s
numeric model. This is the subject of PEP 228 (which is currently
incomplete). A unified numeric model removes most of the user’s need to be
aware of different numerical types. This is good for beginners, but also
takes away concerns about different numeric behavior for advanced programmers.
(Of course, it won’t remove concerns about numerical stability and accuracy.)
In a unified numeric model, the different types (int, long, float, complex,
and possibly others, such as a new rational type) serve mostly as storage
optimizations, and to some extent to indicate orthogonal properties such as
inexactness or complexity. In a unified model, the integer 1 should be
indistinguishable from the floating point number 1.0 (except for its
inexactness), and both should behave the same in all numeric contexts.
Clearly, in a unified numeric model, if a==b and c==d, a/c should
equal b/d (taking some liberties due to rounding for inexact numbers), and
since everybody agrees that 1.0/2.0 equals 0.5, 1/2 should also equal
0.5. Likewise, since 1//2 equals zero, 1.0//2.0 should also equal
zero.
Variations
Aesthetically, x//y doesn’t please everyone, and hence several variations
have been proposed. They are addressed here:
x div y. This would introduce a new keyword. Since div is a
popular identifier, this would break a fair amount of existing code, unless
the new keyword was only recognized under a future division statement.
Since it is expected that the majority of code that needs to be converted is
dividing integers, this would greatly increase the need for the future
division statement. Even with a future statement, the general sentiment
against adding new keywords unless absolutely necessary argues against this.
div(x, y). This makes the conversion of old code much harder.
Replacing x/y with x//y or x div y can be done with a simple
query replace; in most cases the programmer can easily verify that a
particular module only works with integers so all occurrences of x/y can
be replaced. (The query replace is still needed to weed out slashes
occurring in comments or string literals.) Replacing x/y with
div(x, y) would require a much more intelligent tool, since the extent
of the expressions to the left and right of the / must be analyzed
before the placement of the div( and ) part can be decided.
x \ y. The backslash is already a token, meaning line continuation, and
in general it suggests an escape to Unix eyes. In addition (this due to
Terry Reedy) this would make things like eval("x\y") harder to get
right.
Alternatives
In order to reduce the amount of old code that needs to be converted, several
alternative proposals have been put forth. Here is a brief discussion of each
proposal (or category of proposals). If you know of an alternative that was
discussed on c.l.py that isn’t mentioned here, please mail the second author.
Let / keep its classic semantics; introduce // for true division.
This still leaves a broken operator in the language, and invites use of the
broken behavior. It also shuts off the road to a unified numeric model a la
PEP 228.
Let int division return a special “portmanteau” type that behaves as an
integer in integer context, but like a float in a float context. The
problem with this is that after a few operations, the int and the float
value could be miles apart, it’s unclear which value should be used in
comparisons, and of course many contexts (like conversion to string) don’t
have a clear integer or float preference.
Use a directive to use specific division semantics in a module, rather than
a future statement. This retains classic division as a permanent wart in
the language, requiring future generations of Python programmers to be
aware of the problem and the remedies.
Use from __past__ import division to use classic division semantics in a
module. This also retains the classic division as a permanent wart, or at
least for a long time (eventually the past division statement could raise an
ImportError).
Use a directive (or some other way) to specify the Python version for which
a specific piece of code was developed. This requires future Python
interpreters to be able to emulate exactly several previous versions of
Python, and moreover to do so for multiple versions within the same
interpreter. This is way too much work. A much simpler solution is to keep
multiple interpreters installed. Another argument against this is that the
version directive is almost always overspecified: most code written for
Python X.Y works for Python X.(Y-1) and X.(Y+1) as well, so specifying X.Y
as a version is more constraining than it needs to be. At the same time,
there’s no way to know at which future or past version the code will break.
API Changes
During the transitional phase, we have to support three division operators
within the same program: classic division (for / in modules without a
future division statement), true division (for / in modules with a future
division statement), and floor division (for //). Each operator comes in
two flavors: regular, and as an augmented assignment operator (/= or
//=).
The names associated with these variations are:
Overloaded operator methods:
__div__(), __floordiv__(), __truediv__();
__idiv__(), __ifloordiv__(), __itruediv__().
Abstract API C functions:
PyNumber_Divide(), PyNumber_FloorDivide(), PyNumber_TrueDivide();
PyNumber_InPlaceDivide(), PyNumber_InPlaceFloorDivide(),
PyNumber_InPlaceTrueDivide().
Byte code opcodes:
BINARY_DIVIDE, BINARY_FLOOR_DIVIDE, BINARY_TRUE_DIVIDE;
INPLACE_DIVIDE, INPLACE_FLOOR_DIVIDE, INPLACE_TRUE_DIVIDE.
PyNumberMethod slots:
nb_divide, nb_floor_divide, nb_true_divide,
nb_inplace_divide, nb_inplace_floor_divide, nb_inplace_true_divide.
The added PyNumberMethod slots require an additional flag in tp_flags;
this flag will be named Py_TPFLAGS_HAVE_NEWDIVIDE and will be included in
Py_TPFLAGS_DEFAULT.
The true and floor division APIs will look for the corresponding slots and
call that; when that slot is NULL, they will raise an exception. There is
no fallback to the classic divide slot.
In Python 3.0, the classic division semantics will be removed; the classic
division APIs will become synonymous with true division.
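As an illustration (a sketch, not part of the specification), a class can support all three flavours during the transition by filling in the overloaded methods listed above; here classic division is simply aliased to true division:
class Ratio:
    def __init__(self, num, den):
        self.num, self.den = num, den
    def __truediv__(self, other):     # used for / under the future division statement
        return float(self.num) / self.den / other
    def __floordiv__(self, other):    # used for //
        return self.num // self.den // other
    __div__ = __truediv__             # classic / (no future statement) uses this slot

r = Ratio(7, 2)
print r / 2       # 1.75
print r // 2      # 1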
Command Line Option
The -Q command line option takes a string argument that can take four
values: old, warn, warnall, or new. The default is old
in Python 2.2 but will change to warn in later 2.x versions. The old
value means the classic division operator acts as described. The warn
value means the classic division operator issues a warning (a
DeprecationWarning using the standard warning framework) when applied
to ints or longs. The warnall value also issues warnings for classic
division when applied to floats or complex; this is for use by the
fixdiv.py conversion script mentioned below. The new value changes
the default globally so that the / operator is always interpreted as
true division. The new option is only intended for use in certain
educational environments, where true division is required, but asking the
students to include the future division statement in all their code would be a
problem.
This option will not be supported in Python 3.0; Python 3.0 will always
interpret / as true division.
(This option was originally proposed as -D, but that turned out to be an
existing option for Jython, hence the Q – mnemonic for Quotient. Other names
have been proposed, like -Qclassic, -Qclassic-warn, -Qtrue, or
-Qold_division etc.; these seem more verbose to me without much advantage.
After all the term classic division is not used in the language at all (only
in the PEP), and the term true division is rarely used in the language – only
in __truediv__.)
Semantics of Floor Division
Floor division will be implemented in all the Python numeric types, and will
have the semantics of:
a // b == floor(a/b)
except that the result type will be the common type into which a and b are
coerced before the operation.
Specifically, if a and b are of the same type, a//b will be of that
type too. If the inputs are of different types, they are first coerced to a
common type using the same rules used for all other arithmetic operators.
In particular, if a and b are both ints or longs, the result has the same
type and value as for classic division on these types (including the case of
mixed input types; int//long and long//int will both return a long).
For floating point inputs, the result is a float. For example:
3.5//2.0 == 1.0
For complex numbers, // raises an exception, since floor() of a
complex number is not allowed.
For user-defined classes and extension types, all semantics are up to the
implementation of the class or type.
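A few concrete cases of these rules (a sketch; the results shown follow the semantics above, as implemented from Python 2.2 on):
print 7 // 2       # 3     -- int // int yields an int
print 7 // 2L      # 3L    -- mixed int/long coerces to long
print 7.0 // 2     # 3.0   -- float operands yield a float
print -7 // 2      # -4    -- floor(), i.e. rounding towards negative infinity
print 3.5 // 2.0   # 1.0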
Semantics of True Division
True division for ints and longs will convert the arguments to float and then
apply a float division. That is, even 2/1 will return a float (2.0),
not an int. For floats and complex, it will be the same as classic division.
The 2.2 implementation of true division acts as if the float type had
unbounded range, so that overflow doesn’t occur unless the magnitude of the
mathematical result is too large to represent as a float. For example,
after x = 1L << 40000, float(x) raises OverflowError (note that
this is also new in 2.2: previously the outcome was platform-dependent, most
commonly a float infinity). But x/x returns 1.0 without exception,
while x/1 raises OverflowError.
Note that for int and long arguments, true division may lose information; this
is in the nature of true division (as long as rationals are not in the
language). Algorithms that consciously use longs should consider using
//, as true division of longs retains no more than 53 bits of precision
(on most platforms).
If and when a rational type is added to Python (see PEP 239), true
division for ints and longs should probably return a rational. This avoids
the problem with true division of ints and longs losing information. But
until then, for consistency, float is the only choice for true division.
The Future Division Statement
If from __future__ import division is present in a module, or if
-Qnew is used, the / and /= operators are translated to true
division opcodes; otherwise they are translated to classic division (until
Python 3.0 comes along, where they are always translated to true division).
The future division statement has no effect on the recognition or translation
of // and //=.
See PEP 236 for the general rules for future statements.
(It has been proposed to use a longer phrase, like true_division or
modern_division. These don’t seem to add much information.)
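A short sketch of the module-level effect of the future division statement (Python 2.2 and later); note that only / changes meaning, while // is unaffected:
from __future__ import division

print 1 / 2       # 0.5 -- true division, even for two int operands
print 1 // 2      # 0   -- floor division, same with or without the statement
print 1.0 / 2     # 0.5 -- unchanged for float operands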
Open Issues
We expect that these issues will be resolved over time, as more feedback is
received or we gather more experience with the initial implementation.
It has been proposed to call // the quotient operator, and the /
operator the ratio operator. I’m not sure about this – for some people
quotient is just a synonym for division, and ratio suggests rational
numbers, which is wrong. I prefer the terminology to be slightly awkward
if that avoids unambiguity. Also, for some folks quotient suggests
truncation towards zero, not towards infinity as floor division
says explicitly.
It has been argued that a command line option to change the default is
evil. It can certainly be dangerous in the wrong hands: for example, it
would be impossible to combine a 3rd party library package that requires
-Qnew with another one that requires -Qold. But I believe that the
VPython folks need a way to enable true division by default, and other
educators might need the same. These usually have enough control over the
library packages available in their environment.
For classes to have to support all three of __div__(),
__floordiv__() and __truediv__() seems painful; and what to do in
3.0? Maybe we only need __div__() and __floordiv__(), or maybe at
least true division should try __truediv__() first and __div__()
second.
Resolved Issues
Issue: For very large long integers, the definition of true division as
returning a float causes problems, since the range of Python longs is much
larger than that of Python floats. This problem will disappear if and when
rational numbers are supported.
Resolution: For long true division, Python uses an internal float type with
native double precision but unbounded range, so that OverflowError doesn’t
occur unless the quotient is too large to represent as a native double.
Issue: In the interim, maybe the long-to-float conversion could be made to
raise OverflowError if the long is out of range.
Resolution: This has been implemented, but, as above, the magnitude of the
inputs to long true division doesn’t matter; only the magnitude of the
quotient matters.
Issue: Tim Peters will make sure that whenever an in-range float is
returned, decent precision is guaranteed.
Resolution: Provided the quotient of long true division is representable as
a float, it suffers no more than 3 rounding errors: one each for converting
the inputs to an internal float type with native double precision but
unbounded range, and one more for the division. However, note that if the
magnitude of the quotient is too small to represent as a native double,
0.0 is returned without exception (“silent underflow”).
FAQ
When will Python 3.0 be released?
We don’t plan that long ahead, so we can’t say for sure. We want to allow
at least two years for the transition. If Python 3.0 comes out sooner,
we’ll keep the 2.x line alive for backwards compatibility until at least
two years from the release of Python 2.2. In practice, you will be able
to continue to use the Python 2.x line for several years after Python 3.0
is released, so you can take your time with the transition. Sites are
expected to have both Python 2.x and Python 3.x installed simultaneously.
Why isn’t true division called float division?
Because I want to keep the door open to possibly introducing rationals
and making 1/2 return a rational rather than a float. See PEP 239.
Why is there a need for __truediv__ and __itruediv__?
We don’t want to make user-defined classes second-class citizens.
Certainly not with the type/class unification going on.
How do I write code that works under the classic rules as well as under the new rules without using // or a future division statement?
Use x*1.0/y for true division, divmod(x, y)[0] (PEP 228) for int
division. Especially the latter is best hidden inside a function. You
may also write float(x)/y for true division if you are sure that you
don’t expect complex numbers. If you know your integers are never
negative, you can use int(x/y) – while the documentation of int()
says that int() can round or truncate depending on the C
implementation, we know of no C implementation that doesn’t truncate, and
we’re going to change the spec for int() to promise truncation. Note
that classic division (and floor division) round towards negative
infinity, while int() rounds towards zero, giving different answers
for negative numbers.
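The work-arounds above can be wrapped in small helpers, for example (a sketch that behaves the same under the classic and the new rules):
def true_div(x, y):
    return x * 1.0 / y        # preserves sign and also works for complex arguments

def int_div(x, y):
    return divmod(x, y)[0]    # the floor quotient under either set of rules

print true_div(1, 2)    # 0.5
print int_div(7, 2)     # 3
print int_div(-7, 2)    # -4  (floors towards negative infinity)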
How do I specify the division semantics for input(), compile(), execfile(), eval() and exec?
They inherit the choice from the invoking module. PEP 236 now lists
this as a resolved problem, referring to PEP 264.
What about code compiled by the codeop module?
This is dealt with properly; see PEP 264.
Will there be conversion tools or aids?
Certainly. While these are outside the scope of the PEP, I should point
out two simple tools that will be released with Python 2.2a3:
Tools/scripts/finddiv.py finds division operators (slightly smarter
than grep /) and Tools/scripts/fixdiv.py can produce patches based
on run-time analysis.
Why is my question not answered here?
Because we weren’t aware of it. If it’s been discussed on c.l.py and you
believe the answer is of general interest, please notify the second
author. (We don’t have the time or inclination to answer every question
sent in private email, hence the requirement that it be discussed on
c.l.py first.)
Implementation
Essentially everything mentioned here is implemented in CVS and will be
released with Python 2.2a3; most of it was already released with Python 2.2a2.
Copyright
This document has been placed in the public domain.
| Final | PEP 238 – Changing the Division Operator | Standards Track | The current division (/) operator has an ambiguous meaning for numerical
arguments: it returns the floor of the mathematical result of division if the
arguments are ints or longs, but it returns a reasonable approximation of the
division result if the arguments are floats or complex. This makes
expressions expecting float or complex results error-prone when integers are
not expected but possible as inputs. |
PEP 239 – Adding a Rational Type to Python
Author:
Christopher A. Craig <python-pep at ccraig.org>, Moshe Zadka <moshez at zadka.site.co.il>
Status:
Rejected
Type:
Standards Track
Created:
11-Mar-2001
Python-Version:
2.2
Post-History:
16-Mar-2001
Table of Contents
Abstract
BDFL Pronouncement
Rationale
RationalType
The rational() Builtin
Open Issues
References
Copyright
Abstract
Python has no numeric type with the semantics of an unboundedly
precise rational number. This proposal explains the semantics of
such a type, and suggests builtin functions and literals to
support such a type. This PEP suggests no literals for rational
numbers; that is left for another PEP.
BDFL Pronouncement
This PEP is rejected. The needs outlined in the rationale section
have been addressed to some extent by the acceptance of PEP 327
for decimal arithmetic. Guido also noted, “Rational arithmetic
was the default ‘exact’ arithmetic in ABC and it did not work out as
expected”. See the python-dev discussion on 17 June 2005 [1].
Postscript: With the acceptance of PEP 3141, “A Type Hierarchy
for Numbers”, a ‘Rational’ numeric abstract base class was added
with a concrete implementation in the ‘fractions’ module.
Rationale
While sometimes slower and more memory intensive (in general,
unboundedly so), rational arithmetic captures more closely the
mathematical ideal of numbers, and tends to have behavior which is
less surprising to newbies. Though many Python implementations of
rational numbers have been written, none of these exist in the
core, or are documented in any way. This has made them much less
accessible to people who are less Python-savvy.
RationalType
There will be a new numeric type added called RationalType. Its
unary operators will do the obvious thing. Binary operators will
coerce integers and long integers to rationals, and rationals to
floats and complexes.
The following attributes will be supported: .numerator and
.denominator. The language definition will promise that:
r.denominator * r == r.numerator
that the GCD of the numerator and the denominator is 1, and that
the denominator is positive.
The method r.trim(max_denominator) will return the closest
rational s to r such that abs(s.denominator) <= max_denominator.
The rational() Builtin
This function will have the signature rational(n, d=1). n and d
must both be integers, long integers or rationals. A guarantee is
made that:
rational(n, d) * d == n
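Python never grew a rational() builtin, but the Fraction type added much later (via PEP 3141, in the fractions module) satisfies the same invariants, so it can serve as a stand-in to illustrate the semantics proposed above:
from fractions import Fraction            # available from Python 2.6 on

r = Fraction(6, -4)                       # normalised to -3/2
print r.numerator, r.denominator          # -3 2: GCD of 1, positive denominator
print r.denominator * r == r.numerator    # True, as the proposal promises
print Fraction(7, 2) * 2 == 7             # the rational(n, d) * d == n guarantee
print r.limit_denominator(10)             # rough analogue of the proposed r.trim()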
Open Issues
Maybe the type should be called rat instead of rational.
Somebody proposed that we have “abstract” pure mathematical
types named complex, real, rational, integer, and “concrete”
representation types with names like float, rat, long, int.
Should a rational number with an integer value be allowed as a
sequence index? For example, should s[5/3 - 2/3] be equivalent
to s[1]?
Should shift and mask operators be allowed for rational numbers?
For rational numbers with integer values?
Marcin ‘Qrczak’ Kowalczyk summarized the arguments for and
against unifying ints with rationals nicely on c.l.py.
Arguments for unifying ints with rationals:
Since 2 == 2/1 and maybe str(2/1) == '2', it reduces surprises
where objects seem equal but behave differently.
/ can be freely used for integer division when I know that
there is no remainder (if I am wrong and there is a remainder,
there will probably be some exception later).
Arguments against:
When I use the result of / as a sequence index, it’s usually
an error which should not be hidden by making the program
work for some data, since it will break for other data.
(this assumes that after unification int and rational would be
different types:) Types should rarely depend on values. It’s
easier to reason when the type of a variable is known: I know
how I can use it. I can determine that something is an int and
expect that other objects used in this place will be ints too.
(this assumes the same type for them:) Int is a good type in
itself, not to be mixed with rationals. The fact that
something is an integer should be expressible as a statement
about its type. Many operations require ints and don’t accept
rationals. It’s natural to think about them as about different
types.
References
[1]
Raymond Hettinger, Propose rejection of PEPs 239 and 240 – a builtin
rational type and rational literals
https://mail.python.org/pipermail/python-dev/2005-June/054281.html
Copyright
This document has been placed in the public domain.
| Rejected | PEP 239 – Adding a Rational Type to Python | Standards Track | Python has no numeric type with the semantics of an unboundedly
precise rational number. This proposal explains the semantics of
such a type, and suggests builtin functions and literals to
support such a type. This PEP suggests no literals for rational
numbers; that is left for another PEP. |
PEP 240 – Adding a Rational Literal to Python
Author:
Christopher A. Craig <python-pep at ccraig.org>, Moshe Zadka <moshez at zadka.site.co.il>
Status:
Rejected
Type:
Standards Track
Created:
11-Mar-2001
Python-Version:
2.2
Post-History:
16-Mar-2001
Table of Contents
Abstract
BDFL Pronouncement
Rationale
Proposal
Backwards Compatibility
Common Objections
References
Copyright
Abstract
A different PEP suggests adding a builtin rational type to
Python. This PEP suggests changing the ddd.ddd float literal to a
rational in Python, and modifying non-integer division to return
it.
BDFL Pronouncement
This PEP is rejected. The needs outlined in the rationale section
have been addressed to some extent by the acceptance of PEP 327
for decimal arithmetic. Guido also noted, “Rational arithmetic
was the default ‘exact’ arithmetic in ABC and it did not work out as
expected”. See the python-dev discussion on 17 June 2005 [1].
Rationale
Rational numbers are useful for exact and unsurprising arithmetic.
They give the correct results people have been taught in various
math classes. Making the “obvious” non-integer type one with more
predictable semantics will surprise new programmers less than
using floating point numbers. As quite a few posts on c.l.py and
on [email protected] have shown, people often get bit by strange
semantics of floating point numbers: for example, round(0.98, 2)
still gives 0.97999999999999998.
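The surprise can be made concrete with the fractions module from
later Pythons (used here only for illustration): the float literal
0.98 does not hold the value 98/100.
from fractions import Fraction

exact = Fraction(98, 100)            # what a rational literal would mean
stored = Fraction.from_float(0.98)   # what the binary float actually holds
print(exact == stored)               # -> False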
Proposal
Literals conforming to the regular expression '\d*\.\d*' will be
rational numbers.
Backwards Compatibility
The only backwards compatible issue is the type of literals
mentioned above. The following migration is suggested:
The next Python after approval will allow
from __future__ import rational_literals
to cause all such literals to be treated as rational numbers.
Python 3.0 will have a warning, turned on by default, about
such literals in the absence of a __future__ statement. The
warning message will contain information about the __future__
statement, and indicate that to get floating point literals,
they should be suffixed with “e0”.
Python 3.1 will have the warning turned off by default. This
warning will stay in place for 24 months, at which time the
literals will be rationals and the warning will be removed.
Common Objections
Rationals are slow and memory intensive!
(Relax, I’m not taking floats away, I’m just adding two more characters.
1e0 will still be a float)
Rationals must present themselves as a decimal float or they will be
horrible for users expecting decimals (i.e. str(.5) should return '.5' and
not '1/2'). This means that many rationals must be truncated at some
point, which gives us a new loss of precision.
References
[1]
Raymond Hettinger, Propose rejection of PEPs 239 and 240 – a builtin
rational type and rational literals
https://mail.python.org/pipermail/python-dev/2005-June/054281.html
Copyright
This document has been placed in the public domain.
| Rejected | PEP 240 – Adding a Rational Literal to Python | Standards Track | A different PEP suggests adding a builtin rational type to
Python. This PEP suggests changing the ddd.ddd float literal to a
rational in Python, and modifying non-integer division to return
it. |
PEP 242 – Numeric Kinds
Author:
Paul F. Dubois <paul at pfdubois.com>
Status:
Rejected
Type:
Standards Track
Created:
17-Mar-2001
Python-Version:
2.2
Post-History:
17-Apr-2001
Table of Contents
Abstract
Rationale
Supported Kinds of Ints and Floats
Kind Objects
Attributes of Module kinds
Complex Numbers
Examples
Open Issues
Rejection
Copyright
Abstract
This proposal gives the user optional control over the precision
and range of numeric computations so that a computation can be
written once and run anywhere with at least the desired precision
and range. It is backward compatible with existing code. The
meaning of decimal literals is clarified.
Rationale
Currently, in every language except Fortran 90, it is impossible to
write a program in a portable way that uses floating point and
gets roughly the same answer regardless of platform – or refuses
to compile if that is not possible.
floating point type, equal to a C double in the C implementation.
No type exists corresponding to single or quad floats. It would
complicate the language to try to introduce such types directly
and their subsequent use would not be portable. This proposal is
similar to the Fortran 90 “kind” solution, adapted to the Python
environment. With this facility an entire calculation can be
switched from one level of precision to another by changing a
single line. If the desired precision does not exist on a
particular machine, the program will fail rather than get the
wrong answer. Since coding in this style would involve an early
call to the routine that will fail, this is the next best thing to
not compiling.
Supported Kinds of Ints and Floats
Complex numbers are treated separately below, since Python can be
built without them.
Each Python compiler may define as many “kinds” of integer and
floating point numbers as it likes, except that it must support at
least two kinds of integer corresponding to the existing int and
long, and must support at least one kind of floating point number,
equivalent to the present float.
The range and precision of these required kinds are processor
dependent, as at present, except for the “long integer” kind,
which can hold an arbitrary integer.
The built-in functions int(), long(), and float() convert inputs
to these default kinds as they do at present. (Note that a
Unicode string is actually a different “kind” of string and that a
sufficiently knowledgeable person might be able to expand this PEP
to cover that case.)
Within each type (integer, floating) the compiler supports a
linearly-ordered set of kinds, with the ordering determined by the
ability to hold numbers of an increased range and/or precision.
Kind Objects
Two new standard functions are defined in a module named “kinds”.
They return callable objects called kind objects. Each int or
floating kind object f has the signature result = f(x), and each
complex kind object has the signature result = f(x, y=0.).
int_kind(n)
For an integer argument n >= 1, return a callable object whose
result is an integer kind that will hold an integer number in
the open interval (-10**n, 10**n). The kind object accepts
arguments that are integers including longs. If n == 0,
returns the kind object corresponding to the Python literal 0.
float_kind(nd, n)
For nd >= 0 and n >= 1, return a callable object whose result
is a floating point kind that will hold a floating-point
number with at least nd digits of precision and a base-10
exponent in the closed interval [-n, n]. The kind object
accepts arguments that are integer or float.
If nd and n are both zero, returns the kind object
corresponding to the Python literal 0.0.
The compiler will return a kind object corresponding to the least
of its available set of kinds for that type that has the desired
properties. If no kind with the desired qualities exists in a
given implementation an OverflowError exception is thrown. A kind
function converts its argument to the target kind, but if the
result does not fit in the target kind’s range, an OverflowError
exception is thrown.
Besides their callable behavior, kind objects have attributes
giving the traits of the kind in question.
name is the name of the kind. The standard kinds are called
int, long, double.
typecode is a single-letter string that would be appropriate
for use with Numeric or module array to form an array of this
kind. The standard types’ typecodes are ‘i’, ‘O’, ‘d’
respectively.
Integer kinds have these additional attributes: MAX, equal to
the maximum permissible integer of this kind, or None for the
long kind. MIN, equal to the most negative permissible integer
of this kind, or None for the long kind.
Float kinds have these additional attributes whose properties
are equal to the corresponding value for the corresponding C
type in the standard header file “float.h”. MAX, MIN, DIG,
MANT_DIG, EPSILON, MAX_EXP, MAX_10_EXP, MIN_EXP,
MIN_10_EXP, RADIX, ROUNDS
(== FLT_RADIX, FLT_ROUNDS in float.h). These
values are of type integer except for MAX, MIN, and EPSILON,
which are of the Python floating type to which the kind
corresponds.
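A rough sketch, not the kinds module itself, of how float_kind()
might be written for an implementation that offers only the single
C-double kind; the class name FloatKind and the reliance on
sys.float_info are illustrative assumptions:
import sys

class FloatKind(object):
    name = 'double'
    typecode = 'd'
    MAX = sys.float_info.max
    MIN = sys.float_info.min
    DIG = sys.float_info.dig
    EPSILON = sys.float_info.epsilon

    def __call__(self, x):
        # convert to this kind; a full implementation would raise
        # OverflowError if the value does not fit the kind's range
        return float(x)

_double = FloatKind()

def float_kind(nd, n):
    # return the least kind with nd digits and exponent range [-n, n],
    # or raise OverflowError if no such kind exists
    if nd <= sys.float_info.dig and n <= sys.float_info.max_10_exp:
        return _double
    raise OverflowError('no floating point kind with %d digits '
                        'and exponent range %d' % (nd, n))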
Attributes of Module kinds
int_kinds is a list of the available integer kinds, sorted from lowest
to highest kind. By definition, int_kinds[-1] is the long kind.
float_kinds is a list of the available floating point kinds, sorted
from lowest to highest kind.
default_int_kind is the kind object corresponding to the Python
literal 0
default_long_kind is the kind object corresponding to the Python
literal 0L
default_float_kind is the kind object corresponding to the Python
literal 0.0
Complex Numbers
If supported, complex numbers have real and imaginary parts that
are floating-point numbers with the same kind. A Python compiler
must support a complex analog of each floating point kind it
supports, if it supports complex numbers at all.
If complex numbers are supported, the following are available in
module kinds:
complex_kind(nd, n)
Return a callable object whose result is a complex kind that
will hold a complex number each of whose components (.real,
.imag) is of kind float_kind(nd, n). The kind object will
accept one argument that is of any integer, real, or complex
kind, or two arguments, each integer or real.
complex_kinds is a list of the available complex kinds, sorted
from lowest to highest kind.
default_complex_kind is the kind object corresponding to the
Python literal 0.0j. The name of this kind
is doublecomplex, and its typecode is ‘D’.
Complex kind objects have these additional attributes:
floatkind is the kind object of the corresponding float type.
Examples
In module myprecision.py:
import kinds
tinyint = kinds.int_kind(1)
single = kinds.float_kind(6, 90)
double = kinds.float_kind(15, 300)
csingle = kinds.complex_kind(6, 90)
In the rest of my code:
from myprecision import tinyint, single, double, csingle
n = tinyint(3)
x = double(1.e20)
z = 1.2
# builtin float gets you the default float kind, properties unknown
w = x * float(x)
# but in the following case we know w has kind "double".
w = x * double(z)
u = csingle(x + z * 1.0j)
u2 = csingle(x+z, 1.0)
Note how the entire calculation can then be changed to a higher
precision by changing the arguments in myprecision.py.
Comment: note that you aren’t promised that single != double; but
you are promised that double(1.e20) will hold a number with 15
decimal digits of precision and a range up to 10**300 or that the
float_kind call will fail.
Open Issues
No open issues have been raised at this time.
Rejection
This PEP has been closed by the author. The kinds module will not
be added to the standard library.
There was no opposition to the proposal but only mild interest in
using it, not enough to justify adding the module to the standard
library. Instead, it will be made available as a separate
distribution item at the Numerical Python site. At the next
release of Numerical Python, it will no longer be a part of the
Numeric distribution.
Copyright
This document has been placed in the public domain.
| Rejected | PEP 242 – Numeric Kinds | Standards Track | This proposal gives the user optional control over the precision
and range of numeric computations so that a computation can be
written once and run anywhere with at least the desired precision
and range. It is backward compatible with existing code. The
meaning of decimal literals is clarified. |
PEP 246 – Object Adaptation
Author:
Alex Martelli <aleaxit at gmail.com>,
Clark C. Evans <cce at clarkevans.com>
Status:
Rejected
Type:
Standards Track
Created:
21-Mar-2001
Python-Version:
2.5
Post-History:
29-Mar-2001, 10-Jan-2005
Table of Contents
Rejection Notice
Abstract
Motivation
Requirements
Specification
Intended Use
Guido’s “Optional Static Typing: Stop the Flames” Blog Entry
Reference Implementation and Test Cases
Relationship To Microsoft’s QueryInterface
Questions and Answers
Backwards Compatibility
Credits
References and Footnotes
Copyright
Rejection Notice
I’m rejecting this PEP. Something much better is about to happen;
it’s too early to say exactly what, but it’s not going to resemble
the proposal in this PEP too closely so it’s better to start a new
PEP. GvR.
Abstract
This proposal puts forth an extensible cooperative mechanism for
the adaptation of an incoming object to a context which expects an
object supporting a specific protocol (say a specific type, class,
or interface).
This proposal provides a built-in “adapt” function that, for any
object X and any protocol Y, can be used to ask the Python
environment for a version of X compliant with Y. Behind the
scenes, the mechanism asks object X: “Are you now, or do you know
how to wrap yourself to provide, a supporter of protocol Y?”.
And, if this request fails, the function then asks protocol Y:
“Does object X support you, or do you know how to wrap it to
obtain such a supporter?” This duality is important, because
protocols can be developed after objects are, or vice-versa, and
this PEP lets either case be supported non-invasively with regard
to the pre-existing component[s].
Lastly, if neither the object nor the protocol know about each
other, the mechanism may check a registry of adapter factories,
where callables able to adapt certain objects to certain protocols
can be registered dynamically. This part of the proposal is
optional: the same effect could be obtained by ensuring that
certain kinds of protocols and/or objects can accept dynamic
registration of adapter factories, for example via suitable custom
metaclasses. However, this optional part allows adaptation to be
made more flexible and powerful in a way that is not invasive to
either protocols or other objects, thereby gaining for adaptation
much the same kind of advantage that Python standard library’s
“copy_reg” module offers for serialization and persistence.
This proposal does not specifically constrain what a protocol
is, what “compliance to a protocol” exactly means, nor what
precisely a wrapper is supposed to do. These omissions are
intended to leave this proposal compatible with both existing
categories of protocols, such as the existing system of type and
classes, as well as the many concepts for “interfaces” as such
which have been proposed or implemented for Python, such as the
one in PEP 245, the one in Zope3 [2], or the ones discussed in
the BDFL’s Artima blog in late 2004 and early 2005 [3]. However,
some reflections on these subjects, intended to be suggestive and
not normative, are also included.
Motivation
Currently there is no standardized mechanism in Python for
checking if an object supports a particular protocol. Typically,
existence of certain methods, particularly special methods such as
__getitem__, is used as an indicator of support for a particular
protocol. This technique works well for a few specific protocols
blessed by the BDFL (Benevolent Dictator for Life). The same can
be said for the alternative technique based on checking
‘isinstance’ (the built-in class “basestring” exists specifically
to let you use ‘isinstance’ to check if an object “is a [built-in]
string”). Neither approach is easily and generally extensible to
other protocols, defined by applications and third party
frameworks, outside of the standard Python core.
Even more important than checking if an object already supports a
given protocol can be the task of obtaining a suitable adapter
(wrapper or proxy) for the object, if the support is not already
there. For example, a string does not support the file protocol,
but you can wrap it into a StringIO instance to obtain an object
which does support that protocol and gets its data from the string
it wraps; that way, you can pass the string (suitably wrapped) to
subsystems which require as their arguments objects that are
readable as files. Unfortunately, there is currently no general,
standardized way to automate this extremely important kind of
“adaptation by wrapping” operations.
Typically, today, when you pass objects to a context expecting a
particular protocol, either the object knows about the context and
provides its own wrapper or the context knows about the object and
wraps it appropriately. The difficulty with these approaches is
that such adaptations are one-offs, are not centralized in a
single place of the users code, and are not executed with a common
technique, etc. This lack of standardization increases code
duplication with the same adapter occurring in more than one place
or it encourages classes to be re-written instead of adapted. In
either case, maintainability suffers.
It would be very nice to have a standard function that can be
called upon to verify an object’s compliance with a particular
protocol and provide for a wrapper if one is readily available –
all without having to hunt through each library’s documentation
for the incantation appropriate to that particular, specific case.
Requirements
When considering an object’s compliance with a protocol, there are
several cases to be examined:
(a) When the protocol is a type or class, and the object has
exactly that type or is an instance of exactly that class (not
a subclass). In this case, compliance is automatic.
(b) When the object knows about the protocol, and either considers
itself compliant, or knows how to wrap itself suitably.
(c) When the protocol knows about the object, and either the object
already complies or the protocol knows how to suitably wrap the
object.
(d) When the protocol is a type or class, and the object is a
member of a subclass. This is distinct from the first case (a)
above, since inheritance (unfortunately) does not necessarily
imply substitutability, and thus must be handled carefully.
(e) When the context knows about the object and the protocol and
knows how to adapt the object so that the required protocol is
satisfied. This could use an adapter registry or similar
approaches.
The fourth case above is subtle. A break of substitutability can
occur when a subclass changes a method’s signature, or restricts
the domains accepted for a method’s argument (“co-variance” on
arguments types), or extends the co-domain to include return
values which the base class may never produce (“contra-variance”
on return types). While compliance based on class inheritance
should be automatic, this proposal allows an object to signal
that it is not compliant with a base class protocol.
If Python gains some standard “official” mechanism for interfaces,
however, then the “fast-path” case (a) can and should be extended
to the protocol being an interface, and the object an instance of
a type or class claiming compliance with that interface. For
example, if the “interface” keyword discussed in [3] is adopted
into Python, the “fast path” of case (a) could be used, since
instantiable classes implementing an interface would not be
allowed to break substitutability.
Specification
This proposal introduces a new built-in function, adapt(), which
is the basis for supporting these requirements.
The adapt() function has three parameters:
obj, the object to be adapted
protocol, the protocol requested of the object
alternate, an optional object to return if the object could
not be adapted
A successful result of the adapt() function returns either the
object passed obj, if the object is already compliant with the
protocol, or a secondary object wrapper, which provides a view
of the object compliant with the protocol. The definition of
wrapper is deliberately vague, and a wrapper is allowed to be a
full object with its own state if necessary. However, the design
intention is that an adaptation wrapper should hold a reference to
the original object it wraps, plus (if needed) a minimum of extra
state which it cannot delegate to the wrapper object.
An excellent example of adaptation wrapper is an instance of
StringIO which adapts an incoming string to be read as if it was a
textfile: the wrapper holds a reference to the string, but deals
by itself with the “current point of reading” (from where in the
wrapped strings will the characters for the next, e.g., “readline”
call come from), because it cannot delegate it to the wrapped
object (a string has no concept of “current point of reading” nor
anything else even remotely related to that concept).
A failure to adapt the object to the protocol raises an
AdaptationError (which is a subclass of TypeError), unless the
alternate parameter is used, in which case the alternate argument
is returned instead.
To enable the first case listed in the requirements, the adapt()
function first checks to see if the object’s type or the object’s
class is identical to the protocol. If so, then the adapt()
function returns the object directly without further ado.
To enable the second case, when the object knows about the
protocol, the object must have a __conform__() method. This
optional method takes two arguments:
self, the object being adapted
protocol, the protocol requested
Just like any other special method in today’s Python, __conform__
is meant to be taken from the object’s class, not from the object
itself (for all objects, except instances of “classic classes” as
long as we must still support the latter). This enables a
possible ‘tp_conform’ slot to be added to Python’s type objects in
the future, if desired.
The object may return itself as the result of __conform__ to
indicate compliance. Alternatively, the object also has the
option of returning a wrapper object compliant with the protocol.
If the object knows it is not compliant although it belongs to a
type which is a subclass of the protocol, then __conform__ should
raise a LiskovViolation exception (a subclass of AdaptationError).
Finally, if the object cannot determine its compliance, it should
return None to enable the remaining mechanisms. If __conform__
raises any other exception, “adapt” just propagates it.
To enable the third case, when the protocol knows about the
object, the protocol must have an __adapt__() method. This
optional method takes two arguments:
self, the protocol requested
obj, the object being adapted
If the protocol finds the object to be compliant, it can return
obj directly. Alternatively, the method may return a wrapper
compliant with the protocol. If the protocol knows the object is
not compliant although it belongs to a type which is a subclass of
the protocol, then __adapt__ should raise a LiskovViolation
exception (a subclass of AdaptationError). Finally, when
compliance cannot be determined, this method should return None to
enable the remaining mechanisms. If __adapt__ raises any other
exception, “adapt” just propagates it.
The fourth case, when the object’s class is a sub-class of the
protocol, is handled by the built-in adapt() function. Under
normal circumstances, if “isinstance(object, protocol)” then
adapt() returns the object directly. However, if the object is
not substitutable, either the __conform__() or __adapt__()
methods, as mentioned above, may raise a LiskovViolation (a
subclass of AdaptationError) to prevent this default behavior.
If none of the first four mechanisms worked, as a last-ditch
attempt, ‘adapt’ falls back to checking a registry of adapter
factories, indexed by the protocol and the type of obj, to meet
the fifth case. Adapter factories may be dynamically registered
and removed from that registry to provide “third party adaptation”
of objects and protocols that have no knowledge of each other, in
a way that is not invasive to either the object or the protocols.
Intended Use
The typical intended use of adapt is in code which has received
some object X “from the outside”, either as an argument or as the
result of calling some function, and needs to use that object
according to a certain protocol Y. A “protocol” such as Y is
meant to indicate an interface, usually enriched with some
semantics constraints (such as are typically used in the “design
by contract” approach), and often also some pragmatical
expectation (such as “the running time of a certain operation
should be no worse than O(N)”, or the like); this proposal does
not specify how protocols are designed as such, nor how or whether
compliance to a protocol is checked, nor what the consequences may
be of claiming compliance but not actually delivering it (lack of
“syntactic” compliance – names and signatures of methods – will
often lead to exceptions being raised; lack of “semantic”
compliance may lead to subtle and perhaps occasional errors
[imagine a method claiming to be threadsafe but being in fact
subject to some subtle race condition, for example]; lack of
“pragmatic” compliance will generally lead to code that runs
correctly, but too slowly for practical use, or sometimes to
exhaustion of resources such as memory or disk space).
When protocol Y is a concrete type or class, compliance to it is
intended to mean that an object allows all of the operations that
could be performed on instances of Y, with “comparable” semantics
and pragmatics. For example, a hypothetical object X that is a
singly-linked list should not claim compliance with protocol
‘list’, even if it implements all of list’s methods: the fact that
indexing X[n] takes time O(n), while the same operation would be
O(1) on a list, makes a difference. On the other hand, an
instance of StringIO.StringIO does comply with protocol ‘file’,
even though some operations (such as those of module ‘marshal’)
may not allow substituting one for the other because they perform
explicit type-checks: such type-checks are “beyond the pale” from
the point of view of protocol compliance.
While this convention makes it feasible to use a concrete type or
class as a protocol for purposes of this proposal, such use will
often not be optimal. Rarely will the code calling ‘adapt’ need
ALL of the features of a certain concrete type, particularly for
such rich types as file, list, dict; rarely can all those features
be provided by a wrapper with good pragmatics, as well as syntax
and semantics that are really the same as a concrete type’s.
Rather, once this proposal is accepted, a design effort needs to
start to identify the essential characteristics of those protocols
which are currently used in Python, particularly within the
standard library, and to formalize them using some kind of
“interface” construct (not necessarily requiring any new syntax: a
simple custom metaclass would let us get started, and the results
of the effort could later be migrated to whatever “interface”
construct is eventually accepted into the Python language). With
such a palette of more formally designed protocols, the code using
‘adapt’ will be able to ask for, say, adaptation into “a filelike
object that is readable and seekable”, or whatever else it
specifically needs with some decent level of “granularity”, rather
than too-generically asking for compliance to the ‘file’ protocol.
Adaptation is NOT “casting”. When object X itself does not
conform to protocol Y, adapting X to Y means using some kind of
wrapper object Z, which holds a reference to X, and implements
whatever operation Y requires, mostly by delegating to X in
appropriate ways. For example, if X is a string and Y is ‘file’,
the proper way to adapt X to Y is to make a StringIO(X), NOT to
call file(X) [which would try to open a file named by X].
Numeric types and protocols may need to be an exception to this
“adaptation is not casting” mantra, however.
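As a small, purely illustrative sketch of that point, a third party
could register a factory that performs the string-to-file adaptation
through the registry (this uses the reference implementation given
later in this PEP, and the Python 2-era StringIO module and built-in
file type as the protocol):
from StringIO import StringIO
from adapt import adapt, registerAdapterFactory

def str_as_file(obj, protocol, alternate):
    # wrap, don't cast: the string becomes readable as a file
    return StringIO(obj)

registerAdapterFactory(str, file, str_as_file)

f = adapt('line one\nline two\n', file)
print(f.readline())        # reads 'line one\n' from the wrapped string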
Guido’s “Optional Static Typing: Stop the Flames” Blog Entry
A typical simple use case of adaptation would be:
def f(X):
X = adapt(X, Y)
# continue by using X according to protocol Y
In [4], the BDFL has proposed introducing the syntax:
def f(X: Y):
# continue by using X according to protocol Y
to be a handy shortcut for exactly this typical use of adapt, and,
as a basis for experimentation until the parser has been modified
to accept this new syntax, a semantically equivalent decorator:
@arguments(Y)
def f(X):
# continue by using X according to protocol Y
These BDFL ideas are fully compatible with this proposal, as are
other of Guido’s suggestions in the same blog.
Reference Implementation and Test Cases
The following reference implementation does not deal with classic
classes: it considers only new-style classes. If classic classes
need to be supported, the additions should be pretty clear, though
a bit messy (x.__class__ vs type(x), getting bound methods directly
from the object rather than from the type, and so on).
-----------------------------------------------------------------
adapt.py
-----------------------------------------------------------------
class AdaptationError(TypeError):
pass
class LiskovViolation(AdaptationError):
pass
_adapter_factory_registry = {}
def registerAdapterFactory(objtype, protocol, factory):
_adapter_factory_registry[objtype, protocol] = factory
def unregisterAdapterFactory(objtype, protocol):
del _adapter_factory_registry[objtype, protocol]
def _adapt_by_registry(obj, protocol, alternate):
factory = _adapter_factory_registry.get((type(obj), protocol))
if factory is None:
adapter = alternate
else:
adapter = factory(obj, protocol, alternate)
if adapter is AdaptationError:
raise AdaptationError
else:
return adapter
def adapt(obj, protocol, alternate=AdaptationError):
t = type(obj)
# (a) first check to see if object has the exact protocol
if t is protocol:
return obj
try:
# (b) next check if t.__conform__ exists & likes protocol
conform = getattr(t, '__conform__', None)
if conform is not None:
result = conform(obj, protocol)
if result is not None:
return result
# (c) then check if protocol.__adapt__ exists & likes obj
adapt = getattr(type(protocol), '__adapt__', None)
if adapt is not None:
result = adapt(protocol, obj)
if result is not None:
return result
except LiskovViolation:
pass
else:
# (d) check if object is instance of protocol
if isinstance(obj, protocol):
return obj
# (e) last chance: try the registry
return _adapt_by_registry(obj, protocol, alternate)
-----------------------------------------------------------------
test.py
-----------------------------------------------------------------
from adapt import AdaptationError, LiskovViolation, adapt
from adapt import registerAdapterFactory, unregisterAdapterFactory
import doctest
class A(object):
'''
>>> a = A()
>>> a is adapt(a, A) # case (a)
True
'''
class B(A):
'''
>>> b = B()
>>> b is adapt(b, A) # case (d)
True
'''
class C(object):
'''
>>> c = C()
>>> c is adapt(c, B) # case (b)
True
>>> c is adapt(c, A) # a failure case
Traceback (most recent call last):
...
AdaptationError
'''
def __conform__(self, protocol):
if protocol is B:
return self
class D(C):
'''
>>> d = D()
>>> d is adapt(d, D) # case (a)
True
>>> d is adapt(d, C) # case (d) explicitly blocked
Traceback (most recent call last):
...
AdaptationError
'''
def __conform__(self, protocol):
if protocol is C:
raise LiskovViolation
class MetaAdaptingProtocol(type):
def __adapt__(cls, obj):
return cls.adapt(obj)
class AdaptingProtocol:
__metaclass__ = MetaAdaptingProtocol
@classmethod
def adapt(cls, obj):
pass
class E(AdaptingProtocol):
'''
>>> a = A()
>>> a is adapt(a, E) # case (c)
True
>>> b = A()
>>> b is adapt(b, E) # case (c)
True
>>> c = C()
>>> c is adapt(c, E) # a failure case
Traceback (most recent call last):
...
AdaptationError
'''
@classmethod
def adapt(cls, obj):
if isinstance(obj, A):
return obj
class F(object):
pass
def adapt_F_to_A(obj, protocol, alternate):
if isinstance(obj, F) and issubclass(protocol, A):
return obj
else:
return alternate
def test_registry():
'''
>>> f = F()
>>> f is adapt(f, A) # a failure case
Traceback (most recent call last):
...
AdaptationError
>>> registerAdapterFactory(F, A, adapt_F_to_A)
>>> f is adapt(f, A) # case (e)
True
>>> unregisterAdapterFactory(F, A)
>>> f is adapt(f, A) # a failure case again
Traceback (most recent call last):
...
AdaptationError
>>> registerAdapterFactory(F, A, adapt_F_to_A)
'''
doctest.testmod()
Relationship To Microsoft’s QueryInterface
Although this proposal has some similarities to Microsoft’s (COM)
QueryInterface, it differs in a number of aspects.
First, adaptation in this proposal is bi-directional, allowing the
interface (protocol) to be queried as well, which gives more
dynamic abilities (more Pythonic). Second, there is no special
“IUnknown” interface which can be used to check or obtain the
original unwrapped object identity, although this could be
proposed as one of those “special” blessed interface protocol
identifiers. Third, with QueryInterface, once an object supports
a particular interface it must always thereafter support this
interface; this proposal makes no such guarantee, since, in
particular, adapter factories can be dynamically added to the
registry and removed again later.
Fourth, implementations of Microsoft’s QueryInterface must support
a kind of equivalence relation – they must be reflexive,
symmetrical, and transitive, in specific senses. The equivalent
conditions for protocol adaptation according to this proposal
would also represent desirable properties:
# given, to start with, a successful adaptation:
X_as_Y = adapt(X, Y)
# reflexive:
assert adapt(X_as_Y, Y) is X_as_Y
# transitive:
X_as_Z = adapt(X, Z, None)
X_as_Y_as_Z = adapt(X_as_Y, Z, None)
assert (X_as_Y_as_Z is None) == (X_as_Z is None)
# symmetrical:
X_as_Z_as_Y = adapt(X_as_Z, Y, None)
assert (X_as_Y_as_Z is None) == (X_as_Z_as_Y is None)
However, while these properties are desirable, it may not be
possible to guarantee them in all cases. QueryInterface can
impose their equivalents because it dictates, to some extent, how
objects, interfaces, and adapters are to be coded; this proposal
is meant to be non-invasive, and usable to “retrofit”
adaptation between two frameworks coded in mutual ignorance of
each other, without having to modify either framework.
Transitivity of adaptation is in fact somewhat controversial, as
is the relationship (if any) between adaptation and inheritance.
The latter would not be controversial if we knew that inheritance
always implies Liskov substitutability, which, unfortunately we
don’t. If some special form, such as the interfaces proposed in
[4], could indeed ensure Liskov substitutability, then for that
kind of inheritance, only, we could perhaps assert that if X
conforms to Y and Y inherits from Z then X conforms to Z… but
only if substitutability was taken in a very strong sense to
include semantics and pragmatics, which seems doubtful. (For what
it’s worth: in QueryInterface, inheritance does not require nor
imply conformance). This proposal does not include any “strong”
effects of inheritance, beyond the small ones specifically
detailed above.
Similarly, transitivity might imply multiple “internal” adaptation
passes to get the result of adapt(X, Z) via some intermediate Y,
intrinsically like adapt(adapt(X, Y), Z), for some suitable and
automatically chosen Y. Again, this may perhaps be feasible under
suitably strong constraints, but the practical implications of
such a scheme are still unclear to this proposal’s authors. Thus,
this proposal does not include any automatic or implicit
transitivity of adaptation, under whatever circumstances.
For an implementation of the original version of this proposal
which performs more advanced processing in terms of transitivity,
and of the effects of inheritance, see Phillip J. Eby’s
PyProtocols [5]. The documentation accompanying PyProtocols is
well worth studying for its considerations on how adapters should
be coded and used, and on how adaptation can remove any need for
typechecking in application code.
Questions and Answers
Q: What benefit does this proposal provide?
A: The typical Python programmer is an integrator, someone who is
connecting components from various suppliers. Often, to
interface between these components, one needs intermediate
adapters. Usually the burden falls upon the programmer to
study the interface exposed by one component and required by
another, determine if they are directly compatible, or develop
an adapter. Sometimes a supplier may even include the
appropriate adapter, but even then searching for the adapter
and figuring out how to deploy the adapter takes time.
This technique enables suppliers to work with each other
directly, by implementing __conform__ or __adapt__ as
necessary. This frees the integrator from making their own
adapters. In essence, this allows the components to have a
simple dialogue among themselves. The integrator simply
connects one component to another, and if the types don’t
automatically match an adapting mechanism is built-in.
Moreover, thanks to the adapter registry, a “fourth party” may
supply adapters to allow interoperation of frameworks which
are totally unaware of each other, non-invasively, and without
requiring the integrator to do anything more than install the
appropriate adapter factories in the registry at start-up.
As long as libraries and frameworks cooperate with the
adaptation infrastructure proposed here (essentially by
defining and using protocols appropriately, and calling
‘adapt’ as needed on arguments received and results of
call-back factory functions), the integrator’s work thereby
becomes much simpler.
For example, consider SAX1 and SAX2 interfaces: there is an
adapter required to switch between them. Normally, the
programmer must be aware of this; however, with this
adaptation proposal in place, this is no longer the case –
indeed, thanks to the adapter registry, this need may be
removed even if the framework supplying SAX1 and the one
requiring SAX2 are unaware of each other.
Q: Why does this have to be built-in, can’t it be standalone?
A: Yes, it does work standalone. However, if it is built-in, it
has a greater chance of usage. The value of this proposal is
primarily in standardization: having libraries and frameworks
coming from different suppliers, including the Python standard
library, use a single approach to adaptation. Furthermore:
The mechanism is by its very nature a singleton.
If used frequently, it will be much faster as a built-in.
It is extensible and unassuming.
Once ‘adapt’ is built-in, it can support syntax extensions
and even be of some help to a type inference system.
Q: Why the verbs __conform__ and __adapt__?
A: conform, verb intransitive
To correspond in form or character; be similar.
To act or be in accord or agreement; comply.
To act in accordance with current customs or modes.
adapt, verb transitive
To make suitable to or fit for a specific use or situation.
Source: The American Heritage Dictionary of the English
Language, Third Edition
Backwards Compatibility
There should be no problem with backwards compatibility unless
someone had used the special names __conform__ or __adapt__ in
other ways, but this seems unlikely, and, in any case, user code
should never use special names for non-standard purposes.
This proposal could be implemented and tested without changes to
the interpreter.
Credits
This proposal was created in large part by the feedback of the
talented individuals on the main Python mailing lists and the
type-sig list. To name specific contributors (with apologies if
we missed anyone!), besides the proposal’s authors: the main
suggestions for the proposal’s first versions came from Paul
Prescod, with significant feedback from Robin Thomas, and we also
borrowed ideas from Marcin ‘Qrczak’ Kowalczyk and Carlos Ribeiro.
Other contributors (via comments) include Michel Pelletier, Jeremy
Hylton, Aahz Maruch, Fredrik Lundh, Rainer Deyke, Timothy Delaney,
and Huaiyu Zhu. The current version owes a lot to discussions
with (among others) Phillip J. Eby, Guido van Rossum, Bruce Eckel,
Jim Fulton, and Ka-Ping Yee, and to study and reflection of their
proposals, implementations, and documentation about use and
adaptation of interfaces and protocols in Python.
References and Footnotes
[2]
http://www.zope.org/Wikis/Interfaces/FrontPage
[3] (1, 2)
http://www.artima.com/weblogs/index.jsp?blogger=guido
[4] (1, 2)
http://www.artima.com/weblogs/viewpost.jsp?thread=87182
[5]
http://peak.telecommunity.com/PyProtocols.html
Copyright
This document has been placed in the public domain.
| Rejected | PEP 246 – Object Adaptation | Standards Track | This proposal puts forth an extensible cooperative mechanism for
the adaptation of an incoming object to a context which expects an
object supporting a specific protocol (say a specific type, class,
or interface). |
PEP 247 – API for Cryptographic Hash Functions
Author:
A.M. Kuchling <amk at amk.ca>
Status:
Final
Type:
Informational
Created:
23-Mar-2001
Post-History:
20-Sep-2001
Table of Contents
Abstract
Specification
Rationale
Changes
Acknowledgements
Copyright
Abstract
There are several different modules available that implement cryptographic
hashing algorithms such as MD5 or SHA. This document specifies a standard API
for such algorithms, to make it easier to switch between different
implementations.
Specification
All hashing modules should present the same interface. Additional methods or
variables can be added, but those described in this document should always be
present.
Hash function modules define one function:
new([string]) (unkeyed hashes)
new([key], [string]) (keyed hashes)
Create a new hashing object and return it. The first form is for hashes
that are unkeyed, such as MD5 or SHA. For keyed hashes such as HMAC, key
is a required parameter containing a string giving the key to use. In both
cases, the optional string parameter, if supplied, will be immediately
hashed into the object’s starting state, as if obj.update(string)
was called.
After creating a hashing object, arbitrary strings can be fed into the
object using its update() method, and the hash value can be obtained at
any time by calling the object’s digest() method.
Arbitrary additional keyword arguments can be added to this function, but if
they’re not supplied, sensible default values should be used. For example,
rounds and digest_size keywords could be added for a hash function
which supports a variable number of rounds and several different output
sizes, and they should default to values believed to be secure.
Hash function modules define one variable:
digest_size
An integer value; the size of the digest produced by the hashing objects
created by this module, measured in bytes. You could also obtain this value
by creating a sample object and accessing its digest_size attribute, but
it can be convenient to have this value available from the module. Hashes
with a variable output size will set this variable to None.
Hashing objects require a single attribute:
digest_size
This attribute is identical to the module-level digest_size variable,
measuring the size of the digest produced by the hashing object, measured in
bytes. If the hash has a variable output size, this output size must be
chosen when the hashing object is created, and this attribute must contain
the selected size. Therefore, None is not a legal value for this
attribute.
Hashing objects require the following methods:
copy()
Return a separate copy of this hashing object. An update to this copy won’t
affect the original object.
digest()
Return the hash value of this hashing object as a string containing 8-bit
data. The object is not altered in any way by this function; you can
continue updating the object after calling this function.
hexdigest()
Return the hash value of this hashing object as a string containing
hexadecimal digits. Lowercase letters should be used for the digits a
through f. Like the .digest() method, this method mustn’t alter the
object.
update(string)
Hash string into the current state of the hashing object. update() can
be called any number of times during a hashing object’s lifetime.
Hashing modules can define additional module-level functions or object methods
and still be compliant with this specification.
Here’s an example, using a module named MD5:
>>> from Crypto.Hash import MD5
>>> m = MD5.new()
>>> m.digest_size
16
>>> m.update('abc')
>>> m.digest()
'\x90\x01P\x98<\xd2O\xb0\xd6\x96?}(\xe1\x7fr'
>>> m.hexdigest()
'900150983cd24fb0d6963f7d28e17f72'
>>> MD5.new('abc').digest()
'\x90\x01P\x98<\xd2O\xb0\xd6\x96?}(\xe1\x7fr'
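For comparison, here is a minimal sketch of an unkeyed module that
satisfies this interface; it is built on the later hashlib module
purely for illustration and is not part of this specification:
import hashlib

digest_size = 16                      # MD5 digests are 16 bytes

class _Hash(object):
    digest_size = digest_size

    def __init__(self, data=b''):
        self._h = hashlib.md5(data)

    def update(self, data):
        self._h.update(data)

    def digest(self):
        return self._h.digest()

    def hexdigest(self):
        return self._h.hexdigest()

    def copy(self):
        # a separate copy; updating it won't affect the original
        clone = _Hash()
        clone._h = self._h.copy()
        return clone

def new(data=b''):
    # create and return a new hashing object, per the interface above
    return _Hash(data)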
Rationale
The digest size is measured in bytes, not bits, even though hash algorithm
sizes are usually quoted in bits; MD5 is a 128-bit algorithm and not a 16-byte
one, for example. This is because, in the sample code I looked at, the length
in bytes is often needed (to seek ahead or behind in a file; to compute the
length of an output string) while the length in bits is rarely used. Therefore,
the burden will fall on the few people actually needing the size in bits, who
will have to multiply digest_size by 8.
It’s been suggested that the update() method would be better named
append(). However, that method is really causing the current state of the
hashing object to be updated, and update() is already used by the md5 and
sha modules included with Python, so it seems simplest to leave the name
update() alone.
The order of the constructor’s arguments for keyed hashes was a sticky issue.
It wasn’t clear whether the key should come first or second. It’s a required
parameter, and the usual convention is to place required parameters first, but
that also means that the string parameter moves from the first position to
the second. It would be possible to get confused and pass a single argument to
a keyed hash, thinking that you’re passing an initial string to an unkeyed
hash, but it doesn’t seem worth making the interface for keyed hashes more
obscure to avoid this potential error.
Changes
2001-09-17: Renamed clear() to reset(); added digest_size attribute
to objects; added .hexdigest() method.
2001-09-20: Removed reset() method completely.
2001-09-28: Set digest_size to None for variable-size hashes.
Acknowledgements
Thanks to Aahz, Andrew Archibald, Rich Salz, Itamar Shtull-Trauring, and the
readers of the python-crypto list for their comments on this PEP.
Copyright
This document has been placed in the public domain.
| Final | PEP 247 – API for Cryptographic Hash Functions | Informational | There are several different modules available that implement cryptographic
hashing algorithms such as MD5 or SHA. This document specifies a standard API
for such algorithms, to make it easier to switch between different
implementations. |
PEP 250 – Using site-packages on Windows
Author:
Paul Moore <p.f.moore at gmail.com>
Status:
Final
Type:
Standards Track
Created:
30-Mar-2001
Python-Version:
2.2
Post-History:
30-Mar-2001
Table of Contents
Abstract
Motivation
Implementation
Notes
Open Issues
Copyright
Abstract
The standard Python distribution includes a directory
Lib/site-packages, which is used on Unix platforms to hold
locally installed modules and packages. The site.py module
distributed with Python includes support for locating other
modules in the site-packages directory.
This PEP proposes that the site-packages directory should be used
on the Windows platform in a similar manner.
Motivation
On Windows platforms, the default setting for sys.path does not
include a directory suitable for users to install locally
developed modules. The “expected” location appears to be the
directory containing the Python executable itself. This is also
the location where distutils (and distutils-generated installers)
installs packages. Including locally developed code in the same
directory as installed executables is not good practice.
Clearly, users can manipulate sys.path, either in a locally
modified site.py, or in a suitable sitecustomize.py, or even via
.pth files. However, there should be a standard location for such
files, rather than relying on every individual site having to set
their own policy.
In addition, with distutils becoming more prevalent as a means of
distributing modules, the need for a standard install location for
distributed modules will become more common. It would be better
to define such a standard now, rather than later when more
distutils-based packages exist which will need rebuilding.
It is relevant to note that prior to Python 2.1, the site-packages
directory was not included in sys.path for Macintosh platforms.
This has been changed in 2.1, and the Macintosh now includes
site-packages in sys.path,
leaving Windows as the only major platform with no site-specific
modules directory.
Implementation
The implementation of this feature is fairly trivial. All that
would be required is a change to site.py, to change the section
setting sitedirs. The Python 2.1 version has:
if os.sep == '/':
sitedirs = [makepath(prefix,
"lib",
"python" + sys.version[:3],
"site-packages"),
makepath(prefix, "lib", "site-python")]
elif os.sep == ':':
sitedirs = [makepath(prefix, "lib", "site-packages")]
else:
sitedirs = [prefix]
A suitable change would be to simply replace the last 4 lines with:
else:
sitedirs = [prefix, makepath(prefix, "lib", "site-packages")]
Changes would also be required to distutils, to reflect this change
in policy. A patch is available on Sourceforge, patch ID 445744,
which implements this change. Note that the patch checks the Python
version and only invokes the new behaviour for Python versions from
2.2 onwards. This is to ensure that distutils remains compatible
with earlier versions of Python.
Finally, the executable code which implements the Windows installer
used by the bdist_wininst command will need changing to use the new
location. A separate patch is available for this, currently
maintained by Thomas Heller.
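A quick, illustrative way to confirm the change once it is in place
is to check sys.path on the Windows installation itself (this is
only a sanity check, not part of the patch):
import os
import sys

site_packages = os.path.normcase(
    os.path.join(sys.prefix, 'lib', 'site-packages'))
print(site_packages in [os.path.normcase(p) for p in sys.path])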
Notes
This change does not preclude packages using the current
location – the change only adds a directory to sys.path, it
does not remove anything.
Both the current location (sys.prefix) and the new directory
(site-packages) are included in sitedirs, so that .pth files
will be recognised in either location.
This proposal adds a single additional site-packages directory
to sitedirs. On Unix platforms, two directories are added, one
for version-independent files (Python code) and one for
version-dependent code (C extensions). This is necessary on
Unix, as the sitedirs include a common (across Python versions)
package location, in /usr/local by default. As there is no such
common location available on Windows, there is also no need for
having two separate package directories.
If users want to keep DLLs in a single location on Windows, rather
than keeping them in the package directory, the DLLs subdirectory
of the Python install directory is already available for that
purpose. Adding an extra directory solely for DLLs should not be
necessary.
Open Issues
Comments from Unix users indicate that there may be issues with
the current setup on the Unix platform. Rather than become
involved in cross-platform issues, this PEP specifically limits
itself to the Windows platform, leaving changes for other platforms
to be covered in other PEPs.
There could be issues with applications which embed Python. To the
author’s knowledge, there should be no problem as a result of this
change. There have been no comments (supportive or otherwise) from
users who embed Python.
Copyright
This document has been placed in the public domain.
| Final | PEP 250 – Using site-packages on Windows | Standards Track | The standard Python distribution includes a directory
Lib/site-packages, which is used on Unix platforms to hold
locally installed modules and packages. The site.py module
distributed with Python includes support for locating other
modules in the site-packages directory. |
PEP 251 – Python 2.2 Release Schedule
Author:
Barry Warsaw <barry at python.org>, Guido van Rossum <guido at python.org>
Status:
Final
Type:
Informational
Topic:
Release
Created:
17-Apr-2001
Python-Version:
2.2
Post-History:
14-Aug-2001
Table of Contents
Abstract
Release Schedule
Release Manager
Release Mechanics
New features for Python 2.2
References
Copyright
Abstract
This document describes the Python 2.2 development and release
schedule. The schedule primarily concerns itself with PEP-sized
items. Small bug fixes and changes will occur up until the first
beta release.
The schedule below represents the actual release dates of Python
2.2. Note that any subsequent maintenance releases of Python 2.2
should be covered by separate PEPs.
Release Schedule
Tentative future release dates. Note that we’ve slipped this
compared to the schedule posted around the release of 2.2a1.
21-Dec-2001: 2.2 [Released] (final release)
14-Dec-2001: 2.2c1 [Released]
14-Nov-2001: 2.2b2 [Released]
19-Oct-2001: 2.2b1 [Released]
28-Sep-2001: 2.2a4 [Released]
7-Sep-2001: 2.2a3 [Released]
22-Aug-2001: 2.2a2 [Released]
18-Jul-2001: 2.2a1 [Released]
Release Manager
Barry Warsaw was the Python 2.2 release manager.
Release Mechanics
We experimented with a new mechanism for releases: a week before
every alpha, beta or other release, we forked off a branch which
became the release. Changes to the branch are limited to the
release manager and his designated ‘bots. This experiment was
deemed a success and should be observed for future releases. See
PEP 101 for the actual release mechanics.
New features for Python 2.2
The following new features are introduced in Python 2.2. For a
more detailed account, see Misc/NEWS [2] in the Python
distribution, or Andrew Kuchling’s “What’s New in Python 2.2”
document [3].
iterators (PEP 234)
generators (PEP 255)
unifying long ints and plain ints (PEP 237)
division (PEP 238)
unification of types and classes (PEP 252, PEP 253)
References
[2]
Misc/NEWS file from CVS
http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/python/python/dist/src/Misc/NEWS?rev=1.337.2.4&content-type=text/vnd.viewcvs-markup
[3]
Andrew Kuchling, What’s New in Python 2.2
http://www.python.org/doc/2.2.1/whatsnew/whatsnew22.html
Copyright
This document has been placed in the public domain.
| Final | PEP 251 – Python 2.2 Release Schedule | Informational | This document describes the Python 2.2 development and release
schedule. The schedule primarily concerns itself with PEP-sized
items. Small bug fixes and changes will occur up until the first
beta release. |
PEP 252 – Making Types Look More Like Classes
Author:
Guido van Rossum <guido at python.org>
Status:
Final
Type:
Standards Track
Created:
19-Apr-2001
Python-Version:
2.2
Post-History:
Table of Contents
Abstract
Introduction
Introspection APIs
Specification of the class-based introspection API
Specification of the attribute descriptor API
Static methods and class methods
C API
Discussion
Examples
Backwards compatibility
Warnings and Errors
Implementation
References
Copyright
Abstract
This PEP proposes changes to the introspection API for types that
makes them look more like classes, and their instances more like
class instances. For example, type(x) will be equivalent to
x.__class__ for most built-in types. When C is x.__class__,
x.meth(a) will generally be equivalent to C.meth(x, a), and
C.__dict__ contains x’s methods and other attributes.
This PEP also introduces a new approach to specifying attributes,
using attribute descriptors, or descriptors for short.
Descriptors unify and generalize several different common
mechanisms used for describing attributes: a descriptor can
describe a method, a typed field in the object structure, or a
generalized attribute represented by getter and setter functions.
Based on the generalized descriptor API, this PEP also introduces
a way to declare class methods and static methods.
[Editor’s note: the ideas described in this PEP have been incorporated
into Python. The PEP no longer accurately describes the implementation.]
Introduction
One of Python’s oldest language warts is the difference between
classes and types. For example, you can’t directly subclass the
dictionary type, and the introspection interface for finding out
what methods and instance variables an object has is different for
types and for classes.
Healing the class/type split is a big effort, because it affects
many aspects of how Python is implemented. This PEP concerns
itself with making the introspection API for types look the same
as that for classes. Other PEPs will propose making classes look
more like types, and subclassing from built-in types; these topics
are not on the table for this PEP.
Introspection APIs
Introspection concerns itself with finding out what attributes an
object has. Python’s very general getattr/setattr API makes it
impossible to guarantee that there always is a way to get a list
of all attributes supported by a specific object, but in practice
two conventions have appeared that together work for almost all
objects. I’ll call them the class-based introspection API and the
type-based introspection API; class API and type API for short.
The class-based introspection API is used primarily for class
instances; it is also used by Jim Fulton’s ExtensionClasses. It
assumes that all data attributes of an object x are stored in the
dictionary x.__dict__, and that all methods and class variables
can be found by inspection of x’s class, written as x.__class__.
Classes have a __dict__ attribute, which yields a dictionary
containing methods and class variables defined by the class
itself, and a __bases__ attribute, which is a tuple of base
classes that must be inspected recursively. Some assumptions here
are:
attributes defined in the instance dict override attributes
defined by the object’s class;
attributes defined in a derived class override attributes
defined in a base class;
attributes in an earlier base class (meaning occurring earlier
in __bases__) override attributes in a later base class.
(The last two rules together are often summarized as the
left-to-right, depth-first rule for attribute search. This is the
classic Python attribute lookup rule. Note that PEP 253 will
propose to change the attribute lookup order, and if accepted,
this PEP will follow suit.)
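For illustration, here is a minimal sketch of these conventions on a
small class hierarchy (the names are invented for the example):

class Base:
    def greet(self):
        return "hello"

class Derived(Base):
    color = "red"            # class variable, found via __class__

d = Derived()
d.size = 42                  # dynamic attribute, stored in d.__dict__

assert d.__dict__ == {"size": 42}
assert d.__class__ is Derived
assert "color" in Derived.__dict__
assert Derived.__bases__ == (Base,)   # Base is searched recursively
assert d.greet() == "hello"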
The type-based introspection API is supported in one form or
another by most built-in objects. It uses two special attributes,
__members__ and __methods__. The __methods__ attribute, if
present, is a list of method names supported by the object. The
__members__ attribute, if present, is a list of data attribute
names supported by the object.
The type API is sometimes combined with a __dict__ that works the
same as for instances (for example for function objects in
Python 2.1, f.__dict__ contains f’s dynamic attributes, while
f.__members__ lists the names of f’s statically defined
attributes).
Some caution must be exercised: some objects don’t list their
“intrinsic” attributes (like __dict__ and __doc__) in __members__,
while others do; sometimes attribute names occur both in
__members__ or __methods__ and as keys in __dict__, in which case
it’s anybody’s guess whether the value found in __dict__ is used
or not.
The type API has never been carefully specified. It is part of
Python folklore, and most third party extensions support it
because they follow examples that support it. Also, any type that
uses Py_FindMethod() and/or PyMember_Get() in its tp_getattr
handler supports it, because these two functions special-case the
attribute names __methods__ and __members__, respectively.
Jim Fulton’s ExtensionClasses ignore the type API, and instead
emulate the class API, which is more powerful. In this PEP, I
propose to phase out the type API in favor of supporting the class
API for all types.
One argument in favor of the class API is that it doesn’t require
you to create an instance in order to find out which attributes a
type supports; this in turn is useful for documentation
processors. For example, the socket module exports the SocketType
object, but this currently doesn’t tell us what methods are
defined on socket objects. Using the class API, SocketType would
show exactly what the methods for socket objects are, and we can
even extract their docstrings, without creating a socket. (Since
this is a C extension module, the source-scanning approach to
docstring extraction isn’t feasible in this case.)
Specification of the class-based introspection API
Objects may have two kinds of attributes: static and dynamic. The
names and sometimes other properties of static attributes are
knowable by inspection of the object’s type or class, which is
accessible through obj.__class__ or type(obj). (I’m using type
and class interchangeably; a clumsy but descriptive term that fits
both is “meta-object”.)
(XXX static and dynamic are not great terms to use here, because
“static” attributes may actually behave quite dynamically, and
because they have nothing to do with static class members in C++
or Java. Barry suggests to use immutable and mutable instead, but
those words already have precise and different meanings in
slightly different contexts, so I think that would still be
confusing.)
Examples of dynamic attributes are instance variables of class
instances, module attributes, etc. Examples of static attributes
are the methods of built-in objects like lists and dictionaries,
and the attributes of frame and code objects (f.f_code,
c.co_filename, etc.). When an object with dynamic attributes
exposes these through its __dict__ attribute, __dict__ is a static
attribute.
The names and values of dynamic properties are typically stored in
a dictionary, and this dictionary is typically accessible as
obj.__dict__. The rest of this specification is more concerned
with discovering the names and properties of static attributes
than with dynamic attributes; the latter are easily discovered by
inspection of obj.__dict__.
In the discussion below, I distinguish two kinds of objects:
regular objects (like lists, ints, functions) and meta-objects.
Types and classes are meta-objects. Meta-objects are also regular
objects, but we’re mostly interested in them because they are
referenced by the __class__ attribute of regular objects (or by
the __bases__ attribute of other meta-objects).
The class introspection API consists of the following elements:
the __class__ and __dict__ attributes on regular objects;
the __bases__ and __dict__ attributes on meta-objects;
precedence rules;
attribute descriptors.
Together, these not only tell us about all attributes defined by
a meta-object, but they also help us calculate the value of a
specific attribute of a given object.
The __dict__ attribute on regular objects
A regular object may have a __dict__ attribute. If it does,
this should be a mapping (not necessarily a dictionary)
supporting at least __getitem__(), keys(), and has_key(). This
gives the dynamic attributes of the object. The keys in the
mapping give attribute names, and the corresponding values give
their values.
Typically, the value of an attribute with a given name is the
same object as the value corresponding to that name as a key in
the __dict__. In other words, obj.__dict__['spam'] is obj.spam.
(But see the precedence rules below; a static attribute with
the same name may override the dictionary item.)
The __class__ attribute on regular objects
A regular object usually has a __class__ attribute. If it
does, this references a meta-object. A meta-object can define
static attributes for the regular object whose __class__ it
is. This is normally done through the following mechanism:
The __dict__ attribute on meta-objects
A meta-object may have a __dict__ attribute, of the same form
as the __dict__ attribute for regular objects (a mapping but
not necessarily a dictionary). If it does, the keys of the
meta-object’s __dict__ are names of static attributes for the
corresponding regular object. The values are attribute
descriptors; we’ll explain these later. An unbound method is a
special case of an attribute descriptor.
Because a meta-object is also a regular object, the items in a
meta-object’s __dict__ correspond to attributes of the
meta-object; however, some transformation may be applied, and
bases (see below) may define additional dynamic attributes. In
other words, mobj.spam is not always mobj.__dict__['spam'].
(This rule contains a loophole because for classes, if
C.__dict__['spam'] is a function, C.spam is an unbound method
object.)
The __bases__ attribute on meta-objects
A meta-object may have a __bases__ attribute. If it does, this
should be a sequence (not necessarily a tuple) of other
meta-objects, the bases. An absent __bases__ is equivalent to
an empty sequence of bases. There must never be a cycle in the
relationship between meta-objects defined by __bases__
attributes; in other words, the __bases__ attributes define a
directed acyclic graph, with arcs pointing from derived
meta-objects to their base meta-objects. (It is not
necessarily a tree, since multiple classes can have the same
base class.) The __dict__ attributes of a meta-object in the
inheritance graph supply attribute descriptors for the regular
object whose __class__ attribute points to the root of the
inheritance tree (which is not the same as the root of the
inheritance hierarchy – rather more the opposite, at the
bottom given how inheritance trees are typically drawn).
Descriptors are first searched in the dictionary of the root
meta-object, then in its bases, according to a precedence rule
(see the next paragraph).
Precedence rules
When two meta-objects in the inheritance graph for a given
regular object both define an attribute descriptor with the
same name, the search order is up to the meta-object. This
allows different meta-objects to define different search
orders. In particular, classic classes use the old
left-to-right depth-first rule, while new-style classes use a
more advanced rule (see the section on method resolution order
in PEP 253).
When a dynamic attribute (one defined in a regular object’s
__dict__) has the same name as a static attribute (one defined
by a meta-object in the inheritance graph rooted at the regular
object’s __class__), the static attribute has precedence if it
is a descriptor that defines a __set__ method (see below);
otherwise (if there is no __set__ method) the dynamic attribute
has precedence. In other words, for data attributes (those
with a __set__ method), the static definition overrides the
dynamic definition, but for other attributes, dynamic overrides
static.
Rationale: we can’t have a simple rule like “static overrides
dynamic” or “dynamic overrides static”, because some static
attributes indeed override dynamic attributes; for example, a
key ‘__class__’ in an instance’s __dict__ is ignored in favor
of the statically defined __class__ pointer, but on the other
hand most keys in inst.__dict__ override attributes defined in
inst.__class__. Presence of a __set__ method on a descriptor
indicates that this is a data descriptor. (Even read-only data
descriptors have a __set__ method: it always raises an
exception.) Absence of a __set__ method on a descriptor
indicates that the descriptor isn’t interested in intercepting
assignment, and then the classic rule applies: an instance
variable with the same name as a method hides the method until
it is deleted.
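A small sketch of this rule, using the behavior as it was eventually
implemented for new-style classes (property is a data descriptor
because it defines __set__; a plain function is not):

class C(object):
    def _getprop(self):
        return "from the data descriptor"
    prop = property(_getprop)        # data descriptor: has __set__

    def meth(self):
        return "from the class"      # plain function: non-data descriptor

c = C()
c.__dict__["prop"] = "from the instance dict"
c.__dict__["meth"] = "from the instance dict"

assert c.prop == "from the data descriptor"   # static (data) wins
assert c.meth == "from the instance dict"     # dynamic wins over non-data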
Attribute descriptors
This is where it gets interesting – and messy. Attribute
descriptors (descriptors for short) are stored in the
meta-object’s __dict__ (or in the __dict__ of one of its
ancestors), and have two uses: a descriptor can be used to get
or set the corresponding attribute value on the (regular,
non-meta) object, and it has an additional interface that
describes the attribute for documentation and introspection
purposes.
There is little prior art in Python for designing the
descriptor’s interface, neither for getting/setting the value
nor for describing the attribute otherwise, except some trivial
properties (it’s reasonable to assume that __name__ and __doc__
should be the attribute’s name and docstring). I will propose
such an API below.
If an object found in the meta-object’s __dict__ is not an
attribute descriptor, backward compatibility dictates certain
minimal semantics. This basically means that if it is a Python
function or an unbound method, the attribute is a method;
otherwise, it is the default value for a dynamic data
attribute. Backwards compatibility also dictates that (in the
absence of a __setattr__ method) it is legal to assign to an
attribute corresponding to a method, and that this creates a
data attribute shadowing the method for this particular
instance. However, these semantics are only required for
backwards compatibility with regular classes.
The introspection API is a read-only API. We don’t define the
effect of assignment to any of the special attributes (__dict__,
__class__ and __bases__), nor the effect of assignment to the
items of a __dict__. Generally, such assignments should be
considered off-limits. A future PEP may define some semantics for
some such assignments. (Especially because currently instances
support assignment to __class__ and __dict__, and classes support
assignment to __bases__ and __dict__.)
Specification of the attribute descriptor API
Attribute descriptors may have the following attributes. In the
examples, x is an object, C is x.__class__, x.meth() is a method,
and x.ivar is a data attribute or instance variable. All
attributes are optional – a specific attribute may or may not be
present on a given descriptor. An absent attribute means that the
corresponding information is not available or the corresponding
functionality is not implemented.
__name__: the attribute name. Because of aliasing and renaming,
the attribute may (additionally or exclusively) be known under a
different name, but this is the name under which it was born.
Example: C.meth.__name__ == 'meth'.
__doc__: the attribute’s documentation string. This may be
None.
__objclass__: the class that declared this attribute. The
descriptor only applies to objects that are instances of this
class (this includes instances of its subclasses). Example:
C.meth.__objclass__ is C.
__get__(): a function callable with one or two arguments that
retrieves the attribute value from an object. This is also
referred to as a “binding” operation, because it may return a
“bound method” object in the case of method descriptors. The
first argument, X, is the object from which the attribute must
be retrieved or to which it must be bound. When X is None, the
optional second argument, T, should be a meta-object, and the
binding operation may return an unbound method restricted to
instances of T. When both X and T are specified, X should be an
instance of T. Exactly what is returned by the binding
operation depends on the semantics of the descriptor; for
example, static methods and class methods (see below) ignore the
instance and bind to the type instead.
__set__(): a function of two arguments that sets the attribute
value on the object. If the attribute is read-only, this method
may raise a TypeError or AttributeError exception (both are
allowed, because both are historically found for undefined or
unsettable attributes). Example:
C.ivar.__set__(x, y) ~~ x.ivar = y.
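As eventually implemented, these attributes can be inspected on the
built-in method descriptors themselves; a small sketch using the list
type:

x = [1, 2]
descr = type(x).__dict__["append"]   # the descriptor, taken from the type's dict

assert descr.__name__ == "append"
assert descr.__objclass__ is list
assert descr.__doc__ is not None     # the method's docstring

bound = descr.__get__(x)             # the "binding" operation
bound(3)                             # same as x.append(3)
assert x == [1, 2, 3]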
Static methods and class methods
The descriptor API makes it possible to add static methods and
class methods. Static methods are easy to describe: they behave
pretty much like static methods in C++ or Java. Here’s an
example:
class C:
    def foo(x, y):
        print "staticmethod", x, y
    foo = staticmethod(foo)

C.foo(1, 2)
c = C()
c.foo(1, 2)
Both the call C.foo(1, 2) and the call c.foo(1, 2) call foo() with
two arguments, and print “staticmethod 1 2”. No “self” is declared in
the definition of foo(), and no instance is required in the call.
The line “foo = staticmethod(foo)” in the class statement is the
crucial element: this makes foo() a static method. The built-in
staticmethod() wraps its function argument in a special kind of
descriptor whose __get__() method returns the original function
unchanged. Without this, the __get__() method of standard
function objects would have created a bound method object for
‘c.foo’ and an unbound method object for ‘C.foo’.
(XXX Barry suggests to use “sharedmethod” instead of
“staticmethod”, because the word static is being overloaded in so
many ways already. But I’m not sure if shared conveys the right
meaning.)
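The wrapping behavior can be sketched in pure Python; the name
my_staticmethod below is invented for the illustration (the real
staticmethod() is a built-in):

class my_staticmethod(object):
    def __init__(self, func):
        self.func = func
    def __get__(self, obj, cls=None):
        return self.func             # return the function unchanged: no binding

class C(object):
    def foo(x, y):
        return ("static", x, y)
    foo = my_staticmethod(foo)

assert C.foo(1, 2) == ("static", 1, 2)
assert C().foo(1, 2) == ("static", 1, 2)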
Class methods use a similar pattern to declare methods that
receive an implicit first argument that is the class for which
they are invoked. This has no C++ or Java equivalent, and is not
quite the same as what class methods are in Smalltalk, but may
serve a similar purpose. According to Armin Rigo, they are
similar to “virtual class methods” in Borland Pascal dialect
Delphi. (Python also has real metaclasses, and perhaps methods
defined in a metaclass have more right to the name “class method”;
but I expect that most programmers won’t be using metaclasses.)
Here’s an example:
class C:
    def foo(cls, y):
        print "classmethod", cls, y
    foo = classmethod(foo)

C.foo(1)
c = C()
c.foo(1)
Both the call C.foo(1) and the call c.foo(1) end up calling foo()
with two arguments, and print “classmethod __main__.C 1”. The
first argument of foo() is implied, and it is the class, even if
the method was invoked via an instance. Now let’s continue the
example:
class D(C):
    pass

D.foo(1)
d = D()
d.foo(1)
This prints “classmethod __main__.D 1” both times; in other words,
the class passed as the first argument of foo() is the class
involved in the call, not the class involved in the definition of
foo().
But notice this:
class E(C):
    def foo(cls, y):  # override C.foo
        print "E.foo() called"
        C.foo(y)
    foo = classmethod(foo)

E.foo(1)
e = E()
e.foo(1)
In this example, the call to C.foo() from E.foo() will see class C
as its first argument, not class E. This is to be expected, since
the call specifies the class C. But it stresses the difference
between these class methods and methods defined in metaclasses,
where an upcall to a metamethod would pass the target class as an
explicit first argument. (If you don’t understand this, don’t
worry, you’re not alone.) Note that calling cls.foo(y) would be a
mistake – it would cause infinite recursion. Also note that you
can’t specify an explicit ‘cls’ argument to a class method. If
you want this (e.g. the __new__ method in PEP 253 requires this),
use a static method with a class as its explicit first argument
instead.
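A small sketch of that workaround (the names P, Q and _make are
invented for the example):

class P(object):
    def _make(cls, value):
        obj = object.__new__(cls)
        obj.value = value
        return obj
    _make = staticmethod(_make)

class Q(P):
    pass

q = P._make(Q, 10)   # the class is passed explicitly, unlike with a class method
assert isinstance(q, Q) and q.value == 10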
C API
XXX The following is VERY rough text that I wrote with a different
audience in mind; I’ll have to go through this to edit it more.
XXX It also doesn’t go into enough detail for the C API.
A built-in type can declare special data attributes in two ways:
using a struct memberlist (defined in structmember.h) or a struct
getsetlist (defined in descrobject.h). The struct memberlist is
an old mechanism put to new use: each attribute has a descriptor
record including its name, an enum giving its type (various C
types are supported as well as PyObject *), an offset from the
start of the instance, and a read-only flag.
The struct getsetlist mechanism is new, and intended for cases
that don’t fit in that mold, because they either require
additional checking, or are plain calculated attributes. Each
attribute here has a name, a getter C function pointer, a setter C
function pointer, and a context pointer. The function pointers
are optional, so that for example setting the setter function
pointer to NULL makes a read-only attribute. The context pointer
is intended to pass auxiliary information to generic getter/setter
functions, but I haven’t found a need for this yet.
Note that there is also a similar mechanism to declare built-in
methods: these are PyMethodDef structures, which contain a name
and a C function pointer (and some flags for the calling
convention).
Traditionally, built-in types have had to define their own
tp_getattro and tp_setattro slot functions to make these attribute
definitions work (PyMethodDef and struct memberlist are quite
old). There are convenience functions that take an array of
PyMethodDef or memberlist structures, an object, and an attribute
name, and return or set the attribute if found in the list, or
raise an exception if not found. But these convenience functions
had to be explicitly called by the tp_getattro or tp_setattro
method of the specific type, and they did a linear search of the
array using strcmp() to find the array element describing the
requested attribute.
I now have a brand spanking new generic mechanism that improves
this situation substantially.
Pointers to arrays of PyMethodDef, memberlist, getsetlist
structures are part of the new type object (tp_methods,
tp_members, tp_getset).
At type initialization time (in PyType_InitDict()), for each
entry in those three arrays, a descriptor object is created and
placed in a dictionary that belongs to the type (tp_dict).
Descriptors are very lean objects that mostly point to the
corresponding structure. An implementation detail is that all
descriptors share the same object type, and a discriminator
field tells what kind of descriptor it is (method, member, or
getset).
As explained in PEP 252, descriptors have a get() method that
takes an object argument and returns that object’s attribute;
descriptors for writable attributes also have a set() method
that takes an object and a value and sets that object's
attribute. Note that the get() method also serves as a bind()
operation for methods, binding the unbound method implementation
to the object.
Instead of providing their own tp_getattro and tp_setattro
implementation, almost all built-in objects now place
PyObject_GenericGetAttr and (if they have any writable
attributes) PyObject_GenericSetAttr in their tp_getattro and
tp_setattro slots. (Or, they can leave these NULL, and inherit
them from the default base object, if they arrange for an
explicit call to PyType_InitDict() for the type before the first
instance is created.)
In the simplest case, PyObject_GenericGetAttr() does exactly one
dictionary lookup: it looks up the attribute name in the type’s
dictionary (obj->ob_type->tp_dict). Upon success, there are two
possibilities: the descriptor has a get method, or it doesn’t.
For speed, the get and set methods are type slots: tp_descr_get
and tp_descr_set. If the tp_descr_get slot is non-NULL, it is
called, passing the object as its only argument, and the return
value from this call is the result of the getattr operation. If
the tp_descr_get slot is NULL, as a fallback the descriptor
itself is returned (compare class attributes that are not
methods but simple values).
PyObject_GenericSetAttr() works very similarly, but uses the
tp_descr_set slot and calls it with the object and the new
attribute value; if the tp_descr_set slot is NULL, an
AttributeError is raised.
But now for a more complicated case. The approach described
above is suitable for most built-in objects such as lists,
strings, numbers. However, some object types have a dictionary
in each instance that can store arbitrary attributes. In fact,
when you use a class statement to subtype an existing built-in
type, you automatically get such a dictionary (unless you
explicitly turn it off, using another advanced feature,
__slots__). Let’s call this the instance dict, to distinguish
it from the type dict.
In the more complicated case, there’s a conflict between names
stored in the instance dict and names stored in the type dict.
If both dicts have an entry with the same key, which one should
we return? Looking at classic Python for guidance, I find
conflicting rules: for class instances, the instance dict
overrides the class dict, except for the special attributes
(like __dict__ and __class__), which have priority over the
instance dict.
I resolved this with the following set of rules, implemented in
PyObject_GenericGetAttr():
Look in the type dict. If you find a data descriptor, use
its get() method to produce the result. This takes care of
special attributes like __dict__ and __class__.
Look in the instance dict. If you find anything, that’s it.
(This takes care of the requirement that normally the
instance dict overrides the class dict.)
Look in the type dict again (in reality this uses the saved
result from step 1, of course). If you find a descriptor,
use its get() method; if you find something else, that’s it;
if it’s not there, raise AttributeError.
This requires a classification of descriptors as data and
nondata descriptors. The current implementation quite sensibly
classifies member and getset descriptors as data (even if they
are read-only!) and method descriptors as nondata.
Non-descriptors (like function pointers or plain values) are
also classified as non-data (!).
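The same three steps can be sketched in Python; this is a simplified
transcription rather than the actual C code (it only looks at the
type itself instead of the full MRO, and error handling is
abbreviated):

def generic_getattr(obj, name):
    tp = type(obj)
    descr = tp.__dict__.get(name)                    # step 1: the type dict
    if descr is not None and hasattr(type(descr), "__set__"):
        return type(descr).__get__(descr, obj, tp)   # data descriptor wins
    try:
        return obj.__dict__[name]                    # step 2: the instance dict
    except (AttributeError, KeyError):
        pass
    if descr is not None:                            # step 3: the type dict again
        getter = getattr(type(descr), "__get__", None)
        if getter is not None:
            return getter(descr, obj, tp)            # non-data descriptor: bind it
        return descr                                 # plain value
    raise AttributeError(name)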
This scheme has one drawback: in what I assume to be the most
common case, referencing an instance variable stored in the
instance dict, it does two dictionary lookups, whereas the
classic scheme did a quick test for attributes starting with two
underscores plus a single dictionary lookup. (Although the
implementation is sadly structured as instance_getattr() calling
instance_getattr1() calling instance_getattr2() which finally
calls PyDict_GetItem(), and the underscore test calls
PyString_AsString() rather than inlining this. I wonder if
optimizing the snot out of this might not be a good idea to
speed up Python 2.2, if we weren’t going to rip it all out. :-)
A benchmark verifies that in fact this is as fast as classic
instance variable lookup, so I’m no longer worried.
Modification for dynamic types: steps 1 and 3 look in the
dictionary of the type and all its base classes (in MRO
sequence, of course).
Discussion
XXX
Examples
Let’s look at lists. In classic Python, the method names of
lists were available as the __methods__ attribute of list objects:
>>> [].__methods__
['append', 'count', 'extend', 'index', 'insert', 'pop',
'remove', 'reverse', 'sort']
>>>
Under the new proposal, the __methods__ attribute no longer exists:
>>> [].__methods__
Traceback (most recent call last):
File "<stdin>", line 1, in ?
AttributeError: 'list' object has no attribute '__methods__'
>>>
Instead, you can get the same information from the list type:
>>> T = [].__class__
>>> T
<type 'list'>
>>> dir(T) # like T.__dict__.keys(), but sorted
['__add__', '__class__', '__contains__', '__eq__', '__ge__',
'__getattr__', '__getitem__', '__getslice__', '__gt__',
'__iadd__', '__imul__', '__init__', '__le__', '__len__',
'__lt__', '__mul__', '__ne__', '__new__', '__radd__',
'__repr__', '__rmul__', '__setitem__', '__setslice__', 'append',
'count', 'extend', 'index', 'insert', 'pop', 'remove',
'reverse', 'sort']
>>>
The new introspection API gives more information than the old one:
in addition to the regular methods, it also shows the methods that
are normally invoked through special notations, e.g. __iadd__
(+=), __len__ (len), __ne__ (!=).
You can invoke any method from this list directly:
>>> a = ['tic', 'tac']
>>> T.__len__(a) # same as len(a)
2
>>> T.append(a, 'toe') # same as a.append('toe')
>>> a
['tic', 'tac', 'toe']
>>>
This is just like it is for user-defined classes.
Notice a familiar yet surprising name in the list: __init__. This
is the domain of PEP 253.
Backwards compatibility
XXX
Warnings and Errors
XXX
Implementation
A partial implementation of this PEP is available from CVS as a
branch named “descr-branch”. To experiment with this
implementation, proceed to check out Python from CVS according to
the instructions at http://sourceforge.net/cvs/?group_id=5470 but
add the arguments “-r descr-branch” to the cvs checkout command.
(You can also start with an existing checkout and do “cvs update
-r descr-branch”.) For some examples of the features described
here, see the file Lib/test/test_descr.py.
Note: the code in this branch goes way beyond this PEP; it is also
the experimentation area for PEP 253 (Subtyping Built-in Types).
References
XXX
Copyright
This document has been placed in the public domain.
| Final | PEP 252 – Making Types Look More Like Classes | Standards Track | This PEP proposes changes to the introspection API for types that
makes them look more like classes, and their instances more like
class instances. For example, type(x) will be equivalent to
x.__class__ for most built-in types. When C is x.__class__,
x.meth(a) will generally be equivalent to C.meth(x, a), and
C.__dict__ contains x’s methods and other attributes. |
PEP 253 – Subtyping Built-in Types
Author:
Guido van Rossum <guido at python.org>
Status:
Final
Type:
Standards Track
Created:
14-May-2001
Python-Version:
2.2
Post-History:
Table of Contents
Abstract
Introduction
About metatypes
Making a type a factory for its instances
Preparing a type for subtyping
Creating a subtype of a built-in type in C
Subtyping in Python
Multiple inheritance
MRO: Method resolution order (the lookup rule)
XXX To be done
open issues
Implementation
References
Copyright
Abstract
This PEP proposes additions to the type object API that will allow
the creation of subtypes of built-in types, in C and in Python.
[Editor’s note: the ideas described in this PEP have been incorporated
into Python. The PEP no longer accurately describes the implementation.]
Introduction
Traditionally, types in Python have been created statically, by
declaring a global variable of type PyTypeObject and initializing
it with a static initializer. The slots in the type object
describe all aspects of a Python type that are relevant to the
Python interpreter. A few slots contain dimensional information
(like the basic allocation size of instances), others contain
various flags, but most slots are pointers to functions to
implement various kinds of behaviors. A NULL pointer means that
the type does not implement the specific behavior; in that case
the system may provide a default behavior or raise an exception
when the behavior is invoked for an instance of the type. Some
collections of function pointers that are usually defined together
are obtained indirectly via a pointer to an additional structure
containing more function pointers.
While the details of initializing a PyTypeObject structure haven’t
been documented as such, they are easily gleaned from the examples
in the source code, and I am assuming that the reader is
sufficiently familiar with the traditional way of creating new
Python types in C.
This PEP will introduce the following features:
a type can be a factory function for its instances
types can be subtyped in C
types can be subtyped in Python with the class statement
multiple inheritance from types is supported (insofar as
practical – you still can’t multiply inherit from list and
dictionary)
the standard coercion functions (int, tuple, str etc.) will
be redefined to be the corresponding type objects, which serve
as their own factory functions
a class statement can contain a __metaclass__ declaration,
specifying the metaclass to be used to create the new class
a class statement can contain a __slots__ declaration,
specifying the specific names of the instance variables
supported
This PEP builds on PEP 252, which adds standard introspection to
types; for example, when a particular type object initializes the
tp_hash slot, that type object has a __hash__ method when
introspected. PEP 252 also adds a dictionary to type objects
which contains all methods. At the Python level, this dictionary
is read-only for built-in types; at the C level, it is accessible
directly (but it should not be modified except as part of
initialization).
For binary compatibility, a flag bit in the tp_flags slot
indicates the existence of the various new slots in the type
object introduced below. Types that don’t have the
Py_TPFLAGS_HAVE_CLASS bit set in their tp_flags slot are assumed
to have NULL values for all the subtyping slots. (Warning: the
current implementation prototype is not yet consistent in its
checking of this flag bit. This should be fixed before the final
release.)
In current Python, a distinction is made between types and
classes. This PEP together with PEP 254 will remove that
distinction. However, for backwards compatibility the distinction
will probably remain for years to come, and without PEP 254, the
distinction is still large: types ultimately have a built-in type
as a base class, while classes ultimately derive from a
user-defined class. Therefore, in the rest of this PEP, I will
use the word type whenever I can – including base type or
supertype, derived type or subtype, and metatype. However,
sometimes the terminology necessarily blends, for example an
object’s type is given by its __class__ attribute, and subtyping
in Python is spelled with a class statement. If further
distinction is necessary, user-defined classes can be referred to
as “classic” classes.
About metatypes
Inevitably the discussion comes to metatypes (or metaclasses).
Metatypes are nothing new in Python: Python has always been able
to talk about the type of a type:
>>> a = 0
>>> type(a)
<type 'int'>
>>> type(type(a))
<type 'type'>
>>> type(type(type(a)))
<type 'type'>
>>>
In this example, type(a) is a “regular” type, and type(type(a)) is
a metatype. While as distributed all types have the same metatype
(PyType_Type, which is also its own metatype), this is not a
requirement, and in fact a useful and relevant 3rd party extension
(ExtensionClasses by Jim Fulton) creates an additional metatype.
The type of classic classes, known as types.ClassType, can also be
considered a distinct metatype.
A feature closely connected to metatypes is the “Don Beaudry
hook”, which says that if a metatype is callable, its instances
(which are regular types) can be subclassed (really subtyped)
using a Python class statement. I will use this rule to support
subtyping of built-in types, and in fact it greatly simplifies the
logic of class creation to always simply call the metatype. When
no base class is specified, a default metatype is called – the
default metatype is the “ClassType” object, so the class statement
will behave as before in the normal case. (This default can be
changed per module by setting the global variable __metaclass__.)
Python uses the concept of metatypes or metaclasses in a different
way than Smalltalk. In Smalltalk-80, there is a hierarchy of
metaclasses that mirrors the hierarchy of regular classes,
metaclasses map 1-1 to classes (except for some funny business at
the root of the hierarchy), and each class statement creates both
a regular class and its metaclass, putting class methods in the
metaclass and instance methods in the regular class.
Nice though this may be in the context of Smalltalk, it’s not
compatible with the traditional use of metatypes in Python, and I
prefer to continue in the Python way. This means that Python
metatypes are typically written in C, and may be shared between
many regular types. (It will be possible to subtype metatypes in
Python, so it won’t be absolutely necessary to write C to use
metatypes; but the power of Python metatypes will be limited. For
example, Python code will never be allowed to allocate raw memory
and initialize it at will.)
Metatypes determine various policies for types, such as what
happens when a type is called, how dynamic types are (whether a
type’s __dict__ can be modified after it is created), what the
method resolution order is, how instance attributes are looked
up, and so on.
I’ll argue that left-to-right depth-first is not the best
solution when you want to get the most use from multiple
inheritance.
I’ll argue that with multiple inheritance, the metatype of the
subtype must be a descendant of the metatypes of all base types.
I’ll come back to metatypes later.
Making a type a factory for its instances
Traditionally, for each type there is at least one C factory
function that creates instances of the type (PyTuple_New(),
PyInt_FromLong() and so on). These factory functions take care of
both allocating memory for the object and initializing that
memory. As of Python 2.0, they also have to interface with the
garbage collection subsystem, if the type chooses to participate
in garbage collection (which is optional, but strongly recommended
for so-called “container” types: types that may contain references
to other objects, and hence may participate in reference cycles).
In this proposal, type objects can be factory functions for their
instances, making the types directly callable from Python. This
mimics the way classes are instantiated. The C APIs for creating
instances of various built-in types will remain valid and in some
cases more efficient. Not all types will become their own factory
functions.
The type object has a new slot, tp_new, which can act as a factory
for instances of the type. Types are now callable, because the
tp_call slot is set in PyType_Type (the metatype); the function
looks for the tp_new slot of the type that is being called.
Explanation: the tp_call slot of a regular type object (such as
PyInt_Type or PyList_Type) defines what happens when instances
of that type are called; in particular, the tp_call slot in the
function type, PyFunction_Type, is the key to making functions
callable. As another example, PyInt_Type.tp_call is NULL, because
integers are not callable. The new paradigm makes type objects
callable. Since type objects are instances of their metatype
(PyType_Type), the metatype’s tp_call slot (PyType_Type.tp_call)
points to a function that is invoked when any type object is
called. Now, since each type has to do something different to
create an instance of itself, PyType_Type.tp_call immediately
defers to the tp_new slot of the type that is being called.
PyType_Type itself is also callable: its tp_new slot creates a new
type. This is used by the class statement (formalizing the Don
Beaudry hook, see above). And what makes PyType_Type callable?
The tp_call slot of its metatype – but since it is its own
metatype, that is its own tp_call slot!
If the type’s tp_new slot is NULL, an exception is raised.
Otherwise, the tp_new slot is called. The signature for the
tp_new slot is
PyObject *tp_new(PyTypeObject *type,
                 PyObject *args,
                 PyObject *kwds)
where ‘type’ is the type whose tp_new slot is called, and ‘args’
and ‘kwds’ are the sequential and keyword arguments to the call,
passed unchanged from tp_call. (The ‘type’ argument is used in
combination with inheritance, see below.)
There are no constraints on the object type that is returned,
although by convention it should be an instance of the given
type. It is not necessary that a new object is returned; a
reference to an existing object is fine too. The return value
should always be a new reference, owned by the caller.
Once the tp_new slot has returned an object, further initialization
is attempted by calling the tp_init() slot of the resulting
object’s type, if not NULL. This has the following signature:
int tp_init(PyObject *self,
            PyObject *args,
            PyObject *kwds)
It corresponds more closely to the __init__() method of classic
classes, and in fact is mapped to that by the slot/special-method
correspondence rules. The difference in responsibilities between
the tp_new() slot and the tp_init() slot lies in the invariants
they ensure. The tp_new() slot should ensure only the most
essential invariants, without which the C code that implements the
objects would break. The tp_init() slot should be used for
overridable user-specific initializations. Take for example the
dictionary type. The implementation has an internal pointer to a
hash table which should never be NULL. This invariant is taken
care of by the tp_new() slot for dictionaries. The dictionary
tp_init() slot, on the other hand, could be used to give the
dictionary an initial set of keys and values based on the
arguments passed in.
Note that for immutable object types, the initialization cannot be
done by the tp_init() slot: this would provide the Python user
with a way to change the initialization. Therefore, immutable
objects typically have an empty tp_init() implementation and do
all their initialization in their tp_new() slot.
You may wonder why the tp_new() slot shouldn’t call the tp_init()
slot itself. The reason is that in certain circumstances (like
support for persistent objects), it is important to be able to
create an object of a particular type without initializing it any
further than necessary. This may conveniently be done by calling
the tp_new() slot without calling tp_init(). It is also possible
that tp_init() is not called, or called more than once – its
operation should be robust even in these anomalous cases.
For some objects, tp_new() may return an existing object. For
example, the factory function for integers caches the integers -1
through 99. This is permissible only when the type argument to
tp_new() is the type that defined the tp_new() function (in the
example, if type == &PyInt_Type), and when the tp_init() slot for
this type does nothing. If the type argument differs, the
tp_new() call is initiated by a derived type’s tp_new() to
create the object and initialize the base type portion of the
object; in this case tp_new() should always return a new object
(or raise an exception).
Both tp_new() and tp_init() should receive exactly the same ‘args’
and ‘kwds’ arguments, and both should check that the arguments are
acceptable, because they may be called independently.
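The same division of labor is visible at the Python level once these
slots are exposed as __new__ and __init__; a small sketch (the class
names are invented for the example):

class Point(tuple):                   # immutable: all the work happens in __new__
    def __new__(cls, x, y):
        return tuple.__new__(cls, (x, y))

class Bag(dict):                      # mutable: __new__ sets up the essentials,
    def __init__(self, items=()):     # __init__ does the user-visible filling
        dict.__init__(self)
        for key, value in items:
            self[key] = value

p = Point(1, 2)
assert p == (1, 2)
b = Bag([("a", 1)])
assert b["a"] == 1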
There’s a third slot related to object creation: tp_alloc(). Its
responsibility is to allocate the memory for the object,
initialize the reference count (ob_refcnt) and the type pointer
(ob_type), and initialize the rest of the object to all zeros. It
should also register the object with the garbage collection
subsystem if the type supports garbage collection. This slot
exists so that derived types can override the memory allocation
policy (like which heap is being used) separately from the
initialization code. The signature is:
PyObject *tp_alloc(PyTypeObject *type, int nitems)
The type argument is the type of the new object. The nitems
argument is normally zero, except for objects with a variable
allocation size (basically strings, tuples, and longs). The
allocation size is given by the following expression:
type->tp_basicsize + nitems * type->tp_itemsize
The tp_alloc slot is only used for subclassable types. The tp_new()
function of the base class must call the tp_alloc() slot of the
type passed in as its first argument. It is the tp_new()
function’s responsibility to calculate the number of items. The
tp_alloc() slot will set the ob_size member of the new object if
the type->tp_itemsize member is nonzero.
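In the eventual implementation these two quantities are visible from
Python as the type attributes __basicsize__ and __itemsize__, so the
formula can be illustrated without writing any C (assuming those
attributes):

nitems = 3
size = tuple.__basicsize__ + nitems * tuple.__itemsize__   # the formula above
assert tuple.__itemsize__ > 0        # variable-size type
assert dict.__itemsize__ == 0        # fixed-size type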
(Note: in certain debugging compilation modes, the type structure
used to have members named tp_alloc and tp_free already, which were
counters for the number of allocations and deallocations. These
are renamed to tp_allocs and tp_deallocs.)
Standard implementations for tp_alloc() and tp_new() are
available. PyType_GenericAlloc() allocates an object from the
standard heap and initializes it properly. It uses the above
formula to determine the amount of memory to allocate, and takes
care of GC registration. The only reason not to use this
implementation would be to allocate objects from a different heap
(as is done by some very small frequently used objects like ints
and tuples). PyType_GenericNew() adds very little: it just calls
the type’s tp_alloc() slot with zero for nitems. But for mutable
types that do all their initialization in their tp_init() slot,
this may be just the ticket.
Preparing a type for subtyping
The idea behind subtyping is very similar to that of single
inheritance in C++. A base type is described by a structure
declaration (similar to the C++ class declaration) plus a type
object (similar to the C++ vtable). A derived type can extend the
structure (but must leave the names, order and type of the members
of the base structure unchanged) and can override certain slots in
the type object, leaving others the same. (Unlike C++ vtables,
all Python type objects have the same memory layout.)
The base type must do the following:
Add the flag value Py_TPFLAGS_BASETYPE to tp_flags.
Declare and use tp_new(), tp_alloc() and optional tp_init()
slots.
Declare and use tp_dealloc() and tp_free().
Export its object structure declaration.
Export a subtyping-aware type-checking macro.
The requirements and signatures for tp_new(), tp_alloc() and
tp_init() have already been discussed above: tp_alloc() should
allocate the memory and initialize it to mostly zeros; tp_new()
should call the tp_alloc() slot and then proceed to do the
minimally required initialization; tp_init() should be used for
more extensive initialization of mutable objects.
It should come as no surprise that there are similar conventions
at the end of an object’s lifetime. The slots involved are
tp_dealloc() (familiar to all who have ever implemented a Python
extension type) and tp_free(), the new kid on the block. (The
names aren’t quite symmetric; tp_free() corresponds to tp_alloc(),
which is fine, but tp_dealloc() corresponds to tp_new(). Maybe
the tp_dealloc slot should be renamed?)
The tp_free() slot should be used to free the memory and
unregister the object with the garbage collection subsystem, and
can be overridden by a derived class; tp_dealloc() should
deinitialize the object (usually by calling Py_XDECREF() for
various sub-objects) and then call tp_free() to deallocate the
memory. The signature for tp_dealloc() is the same as it always
was:
void tp_dealloc(PyObject *object)
The signature for tp_free() is the same:
void tp_free(PyObject *object)
(In a previous version of this PEP, there was also a role reserved
for the tp_clear() slot. This turned out to be a bad idea.)
To be usefully subtyped in C, a type must export the structure
declaration for its instances through a header file, as it is
needed to derive a subtype. The type object for the base type
must also be exported.
If the base type has a type-checking macro (like PyDict_Check()),
this macro should be made to recognize subtypes. This can be done
by using the new PyObject_TypeCheck(object, type) macro, which
calls a function that follows the base class links.
The PyObject_TypeCheck() macro contains a slight optimization: it
first compares object->ob_type directly to the type argument, and
if this is a match, bypasses the function call. This should make
it fast enough for most situations.
Note that this change in the type-checking macro means that C
functions that require an instance of the base type may be invoked
with instances of the derived type. Before enabling subtyping of
a particular type, its code should be checked to make sure that
this won’t break anything. It has proved useful in the prototype
to add another type-checking macro for the built-in Python object
types, to check for exact type match too (for example,
PyDict_Check(x) is true if x is an instance of dictionary or of a
dictionary subclass, while PyDict_CheckExact(x) is true only if x
is a dictionary).
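At the Python level the distinction corresponds roughly to
isinstance() versus an exact type test; a small sketch:

class MyDict(dict):
    pass

d = MyDict()
assert isinstance(d, dict)           # like PyDict_Check(): true for subclasses too
assert type(d) is not dict           # like PyDict_CheckExact(): exact matches only
assert type({}) is dict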
Creating a subtype of a built-in type in C
The simplest form of subtyping is subtyping in C. It is the
simplest form because we can require the C code to be aware of
some of the problems, and it’s acceptable for C code that doesn’t
follow the rules to dump core. For added simplicity, it is
limited to single inheritance.
Let’s assume we’re deriving from a mutable base type whose
tp_itemsize is zero. The subtype code is not GC-aware, although
it may inherit GC-awareness from the base type (this is
automatic). The base type’s allocation uses the standard heap.
The derived type begins by declaring a type structure which
contains the base type’s structure. For example, here’s the type
structure for a subtype of the built-in list type:
typedef struct {
    PyListObject list;
    int state;
} spamlistobject;
Note that the base type structure member (here PyListObject) must
be the first member of the structure; any following members are
additions. Also note that the base type is not referenced via a
pointer; the actual contents of its structure must be included!
(The goal is for the memory layout of the beginning of the
subtype instance to be the same as that of the base type
instance.)
Next, the derived type must declare a type object and initialize
it. Most of the slots in the type object may be initialized to
zero, which is a signal that the base type slot must be copied
into it. Some slots that must be initialized properly:
The object header must be filled in as usual; the type should
be &PyType_Type.
The tp_basicsize slot must be set to the size of the subtype
instance struct (in the above example: sizeof(spamlistobject)).
The tp_base slot must be set to the address of the base type’s
type object.
If the derived type defines any pointer members, the
tp_dealloc slot function requires special attention, see
below; otherwise, it can be set to zero, to inherit the base
type’s deallocation function.
The tp_flags slot must be set to the usual Py_TPFLAGS_DEFAULT
value.
The tp_name slot must be set; it is recommended to set tp_doc
as well (these are not inherited).
If the subtype defines no additional structure members (it only
defines new behavior, no new data), the tp_basicsize and the
tp_dealloc slots may be left set to zero.
The subtype’s tp_dealloc slot deserves special attention. If the
derived type defines no additional pointer members that need to be
DECREF’ed or freed when the object is deallocated, it can be set
to zero. Otherwise, the subtype’s tp_dealloc() function must call
Py_XDECREF() for any PyObject * members and the correct memory
freeing function for any other pointers it owns, and then call the
base class’s tp_dealloc() slot. This call has to be made via the
base type’s type structure, for example, when deriving from the
standard list type:
PyList_Type.tp_dealloc(self);
If the subtype wants to use a different allocation heap than the
base type, the subtype must override both the tp_alloc() and the
tp_free() slots. These will be called by the base class’s
tp_new() and tp_dealloc() slots, respectively.
To complete the initialization of the type, PyType_InitDict() must
be called. This replaces slots initialized to zero in the subtype
with the value of the corresponding base type slots. (It also
fills in tp_dict, the type’s dictionary, and does various other
initializations necessary for type objects.)
A subtype is not usable until PyType_InitDict() is called for it;
this is best done during module initialization, assuming the
subtype belongs to a module. An alternative for subtypes added to
the Python core (which don’t live in a particular module) would be
to initialize the subtype in their constructor function. It is
allowed to call PyType_InitDict() more than once; the second and
further calls have no effect. To avoid unnecessary calls, a test
for tp_dict==NULL can be made.
(During initialization of the Python interpreter, some types are
actually used before they are initialized. As long as the slots
that are actually needed are initialized, especially tp_dealloc,
this works, but it is fragile and not recommended as a general
practice.)
To create a subtype instance, the subtype’s tp_new() slot is
called. This should first call the base type’s tp_new() slot and
then initialize the subtype’s additional data members. To further
initialize the instance, the tp_init() slot is typically called.
Note that the tp_new() slot should not call the tp_init() slot;
this is up to tp_new()’s caller (typically a factory function).
There are circumstances where it is appropriate not to call
tp_init().
If a subtype defines a tp_init() slot, the tp_init() slot should
normally first call the base type’s tp_init() slot.
(XXX There should be a paragraph or two about argument passing
here.)
Subtyping in Python
The next step is to allow subtyping of selected built-in types
through a class statement in Python. Limiting ourselves to single
inheritance for now, here is what happens for a simple class
statement:
class C(B):
    var1 = 1
    def method1(self): pass
    # etc.
The body of the class statement is executed in a fresh environment
(basically, a new dictionary used as local namespace), and then C
is created. The following explains how C is created.
Assume B is a type object. Since type objects are objects, and
every object has a type, B has a type. Since B is itself a type,
we also call its type its metatype. B’s metatype is accessible
via type(B) or B.__class__ (the latter notation is new for types;
it is introduced in PEP 252). Let’s say this metatype is M (for
Metatype). The class statement will create a new type, C. Since
C will be a type object just like B, we view the creation of C as
an instantiation of the metatype, M. The information that needs
to be provided for the creation of a subclass is:
its name (in this example the string “C”);
its bases (a singleton tuple containing B);
the results of executing the class body, in the form of a
dictionary (for example
{"var1": 1, "method1": <functionmethod1 at ...>, ...}).
The class statement will result in the following call:
C = M("C", (B,), dict)
where dict is the dictionary resulting from execution of the
class body. In other words, the metatype (M) is called.
Note that even though the example has only one base, we still pass
in a (singleton) sequence of bases; this makes the interface
uniform with the multiple-inheritance case.
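Once types and classes are unified, the call can also be written out
by hand; here is a sketch using the default metatype for new-style
bases (spelled type), with an invented class variable and method:

class B(object):
    pass

def method1(self):
    return self.var1

C = type("C", (B,), {"var1": 1, "method1": method1})

c = C()
assert c.method1() == 1
assert isinstance(c, B)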
In current Python, this is called the “Don Beaudry hook” after its
inventor; it is an exceptional case that is only invoked when a
base class is not a regular class. For a regular base class (or
when no base class is specified), current Python calls
PyClass_New(), the C level factory function for classes, directly.
Under the new system this is changed so that Python always
determines a metatype and calls it as given above. When one or
more bases are given, the type of the first base is used as the
metatype; when no base is given, a default metatype is chosen. By
setting the default metatype to PyClass_Type, the metatype of
“classic” classes, the classic behavior of the class statement is
retained. This default can be changed per module by setting the
global variable __metaclass__.
There are two further refinements here. First, a useful feature
is to be able to specify a metatype directly. If the class
suite defines a variable __metaclass__, that is the metatype
to call. (Note that setting __metaclass__ at the module level
only affects class statements without a base class and without an
explicit __metaclass__ declaration; but setting __metaclass__ in a
class suite overrides the default metatype unconditionally.)
Second, with multiple bases, not all bases need to have the same
metatype. This is called a metaclass conflict [1]. Some
metaclass conflicts can be resolved by searching through the set
of bases for a metatype that derives from all other given
metatypes. If such a metatype cannot be found, an exception is
raised and the class statement fails.
This conflict resolution can be implemented by the metatype
constructors: the class statement just calls the metatype of the first
base (or that specified by the __metaclass__ variable), and this
metatype’s constructor looks for the most derived metatype. If
that is itself, it proceeds; otherwise, it calls that metatype’s
constructor. (Ultimate flexibility: another metatype might choose
to require that all bases have the same metatype, or that there’s
only one base class, or whatever.)
(In [1], a new metaclass is automatically derived that is a
subclass of all given metaclasses. But since it is questionable
in Python how conflicting method definitions of the various
metaclasses should be merged, I don’t think this is feasible.
Should the need arise, the user can derive such a metaclass
manually and specify it using the __metaclass__ variable. It is
also possible to have a new metaclass that does this.)
Note that calling M requires that M itself has a type: the
meta-metatype. And the meta-metatype has a type, the
meta-meta-metatype. And so on. This is normally cut short at
some level by making a metatype be its own metatype. This is
indeed what happens in Python: the ob_type reference in
PyType_Type is set to &PyType_Type. In the absence of third party
metatypes, PyType_Type is the only metatype in the Python
interpreter.
(In a previous version of this PEP, there was one additional
meta-level, and there was a meta-metatype called “turtle”. This
turned out to be unnecessary.)
In any case, the work for creating C is done by M’s tp_new() slot.
It allocates space for an “extended” type structure, containing:
the type object; the auxiliary structures (as_sequence etc.); the
string object containing the type name (to ensure that this object
isn’t deallocated while the type object is still referencing it); and
some auxiliary storage (to be described later). It initializes this
storage to zeros except for a few crucial slots (for example, tp_name
is set to point to the type name) and then sets the tp_base slot to
point to B. Then PyType_InitDict() is called to inherit B’s slots.
Finally, C’s tp_dict slot is updated with the contents of the
namespace dictionary (the third argument to the call to M).
Multiple inheritance
The Python class statement supports multiple inheritance, and we
will also support multiple inheritance involving built-in types.
However, there are some restrictions. The C runtime architecture
doesn’t make it feasible to have a meaningful subtype of two
different built-in types except in a few degenerate cases.
Changing the C runtime to support fully general multiple
inheritance would be too much of an upheaval of the code base.
The main problem with multiple inheritance from different built-in
types stems from the fact that the C implementation of built-in
types accesses structure members directly; the C compiler
generates an offset relative to the object pointer and that’s
that. For example, the list and dictionary type structures each
declare a number of different but overlapping structure members.
A C function accessing an object expecting a list won’t work when
passed a dictionary, and vice versa, and there’s not much we could
do about this without rewriting all code that accesses lists and
dictionaries. This would be too much work, so we won’t do this.
The problem with multiple inheritance is caused by conflicting
structure member allocations. Classes defined in Python normally
don’t store their instance variables in structure members: they
are stored in an instance dictionary. This is the key to a
partial solution. Suppose we have the following two classes:
class A(dictionary):
    def foo(self): pass

class B(dictionary):
    def bar(self): pass

class C(A, B): pass
(Here, ‘dictionary’ is the type of built-in dictionary objects,
a.k.a. type({}) or {}.__class__ or types.DictType.) If we look at
the structure layout, we find that an A instance has the layout
of a dictionary followed by the __dict__ pointer, and a B instance
has the same layout; since there are no structure member layout
conflicts, this is okay.
Here’s another example:
class X(object):
    def foo(self): pass

class Y(dictionary):
    def bar(self): pass

class Z(X, Y): pass
(Here, ‘object’ is the base for all built-in types; its structure
layout only contains the ob_refcnt and ob_type members.) This
example is more complicated, because the __dict__ pointer for X
instances has a different offset than that for Y instances. Where
is the __dict__ pointer for Z instances? The answer is that the
offset for the __dict__ pointer is not hardcoded, it is stored in
the type object.
Suppose on a particular machine an ‘object’ structure is 8 bytes
long, and a ‘dictionary’ struct is 60 bytes, and an object pointer
is 4 bytes. Then an X structure is 12 bytes (an object structure
followed by a __dict__ pointer), and a Y structure is 64 bytes (a
dictionary structure followed by a __dict__ pointer). The Z
structure has the same layout as the Y structure in this example.
Each type object (X, Y and Z) has a “__dict__ offset” which is
used to find the __dict__ pointer. Thus, the recipe for looking
up an instance variable is:
get the type of the instance
get the __dict__ offset from the type object
add the __dict__ offset to the instance pointer
look in the resulting address to find a dictionary reference
look up the instance variable name in that dictionary
Of course, this recipe can only be implemented in C, and I have
left out some details. But this allows us to use multiple
inheritance patterns similar to the ones we can use with classic
classes.
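For illustration, the recipe can be sketched in Python; this is not how
the interpreter actually looks up attributes. It assumes that the offset
is exposed as the read-only type attribute __dictoffset__ (mentioned in
the to-do list below) and that id() returns the object's address, as in
CPython; the memory_read_object() helper is hypothetical, standing in
for the pointer dereference that only C code can really perform:
# A structural sketch of the five-step recipe above; memory_read_object
# is a hypothetical stand-in for dereferencing a PyObject* in C.
def lookup_instance_var(obj, name, memory_read_object):
    tp = type(obj)                            # 1. get the type of the instance
    offset = tp.__dictoffset__                # 2. get the __dict__ offset
    addr = id(obj) + offset                   # 3. add it to the instance pointer
    instance_dict = memory_read_object(addr)  # 4. read the dictionary reference
    return instance_dict[name]                # 5. look up the name there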
XXX I should write up the complete algorithm here to determine
base class compatibility, but I can’t be bothered right now. Look
at best_base() in typeobject.c in the implementation mentioned
below.
MRO: Method resolution order (the lookup rule)
With multiple inheritance comes the question of method resolution
order: the order in which a class or type and its bases are
searched looking for a method of a given name.
In classic Python, the rule is given by the following recursive
function, also known as the left-to-right depth-first rule:
def classic_lookup(cls, name):
    if cls.__dict__.has_key(name):
        return cls.__dict__[name]
    for base in cls.__bases__:
        try:
            return classic_lookup(base, name)
        except AttributeError:
            pass
    raise AttributeError, name
The problem with this becomes apparent when we consider a “diamond
diagram”:
          class A:
            ^ ^  def save(self): ...
           /   \
          /     \
         /       \
        /         \
      class B   class C:
        ^          ^  def save(self): ...
         \        /
          \      /
           \    /
            \  /
          class D
Arrows point from a subtype to its base type(s). This particular
diagram means B and C derive from A, and D derives from B and C
(and hence also, indirectly, from A).
Assume that C overrides the method save(), which is defined in the
base A. (C.save() probably calls A.save() and then saves some of
its own state.) B and D don’t override save(). When we invoke
save() on a D instance, which method is called? According to the
classic lookup rule, A.save() is called, ignoring C.save()!
This is not good. It probably breaks C (its state doesn’t get
saved), defeating the whole purpose of inheriting from C in the
first place.
Why was this not a problem in classic Python? Diamond diagrams
are rarely found in classic Python class hierarchies. Most class
hierarchies use single inheritance, and multiple inheritance is
usually confined to mix-in classes. In fact, the problem shown
here is probably the reason why multiple inheritance is unpopular
in classic Python.
Why will this be a problem in the new system? The ‘object’ type
at the top of the type hierarchy defines a number of methods that
can usefully be extended by subtypes, for example __getattr__().
(Aside: in classic Python, the __getattr__() method is not really
the implementation for the get-attribute operation; it is a hook
that only gets invoked when an attribute cannot be found by normal
means. This has often been cited as a shortcoming – some class
designs have a legitimate need for a __getattr__() method that
gets called for all attribute references. But then of course
this method has to be able to invoke the default implementation
directly. The most natural way is to make the default
implementation available as object.__getattr__(self, name).)
Thus, a classic class hierarchy like this:
      class B   class C:
        ^          ^  def __getattr__(self, name): ...
         \        /
          \      /
           \    /
            \  /
          class D
will change into a diamond diagram under the new system:
          object:
            ^ ^  __getattr__()
           /   \
          /     \
         /       \
        /         \
      class B   class C:
        ^          ^  def __getattr__(self, name): ...
         \        /
          \      /
           \    /
            \  /
          class D
and while in the original diagram C.__getattr__() is invoked,
under the new system with the classic lookup rule,
object.__getattr__() would be invoked!
Fortunately, there’s a lookup rule that’s better. It’s a bit
difficult to explain, but it does the right thing in the diamond
diagram, and it is the same as the classic lookup rule when there
are no diamonds in the inheritance graph (when it is a tree).
The new lookup rule constructs a list of all classes in the
inheritance diagram in the order in which they will be searched.
This construction is done at class definition time to save time.
To explain the new lookup rule, let’s first consider what such a
list would look like for the classic lookup rule. Note that in
the presence of diamonds the classic lookup visits some classes
multiple times. For example, in the ABCD diamond diagram above,
the classic lookup rule visits the classes in this order:
D, B, A, C, A
Note how A occurs twice in the list. The second occurrence is
redundant, since anything that could be found there would already
have been found when searching the first occurrence.
We use this observation to explain our new lookup rule. Using the
classic lookup rule, construct the list of classes that would be
searched, including duplicates. Now for each class that occurs in
the list multiple times, remove all occurrences except for the
last. The resulting list contains each ancestor class exactly
once (including the most derived class, D in the example).
Searching for methods in this order will do the right thing for
the diamond diagram. Because of the way the list is constructed,
it does not change the search order in situations where no diamond
is involved.
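For illustration only, here is a small Python sketch of that
construction; it is not the interpreter's implementation (and Python 2.3
later replaced this rule with the C3 linearization):
def classic_order(cls):
    # the classic left-to-right depth-first traversal, duplicates included
    result = [cls]
    for base in cls.__bases__:
        result.extend(classic_order(base))
    return result

def mro_2_2(cls):
    order = classic_order(cls)
    mro = []
    for i in range(len(order)):
        # keep only the last occurrence of each class
        if order[i] not in order[i + 1:]:
            mro.append(order[i])
    return mro

# For the ABCD diamond above: classic_order(D) == [D, B, A, C, A],
# and mro_2_2(D) == [D, B, C, A].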
Isn’t this backwards incompatible? Won’t it break existing code?
It would, if we changed the method resolution order for all
classes. However, in Python 2.2, the new lookup rule will only be
applied to types derived from built-in types, which is a new
feature. Class statements without a base class create “classic
classes”, and so do class statements whose base classes are
themselves classic classes. For classic classes the classic
lookup rule will be used. (To experiment with the new lookup rule
for classic classes, you will be able to specify a different
metaclass explicitly.) We’ll also provide a tool that analyzes a
class hierarchy looking for methods that would be affected by a
change in method resolution order.
XXX Another way to explain the motivation for the new MRO, due to
Damian Conway: you never use the method defined in a base class if
it is defined in a derived class that you haven’t explored yet
(using the old search order).
XXX To be done
Additional topics to be discussed in this PEP:
backwards compatibility issues!!!
class methods and static methods
cooperative methods and super()
mapping between type object slots (tp_foo) and special methods
(__foo__) (actually, this may belong in PEP 252)
built-in names for built-in types (object, int, str, list etc.)
__dict__ and __dictoffset__
__slots__
the HEAPTYPE flag bit
GC support
API docs for all the new functions
how to use __new__
writing metaclasses (using mro() etc.)
high level user overview
open issues
do we need __del__?
assignment to __dict__, __bases__
inconsistent naming
(e.g. tp_dealloc/tp_new/tp_init/tp_alloc/tp_free)
add builtin alias ‘dict’ for ‘dictionary’?
when subclasses of dict/list etc. are passed to system
functions, the __getitem__ overrides (etc.) aren’t always
used
Implementation
A prototype implementation of this PEP (and for PEP 252) is
available from CVS, and in the series of Python 2.2 alpha and beta
releases. For some examples of the features described here, see
the file Lib/test/test_descr.py and the extension module
Modules/xxsubtype.c.
References
[1] (1, 2)
“Putting Metaclasses to Work”, by Ira R. Forman and Scott
H. Danforth, Addison-Wesley 1999.
(http://www.aw.com/product/0,2627,0201433052,00.html)
Copyright
This document has been placed in the public domain.
| Final | PEP 253 – Subtyping Built-in Types | Standards Track | This PEP proposes additions to the type object API that will allow
the creation of subtypes of built-in types, in C and in Python. |
PEP 254 – Making Classes Look More Like Types
Author:
Guido van Rossum <guido at python.org>
Status:
Rejected
Type:
Standards Track
Created:
18-Jun-2001
Python-Version:
2.2
Post-History:
Table of Contents
Abstract
Status
Copyright
Abstract
This PEP has not been written yet. Watch this space!
Status
This PEP was a stub entry and eventually abandoned without having
been filled-out. Substantially most of the intended functionality
was implemented in Py2.2 with new-style types and classes.
Copyright
This document has been placed in the public domain.
| Rejected | PEP 254 – Making Classes Look More Like Types | Standards Track | This PEP has not been written yet. Watch this space! |
PEP 255 – Simple Generators
Author:
Neil Schemenauer <nas at arctrix.com>,
Tim Peters <tim.peters at gmail.com>,
Magnus Lie Hetland <magnus at hetland.org>
Status:
Final
Type:
Standards Track
Requires:
234
Created:
18-May-2001
Python-Version:
2.2
Post-History:
14-Jun-2001, 23-Jun-2001
Table of Contents
Abstract
Motivation
Specification: Yield
Specification: Return
Specification: Generators and Exception Propagation
Specification: Try/Except/Finally
Example
Q & A
Why not a new keyword instead of reusing def?
Why a new keyword for yield? Why not a builtin function instead?
Then why not some other special syntax without a new keyword?
Why allow return at all? Why not force termination to be spelled raise StopIteration?
Then why not allow an expression on return too?
BDFL Pronouncements
Issue
Con
Pro
BDFL
Reference Implementation
Footnotes and References
Copyright
Abstract
This PEP introduces the concept of generators to Python, as well as a new
statement used in conjunction with them, the yield statement.
Motivation
When a producer function has a hard enough job that it requires maintaining
state between values produced, most programming languages offer no pleasant and
efficient solution beyond adding a callback function to the producer’s argument
list, to be called with each value produced.
For example, tokenize.py in the standard library takes this approach: the
caller must pass a tokeneater function to tokenize(), called whenever
tokenize() finds the next token. This allows tokenize to be coded in a
natural way, but programs calling tokenize are typically convoluted by the need
to remember between callbacks which token(s) were seen last. The tokeneater
function in tabnanny.py is a good example of that, maintaining a state
machine in global variables, to remember across callbacks what it has already
seen and what it hopes to see next. This was difficult to get working
correctly, and is still difficult for people to understand. Unfortunately,
that’s typical of this approach.
An alternative would have been for tokenize to produce an entire parse of the
Python program at once, in a large list. Then tokenize clients could be
written in a natural way, using local variables and local control flow (such as
loops and nested if statements) to keep track of their state. But this isn’t
practical: programs can be very large, so no a priori bound can be placed on
the memory needed to materialize the whole parse; and some tokenize clients
only want to see whether something specific appears early in the program (e.g.,
a future statement, or, as is done in IDLE, just the first indented statement),
and then parsing the whole program first is a severe waste of time.
Another alternative would be to make tokenize an iterator,
delivering the
next token whenever its .next() method is invoked. This is pleasant for the
caller in the same way a large list of results would be, but without the memory
and “what if I want to get out early?” drawbacks. However, this shifts the
burden on tokenize to remember its state between .next() invocations, and
the reader need only glance at tokenize.tokenize_loop() to realize what a
horrid chore that would be. Or picture a recursive algorithm for producing the
nodes of a general tree structure: to cast that into an iterator framework
requires removing the recursion manually and maintaining the state of the
traversal by hand.
A fourth option is to run the producer and consumer in separate threads. This
allows both to maintain their states in natural ways, and so is pleasant for
both. Indeed, Demo/threads/Generator.py in the Python source distribution
provides a usable synchronized-communication class for doing that in a general
way. This doesn’t work on platforms without threads, though, and is very slow
on platforms that do (compared to what is achievable without threads).
A final option is to use the Stackless [1] (PEP 219) variant implementation of Python
instead, which supports lightweight coroutines. This has much the same
programmatic benefits as the thread option, but is much more efficient.
However, Stackless is a controversial rethinking of the Python core, and it may
not be possible for Jython to implement the same semantics. This PEP isn’t the
place to debate that, so suffice it to say here that generators provide a
useful subset of Stackless functionality in a way that fits easily into the
current CPython implementation, and is believed to be relatively
straightforward for other Python implementations.
That exhausts the current alternatives. Some other high-level languages
provide pleasant solutions, notably iterators in Sather [2], which were
inspired by iterators in CLU; and generators in Icon [3], a novel language
where every expression is a generator. There are differences among these,
but the basic idea is the same: provide a kind of function that can return an
intermediate result (“the next value”) to its caller, but maintaining the
function’s local state so that the function can be resumed again right where it
left off. A very simple example:
def fib():
    a, b = 0, 1
    while 1:
        yield b
        a, b = b, a+b
When fib() is first invoked, it sets a to 0 and b to 1, then yields b
back to its caller. The caller sees 1. When fib is resumed, from its
point of view the yield statement is really the same as, say, a print
statement: fib continues after the yield with all local state intact. a
and b then become 1 and 1, and fib loops back to the yield, yielding
1 to its invoker. And so on. From fib’s point of view it’s just
delivering a sequence of results, as if via callback. But from its caller’s
point of view, the fib invocation is an iterable object that can be resumed
at will. As in the thread approach, this allows both sides to be coded in the
most natural ways; but unlike the thread approach, this can be done efficiently
and on all platforms. Indeed, resuming a generator should be no more expensive
than a function call.
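A short usage sketch (Python 2.2 spelling, where a generator-iterator is
advanced by calling its .next() method):
g = fib()
for i in range(6):
    print g.next(),     # prints: 1 1 2 3 5 8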
The same kind of approach applies to many producer/consumer functions. For
example, tokenize.py could yield the next token instead of invoking a
callback function with it as argument, and tokenize clients could iterate over
the tokens in a natural way: a Python generator is a kind of Python
iterator, but of an especially powerful kind.
Specification: Yield
A new statement is introduced:
yield_stmt: "yield" expression_list
yield is a new keyword, so a future statement (PEP 236) is needed to phase
this in: in the initial release, a module desiring to use generators must
include the line:
from __future__ import generators
near the top (see PEP 236 for details). Modules using the identifier
yield without a future statement will trigger warnings. In the
following release, yield will be a language keyword and the future
statement will no longer be needed.
The yield statement may only be used inside functions. A function that
contains a yield statement is called a generator function. A generator
function is an ordinary function object in all respects, but has the new
CO_GENERATOR flag set in the code object’s co_flags member.
When a generator function is called, the actual arguments are bound to
function-local formal argument names in the usual way, but no code in the body
of the function is executed. Instead a generator-iterator object is returned;
this conforms to the iterator protocol, so in particular can be used in
for-loops in a natural way. Note that when the intent is clear from context,
the unqualified name “generator” may be used to refer either to a
generator-function or a generator-iterator.
Each time the .next() method of a generator-iterator is invoked, the code
in the body of the generator-function is executed until a yield or
return statement (see below) is encountered, or until the end of the body
is reached.
If a yield statement is encountered, the state of the function is frozen,
and the value of expression_list is returned to .next()’s caller. By
“frozen” we mean that all local state is retained, including the current
bindings of local variables, the instruction pointer, and the internal
evaluation stack: enough information is saved so that the next time
.next() is invoked, the function can proceed exactly as if the yield
statement were just another external call.
Restriction: A yield statement is not allowed in the try clause of a
try/finally construct. The difficulty is that there’s no guarantee the
generator will ever be resumed, hence no guarantee that the finally block will
ever get executed; that’s too much a violation of finally’s purpose to bear.
Restriction: A generator cannot be resumed while it is actively running:
>>> def g():
...     i = me.next()
...     yield i
>>> me = g()
>>> me.next()
Traceback (most recent call last):
  ...
  File "<string>", line 2, in g
ValueError: generator already executing
Specification: Return
A generator function can also contain return statements of the form:
return
Note that an expression_list is not allowed on return statements in the body
of a generator (although, of course, they may appear in the bodies of
non-generator functions nested within the generator).
When a return statement is encountered, control proceeds as in any function
return, executing the appropriate finally clauses (if any exist). Then a
StopIteration exception is raised, signalling that the iterator is
exhausted. A StopIteration exception is also raised if control flows off
the end of the generator without an explicit return.
Note that return means “I’m done, and have nothing interesting to return”, for
both generator functions and non-generator functions.
Note that return isn’t always equivalent to raising StopIteration: the
difference lies in how enclosing try/except constructs are treated. For
example:
>>> def f1():
...     try:
...         return
...     except:
...         yield 1
>>> print list(f1())
[]
because, as in any function, return simply exits, but:
>>> def f2():
...     try:
...         raise StopIteration
...     except:
...         yield 42
>>> print list(f2())
[42]
because StopIteration is captured by a bare except, as is any
exception.
Specification: Generators and Exception Propagation
If an unhandled exception – including, but not limited to, StopIteration –
is raised by, or passes through, a generator function, then the exception is
passed on to the caller in the usual way, and subsequent attempts to resume the
generator function raise StopIteration. In other words, an unhandled
exception terminates a generator’s useful life.
Example (not idiomatic but to illustrate the point):
>>> def f():
...     return 1/0
>>> def g():
...     yield f()  # the zero division exception propagates
...     yield 42   # and we'll never get here
>>> k = g()
>>> k.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 2, in g
  File "<stdin>", line 2, in f
ZeroDivisionError: integer division or modulo by zero
>>> k.next()  # and the generator cannot be resumed
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
StopIteration
>>>
Specification: Try/Except/Finally
As noted earlier, yield is not allowed in the try clause of a
try/finally construct. A consequence is that generators should allocate
critical resources with great care. There is no restriction on yield
otherwise appearing in finally clauses, except clauses, or in the
try clause of a try/except construct:
>>> def f():
...     try:
...         yield 1
...         try:
...             yield 2
...             1/0
...             yield 3  # never get here
...         except ZeroDivisionError:
...             yield 4
...             yield 5
...             raise
...         except:
...             yield 6
...         yield 7  # the "raise" above stops this
...     except:
...         yield 8
...     yield 9
...     try:
...         x = 12
...     finally:
...         yield 10
...     yield 11
>>> print list(f())
[1, 2, 4, 5, 8, 9, 10, 11]
>>>
Example
# A binary tree class.
class Tree:

    def __init__(self, label, left=None, right=None):
        self.label = label
        self.left = left
        self.right = right

    def __repr__(self, level=0, indent="    "):
        s = level*indent + `self.label`
        if self.left:
            s = s + "\n" + self.left.__repr__(level+1, indent)
        if self.right:
            s = s + "\n" + self.right.__repr__(level+1, indent)
        return s

    def __iter__(self):
        return inorder(self)

# Create a Tree from a list.
def tree(list):
    n = len(list)
    if n == 0:
        return []
    i = n / 2
    return Tree(list[i], tree(list[:i]), tree(list[i+1:]))

# A recursive generator that generates Tree labels in in-order.
def inorder(t):
    if t:
        for x in inorder(t.left):
            yield x
        yield t.label
        for x in inorder(t.right):
            yield x

# Show it off: create a tree.
t = tree("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
# Print the nodes of the tree in in-order.
for x in t:
    print x,
print

# A non-recursive generator.
def inorder(node):
    stack = []
    while node:
        while node.left:
            stack.append(node)
            node = node.left
        yield node.label
        while not node.right:
            try:
                node = stack.pop()
            except IndexError:
                return
            yield node.label
        node = node.right

# Exercise the non-recursive generator.
for x in t:
    print x,
print
Both output blocks display:
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Q & A
Why not a new keyword instead of reusing def?
See BDFL Pronouncements section below.
Why a new keyword for yield? Why not a builtin function instead?
Control flow is much better expressed via keyword in Python, and yield is a
control construct. It’s also believed that efficient implementation in Jython
requires that the compiler be able to determine potential suspension points at
compile-time, and a new keyword makes that easy. The CPython reference
implementation also exploits it heavily, to detect which functions are
generator-functions (although a new keyword in place of def would solve
that for CPython – but people asking the “why a new keyword?” question don’t
want any new keyword).
Then why not some other special syntax without a new keyword?
For example, one of these instead of yield 3:
return 3 and continue
return and continue 3
return generating 3
continue return 3
return >> , 3
from generator return 3
return >> 3
return << 3
>> 3
<< 3
* 3
Did I miss one <wink>? Out of hundreds of messages, I counted three
suggesting such an alternative, and extracted the above from them. It would be
nice not to need a new keyword, but nicer to make yield very clear – I
don’t want to have to deduce that a yield is occurring from making sense of a
previously senseless sequence of keywords or operators. Still, if this
attracts enough interest, proponents should settle on a single consensus
suggestion, and Guido will Pronounce on it.
Why allow return at all? Why not force termination to be spelled raise StopIteration?
The mechanics of StopIteration are low-level details, much like the
mechanics of IndexError in Python 2.1: the implementation needs to do
something well-defined under the covers, and Python exposes these mechanisms
for advanced users. That’s not an argument for forcing everyone to work at
that level, though. return means “I’m done” in any kind of function, and
that’s easy to explain and to use. Note that return isn’t always equivalent
to raise StopIteration in a try/except construct, either (see the
“Specification: Return” section).
Then why not allow an expression on return too?
Perhaps we will someday. In Icon, return expr means both “I’m done”, and
“but I have one final useful value to return too, and this is it”. At the
start, and in the absence of compelling uses for return expr, it’s simply
cleaner to use yield exclusively for delivering values.
BDFL Pronouncements
Issue
Introduce another new keyword (say, gen or generator) in place
of def, or otherwise alter the syntax, to distinguish generator-functions
from non-generator functions.
Con
In practice (how you think about them), generators are functions, but
with the twist that they’re resumable. The mechanics of how they’re set up
is a comparatively minor technical issue, and introducing a new keyword would
unhelpfully overemphasize the mechanics of how generators get started (a vital
but tiny part of a generator’s life).
Pro
In reality (how you think about them), generator-functions are actually
factory functions that produce generator-iterators as if by magic. In this
respect they’re radically different from non-generator functions, acting more
like a constructor than a function, so reusing def is at best confusing.
A yield statement buried in the body is not enough warning that the
semantics are so different.
BDFL
def it stays. No argument on either side is totally convincing, so I
have consulted my language designer’s intuition. It tells me that the syntax
proposed in the PEP is exactly right - not too hot, not too cold. But, like
the Oracle at Delphi in Greek mythology, it doesn’t tell me why, so I don’t
have a rebuttal for the arguments against the PEP syntax. The best I can come
up with (apart from agreeing with the rebuttals … already made) is “FUD”.
If this had been part of the language from day one, I very much doubt it would
have made Andrew Kuchling’s “Python Warts” page.
Reference Implementation
The current implementation, in a preliminary state (no docs, but well tested
and solid), is part of Python’s CVS development tree [5]. Using this requires
that you build Python from source.
This was derived from an earlier patch by Neil Schemenauer [4].
Footnotes and References
[1]
http://www.stackless.com/
[2]
“Iteration Abstraction in Sather”
Murer, Omohundro, Stoutamire and Szyperski
http://www.icsi.berkeley.edu/~sather/Publications/toplas.html
[3]
http://www.cs.arizona.edu/icon/
[4]
http://python.ca/nas/python/generator.diff
[5]
To experiment with this implementation, check out Python from CVS
according to the instructions at http://sf.net/cvs/?group_id=5470
Note that the std test Lib/test/test_generators.py contains many
examples, including all those in this PEP.
Copyright
This document has been placed in the public domain.
| Final | PEP 255 – Simple Generators | Standards Track | This PEP introduces the concept of generators to Python, as well as a new
statement used in conjunction with them, the yield statement. |
PEP 259 – Omit printing newline after newline
Author:
Guido van Rossum <guido at python.org>
Status:
Rejected
Type:
Standards Track
Created:
11-Jun-2001
Python-Version:
2.2
Post-History:
11-Jun-2001
Table of Contents
Abstract
Problem
Proposed Solution
Scope
Risks
Implementation
Rejected
Copyright
Abstract
Currently, the print statement always appends a newline, unless a
trailing comma is used. This means that if we want to print data
that already ends in a newline, we get two newlines, unless
special precautions are taken.
I propose to skip printing the newline when it follows a newline
that came from data.
In order to avoid having to add yet another magic variable to file
objects, I propose to give the existing ‘softspace’ variable an
extra meaning: a negative value will mean “the last data written
ended in a newline so no space or newline is required.”
Problem
When printing data that resembles the lines read from a file using
a simple loop, double-spacing occurs unless special care is taken:
>>> for line in open("/etc/passwd").readlines():
...     print line
...
root:x:0:0:root:/root:/bin/bash

bin:x:1:1:bin:/bin:

daemon:x:2:2:daemon:/sbin:

(etc.)

>>>
While there are easy work-arounds, this is often noticed only
during testing and requires an extra edit-test roundtrip; the
fixed code is uglier and harder to maintain.
Proposed Solution
In the PRINT_ITEM opcode in ceval.c, when a string object is
printed, a check is already made that looks at the last character
of that string. Currently, if that last character is a whitespace
character other than space, the softspace flag is reset to zero;
this suppresses the space between two items if the first item is a
string ending in newline, tab, etc. (but not when it ends in a
space). Otherwise the softspace flag is set to one.
The proposal changes this test slightly so that softspace is set
to:
-1 – if the last object written is a string ending in a
newline
0 – if the last object written is a string ending in a
whitespace character that’s neither space nor newline
1 – in all other cases (including the case when the last
object written is an empty string or not a string)
Then, in the PRINT_NEWLINE opcode, printing of the newline is
suppressed if the value of softspace is negative; in any case the
softspace flag is reset to zero.
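For illustration, the proposed rule can be sketched at the Python level;
the real change is to the PRINT_ITEM and PRINT_NEWLINE opcodes in
ceval.c, and the emulate_print() helper below is purely hypothetical:
def _update_softspace(f, obj):
    # mirrors the proposed PRINT_ITEM check
    if isinstance(obj, str) and obj:
        last = obj[-1]
        if last == "\n":
            f.softspace = -1    # data ended in a newline
        elif last.isspace() and last != " ":
            f.softspace = 0     # ends in tab, formfeed, etc.
        else:
            f.softspace = 1
    else:
        f.softspace = 1         # empty string or non-string

def emulate_print(f, *items):
    for obj in items:
        if getattr(f, "softspace", 0) > 0:
            f.write(" ")
        f.write(str(obj))
        _update_softspace(f, obj)
    # mirrors the proposed PRINT_NEWLINE check
    if getattr(f, "softspace", 0) >= 0:
        f.write("\n")
    f.softspace = 0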
Scope
This only affects printing of 8-bit strings. It doesn’t affect
Unicode, although that could be considered a bug in the Unicode
implementation. It doesn’t affect other objects whose string
representation happens to end in a newline character.
Risks
This change breaks some existing code. For example:
print "Subject: PEP 259\n"
print message_body
In current Python, this produces a blank line separating the
subject from the message body; with the proposed change, the body
begins immediately below the subject. This is not very robust
code anyway; it is better written as:
print "Subject: PEP 259"
print
print message_body
In the test suite, only test_StringIO (which explicitly tests for
this feature) breaks.
Implementation
A patch relative to current CVS is here:
http://sourceforge.net/tracker/index.php?func=detail&aid=432183&group_id=5470&atid=305470
Rejected
The user community unanimously rejected this, so I won’t pursue
this idea any further. Frequently heard arguments against
included:
It is likely to break thousands of CGI scripts.
Enough magic already (also: no more tinkering with ‘print’
please).
Copyright
This document has been placed in the public domain.
| Rejected | PEP 259 – Omit printing newline after newline | Standards Track | Currently, the print statement always appends a newline, unless a
trailing comma is used. This means that if we want to print data
that already ends in a newline, we get two newlines, unless
special precautions are taken. |
PEP 260 – Simplify xrange()
Author:
Guido van Rossum <guido at python.org>
Status:
Final
Type:
Standards Track
Created:
26-Jun-2001
Python-Version:
2.2
Post-History:
26-Jun-2001
Table of Contents
Abstract
Problem
Proposed Solution
Scope
Risks
Transition
Copyright
Abstract
This PEP proposes to strip the xrange() object from some rarely
used behavior like x[i:j] and x*n.
Problem
The xrange() function has one idiomatic use:
for i in xrange(...): ...
However, the xrange() object has a bunch of rarely used behaviors
that attempt to make it more sequence-like. These are so rarely
used that historically they have had serious bugs (e.g. off-by-one
errors) that went undetected for several releases.
I claim that it’s better to drop these unused features. This will
simplify the implementation, testing, and documentation, and
reduce maintenance and code size.
Proposed Solution
I propose to strip the xrange() object to the bare minimum. The
only retained sequence behaviors are x[i], len(x), and repr(x).
In particular, these behaviors will be dropped:
x[i:j] (slicing)
x*n, n*x (sequence-repeat)
cmp(x1, x2) (comparisons)
i in x (containment test)
x.tolist() method
x.start, x.stop, x.step attributes
I also propose to change the signature of the PyRange_New() C API
to remove the 4th argument (the repetition count).
By implementing a custom iterator type, we could speed up the
common use, but this is optional (the default sequence iterator
does just fine).
Scope
This PEP affects the xrange() built-in function and the
PyRange_New() C API.
Risks
Somebody’s code could be relying on the extended code, and this
code would break. However, given that historically bugs in the
extended code have gone undetected for so long, it’s unlikely that
much code is affected.
Transition
For backwards compatibility, the existing functionality will still
be present in Python 2.2, but will trigger a warning. A year
after Python 2.2 final is released (probably in 2.4) the
functionality will be ripped out.
Copyright
This document has been placed in the public domain.
| Final | PEP 260 – Simplify xrange() | Standards Track | This PEP proposes to strip the xrange() object from some rarely
used behavior like x[i:j] and x*n. |
PEP 261 – Support for “wide” Unicode characters
Author:
Paul Prescod <paul at prescod.net>
Status:
Final
Type:
Standards Track
Created:
27-Jun-2001
Python-Version:
2.2
Post-History:
27-Jun-2001
Table of Contents
Abstract
Glossary
Proposed Solution
Implementation
Notes
Rejected Suggestions
References
Copyright
Abstract
Python 2.1 unicode characters can have ordinals only up to 2**16 - 1.
This range corresponds to a range in Unicode known as the Basic
Multilingual Plane. There are now characters in Unicode that live
on other “planes”. The largest addressable character in Unicode
has the ordinal 17 * 2**16 - 1 (0x10ffff). For readability, we
will call this TOPCHAR and call characters in this range “wide
characters”.
Glossary
Character: Used by itself, means the addressable units of a Python
Unicode string.
Code point: A code point is an integer between 0 and TOPCHAR.
If you imagine Unicode as a mapping from integers to
characters, each integer is a code point. But the
integers between 0 and TOPCHAR that do not map to
characters are also code points. Some will someday
be used for characters. Some are guaranteed never
to be used for characters.
Codec: A set of functions for translating between physical
encodings (e.g. on disk or coming in from a network)
into logical Python objects.
Encoding: Mechanism for representing abstract characters in terms of
physical bits and bytes. Encodings allow us to store
Unicode characters on disk and transmit them over networks
in a manner that is compatible with other Unicode software.
Surrogate pair: Two physical characters that represent a single logical
character. Part of a convention for representing 32-bit
code points in terms of two 16-bit code points.
Unicode string: A Python type representing a sequence of code points with
“string semantics” (e.g. case conversions, regular
expression compatibility, etc.) Constructed with the
unicode() function.
Proposed Solution
One solution would be to merely increase the maximum ordinal
to a larger value. Unfortunately the only straightforward
implementation of this idea is to use 4 bytes per character.
This has the effect of doubling the size of most Unicode
strings. In order to avoid imposing this cost on every
user, Python 2.2 will allow the 4-byte implementation as a
build-time option. Users can choose whether they care about
wide characters or prefer to preserve memory.
The 4-byte option is called “wide Py_UNICODE”. The 2-byte option
is called “narrow Py_UNICODE”.
Most things will behave identically in the wide and narrow worlds.
unichr(i) for 0 <= i < 2**16 (0x10000) always returns a
length-one string.
unichr(i) for 2**16 <= i <= TOPCHAR will return a
length-one string on wide Python builds. On narrow builds it will
raise ValueError.
ISSUE:
Python currently allows \U literals that cannot be
represented as a single Python character. It generates two
Python characters known as a “surrogate pair”. Should this
be disallowed on future narrow Python builds?
Pro:
Python already allows the construction of a surrogate pair
for a large unicode literal character escape sequence.
This is basically designed as a simple way to construct
“wide characters” even in a narrow Python build. It is also
somewhat logical considering that the Unicode-literal syntax
is basically a short-form way of invoking the unicode-escape
codec.
Con:
Surrogates could be easily created this way but the user
still needs to be careful about slicing, indexing, printing
etc. Therefore, some have suggested that Unicode
literals should not support surrogates.
ISSUE
Should Python allow the construction of characters that do
not correspond to Unicode code points? Unassigned Unicode
code points should obviously be legal (because they could
be assigned at any time). But code points above TOPCHAR are
guaranteed never to be used by Unicode. Should we allow access
to them anyhow?
Pro:
If a Python user thinks they know what they’re doing why
should we try to prevent them from violating the Unicode
spec? After all, we don’t stop 8-bit strings from
containing non-ASCII characters.
Con:
Codecs and other Unicode-consuming code will have to be
careful of these characters which are disallowed by the
Unicode specification.
ord() is always the inverse of unichr()
There is an integer value in the sys module that describes the
largest ordinal for a character in a Unicode string on the current
interpreter. sys.maxunicode is 2**16-1 (0xffff) on narrow builds
of Python and TOPCHAR on wide builds.
ISSUE:
Should there be distinct constants for accessing
TOPCHAR and the real upper bound for the domain of
unichr (if they differ)? There has also been a
suggestion of sys.unicodewidth which can take the
values 'wide' and 'narrow'.
every Python Unicode character represents exactly one Unicode code
point (i.e. Python Unicode Character = Abstract Unicode character).
codecs will be upgraded to support “wide characters”
(represented directly in UCS-4, and as variable-length sequences
in UTF-8 and UTF-16). This is the main part of the implementation
left to be done.
There is a convention in the Unicode world for encoding a 32-bit
code point in terms of two 16-bit code points. These are known
as “surrogate pairs”. Python’s codecs will adopt this convention
and encode 32-bit code points as surrogate pairs on narrow Python
builds.
ISSUE:
Should there be a way to tell codecs not to generate
surrogates and instead treat wide characters as
errors?
Pro:
I might want to write code that works only with
fixed-width characters and does not have to worry about
surrogates.
Con:
No clear proposal of how to communicate this to codecs.
there are no restrictions on constructing strings that use
code points “reserved for surrogates” improperly. These are
called “isolated surrogates”. The codecs should disallow reading
these from files, but you could construct them using string
literals or unichr().
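For illustration (Python 2-era API), the build-dependent behaviors
described above can be observed as follows:
import sys

if sys.maxunicode == 0xFFFF:
    # narrow build: characters above the BMP can only be spelled as
    # \U literals, which produce a two-character surrogate pair
    print len(u"\U00010000")      # 2
else:
    # wide build: sys.maxunicode is TOPCHAR and unichr() covers it
    print len(unichr(0x10000))    # 1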
Implementation
There is a new define:
#define Py_UNICODE_SIZE 2
To test whether UCS2 or UCS4 is in use, the derived macro
Py_UNICODE_WIDE should be used, which is defined when UCS-4 is in
use.
There is a new configure option:
--enable-unicode=ucs2
configures a narrow Py_UNICODE, and uses
wchar_t if it fits
--enable-unicode=ucs4
configures a wide Py_UNICODE, and uses
wchar_t if it fits
--enable-unicode
same as “=ucs2”
--disable-unicode
entirely remove the Unicode functionality.
It is also proposed that one day --enable-unicode will just
default to the width of your platform’s wchar_t.
Windows builds will be narrow for a while based on the fact that
there have been few requests for wide characters, those requests
are mostly from hard-core programmers with the ability to buy
their own Python and Windows itself is strongly biased towards
16-bit characters.
Notes
This PEP does NOT imply that people using Unicode need to use a
4-byte encoding for their files on disk or sent over the network.
It only allows them to do so. For example, ASCII is still a
legitimate (7-bit) Unicode-encoding.
It has been proposed that there should be a module that handles
surrogates in narrow Python builds for programmers. If someone
wants to implement that, it will be another PEP. It might also be
combined with features that allow other kinds of character-,
word- and line- based indexing.
Rejected Suggestions
More or less the status-quo
We could officially say that Python characters are 16-bit and
require programmers to implement wide characters in their
application logic by combining surrogate pairs. This is a heavy
burden because emulating 32-bit characters is likely to be
very inefficient if it is coded entirely in Python. Plus these
abstracted pseudo-strings would not be legal as input to the
regular expression engine.
“Space-efficient Unicode” type
Another class of solution is to use some efficient storage
internally but present an abstraction of wide characters to
the programmer. Any of these would require a much more complex
implementation than the accepted solution. For instance consider
the impact on the regular expression engine. In theory, we could
move to this implementation in the future without breaking Python
code. A future Python could “emulate” wide Python semantics on
narrow Python. Guido is not willing to undertake the
implementation right now.
Two types
We could introduce a 32-bit Unicode type alongside the 16-bit
type. There is a lot of code that expects there to be only a
single Unicode type.
This PEP represents the least-effort solution. Over the next
several years, 32-bit Unicode characters will become more common
and that may either convince us that we need a more sophisticated
solution or (on the other hand) convince us that simply
mandating wide Unicode characters is an appropriate solution.
Right now the two options on the table are do nothing or do
this.
References
Unicode Glossary: http://www.unicode.org/glossary/
Copyright
This document has been placed in the public domain.
| Final | PEP 261 – Support for “wide” Unicode characters | Standards Track | Python 2.1 unicode characters can have ordinals only up to 2**16 - 1.
This range corresponds to a range in Unicode known as the Basic
Multilingual Plane. There are now characters in Unicode that live
on other “planes”. The largest addressable character in Unicode
has the ordinal 17 * 2**16 - 1 (0x10ffff). For readability, we
will call this TOPCHAR and call characters in this range “wide
characters”. |
PEP 263 – Defining Python Source Code Encodings
Author:
Marc-André Lemburg <mal at lemburg.com>,
Martin von Löwis <martin at v.loewis.de>
Status:
Final
Type:
Standards Track
Created:
06-Jun-2001
Python-Version:
2.3
Post-History:
Table of Contents
Abstract
Problem
Proposed Solution
Defining the Encoding
Examples
Concepts
Implementation
Phases
Scope
References
History
Copyright
Abstract
This PEP proposes to introduce a syntax to declare the encoding of
a Python source file. The encoding information is then used by the
Python parser to interpret the file using the given encoding. Most
notably this enhances the interpretation of Unicode literals in
the source code and makes it possible to write Unicode literals
using e.g. UTF-8 directly in an Unicode aware editor.
Problem
In Python 2.1, Unicode literals can only be written using the
Latin-1 based encoding “unicode-escape”. This makes the
programming environment rather unfriendly to Python users who live
and work in non-Latin-1 locales such as many of the Asian
countries. Programmers can write their 8-bit strings using the
favorite encoding, but are bound to the “unicode-escape” encoding
for Unicode literals.
Proposed Solution
I propose to make the Python source code encoding both visible and
changeable on a per-source file basis by using a special comment
at the top of the file to declare the encoding.
To make Python aware of this encoding declaration a number of
concept changes are necessary with respect to the handling of
Python source code data.
Defining the Encoding
Python will default to ASCII as standard encoding if no other
encoding hints are given.
To define a source code encoding, a magic comment must
be placed into the source files either as first or second
line in the file, such as:
# coding=<encoding name>
or (using formats recognized by popular editors):
#!/usr/bin/python
# -*- coding: <encoding name> -*-
or:
#!/usr/bin/python
# vim: set fileencoding=<encoding name> :
More precisely, the first or second line must match the following
regular expression:
^[ \t\f]*#.*?coding[:=][ \t]*([-_.a-zA-Z0-9]+)
The first group of this
expression is then interpreted as encoding name. If the encoding
is unknown to Python, an error is raised during compilation. There
must not be any Python statement on the line that contains the
encoding declaration. If the first line matches, the second line
is ignored.
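For illustration, a minimal sketch of that detection rule (this is not
the tokenizer's code, and it ignores the BOM handling described next):
import re

# the magic-comment pattern given above, applied to the first two lines
CODING_RE = re.compile(r"^[ \t\f]*#.*?coding[:=][ \t]*([-_.a-zA-Z0-9]+)")

def detect_source_encoding(first_two_lines, default="ascii"):
    for line in first_two_lines[:2]:
        m = CODING_RE.match(line)
        if m:
            return m.group(1)     # the first group is the encoding name
    return default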
To aid with platforms such as Windows, which add Unicode BOM marks
to the beginning of Unicode files, the UTF-8 signature
\xef\xbb\xbf will be interpreted as ‘utf-8’ encoding as well
(even if no magic encoding comment is given).
If a source file uses both the UTF-8 BOM mark signature and a
magic encoding comment, the only allowed encoding for the comment
is ‘utf-8’. Any other encoding will cause an error.
Examples
These are some examples to clarify the different styles for
defining the source code encoding at the top of a Python source
file:
With interpreter binary and using Emacs style file encoding
comment:
#!/usr/bin/python
# -*- coding: latin-1 -*-
import os, sys
...
#!/usr/bin/python
# -*- coding: iso-8859-15 -*-
import os, sys
...
#!/usr/bin/python
# -*- coding: ascii -*-
import os, sys
...
Without interpreter line, using plain text:
# This Python file uses the following encoding: utf-8
import os, sys
...
Text editors might have different ways of defining the file’s
encoding, e.g.:
#!/usr/local/bin/python
# coding: latin-1
import os, sys
...
Without encoding comment, Python’s parser will assume ASCII
text:
#!/usr/local/bin/python
import os, sys
...
Encoding comments which don’t work:
Missing “coding:” prefix:
#!/usr/local/bin/python
# latin-1
import os, sys
...
Encoding comment not on line 1 or 2:
#!/usr/local/bin/python
#
# -*- coding: latin-1 -*-
import os, sys
...
Unsupported encoding:
#!/usr/local/bin/python
# -*- coding: utf-42 -*-
import os, sys
...
Concepts
The PEP is based on the following concepts which would have to be
implemented to enable usage of such a magic comment:
The complete Python source file should use a single encoding.
Embedding of differently encoded data is not allowed and will
result in a decoding error during compilation of the Python
source code.
Any encoding which allows processing the first two lines in the
way indicated above is allowed as source code encoding, this
includes ASCII compatible encodings as well as certain
multi-byte encodings such as Shift_JIS. It does not include
encodings which use two or more bytes for all characters like
e.g. UTF-16. The reason for this is to keep the encoding
detection algorithm in the tokenizer simple.
Handling of escape sequences should continue to work as it does
now, but with all possible source code encodings, that is
standard string literals (both 8-bit and Unicode) are subject to
escape sequence expansion while raw string literals only expand
a very small subset of escape sequences.
Python’s tokenizer/compiler combo will need to be updated to
work as follows:
1. read the file
2. decode it into Unicode assuming a fixed per-file encoding
3. convert it into a UTF-8 byte string
4. tokenize the UTF-8 content
5. compile it, creating Unicode objects from the given Unicode data
   and creating string objects from the Unicode literal data
   by first reencoding the UTF-8 data into 8-bit string data
   using the given file encoding
Note that Python identifiers are restricted to the ASCII
subset of the encoding, and thus need no further conversion
after step 4.
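A rough sketch of steps 2 through 5 for a single 8-bit string literal,
assuming the file encoding has already been detected (this is not the
compiler's actual code):
def roundtrip_literal(raw_bytes, file_encoding):
    as_unicode = raw_bytes.decode(file_encoding)   # step 2
    as_utf8 = as_unicode.encode("utf-8")           # step 3
    # step 4: the tokenizer works on as_utf8
    # step 5: an 8-bit string literal is re-encoded into the file encoding
    return as_utf8.decode("utf-8").encode(file_encoding)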
Implementation
For backwards-compatibility with existing code which currently
uses non-ASCII in string literals without declaring an encoding,
the implementation will be introduced in two phases:
Allow non-ASCII in string literals and comments, by internally
treating a missing encoding declaration as a declaration of
“iso-8859-1”. This will cause arbitrary byte strings to
correctly round-trip between step 2 and step 5 of the
processing, and provide compatibility with Python 2.2 for
Unicode literals that contain non-ASCII bytes.
A warning will be issued if non-ASCII bytes are found in the
input, once per improperly encoded input file.
Remove the warning, and change the default encoding to “ascii”.
The builtin compile() API will be enhanced to accept Unicode as
input. 8-bit string input is subject to the standard procedure for
encoding detection as described above.
If a Unicode string with a coding declaration is passed to compile(),
a SyntaxError will be raised.
SUZUKI Hisao is working on a patch; see [2] for details. A patch
implementing only phase 1 is available at [1].
Phases
Implementation of steps 1 and 2 above were completed in 2.3,
except for changing the default encoding to “ascii”.
The default encoding was set to “ascii” in version 2.5.
Scope
This PEP intends to provide an upgrade path from the current
(more-or-less) undefined source code encoding situation to a more
robust and portable definition.
References
[1]
Phase 1 implementation:
https://bugs.python.org/issue526840
[2]
Phase 2 implementation:
https://bugs.python.org/issue534304
History
1.10 and above: see CVS history
1.8: Added ‘.’ to the coding RE.
1.7: Added warnings to phase 1 implementation. Replaced the
Latin-1 default encoding with the interpreter’s default
encoding. Added tweaks to compile().
1.4 - 1.6: Minor tweaks
1.3: Worked in comments by Martin v. Loewis:
UTF-8 BOM mark detection, Emacs style magic comment,
two phase approach to the implementation
Copyright
This document has been placed in the public domain.
| Final | PEP 263 – Defining Python Source Code Encodings | Standards Track | This PEP proposes to introduce a syntax to declare the encoding of
a Python source file. The encoding information is then used by the
Python parser to interpret the file using the given encoding. Most
notably this enhances the interpretation of Unicode literals in
the source code and makes it possible to write Unicode literals
using e.g. UTF-8 directly in an Unicode aware editor. |
PEP 264 – Future statements in simulated shells
Author:
Michael Hudson <mwh at python.net>
Status:
Final
Type:
Standards Track
Requires:
236
Created:
30-Jul-2001
Python-Version:
2.2
Post-History:
30-Jul-2001
Table of Contents
Abstract
Specification
Backward Compatibility
Forward Compatibility
Issues
Implementation
References
Copyright
Abstract
As noted in PEP 236, there is no clear way for “simulated
interactive shells” to simulate the behaviour of __future__
statements in “real” interactive shells, i.e. have __future__
statements’ effects last the life of the shell.
The PEP also takes the opportunity to clean up the other
unresolved issue mentioned in PEP 236, the inability to stop
compile() inheriting the effect of future statements affecting the
code calling compile().
This PEP proposes to address the first problem by adding an
optional fourth argument to the builtin function “compile”, adding
information to the _Feature instances defined in __future__.py and
adding machinery to the standard library modules “codeop” and
“code” to make the construction of such shells easy.
The second problem is dealt with by simply adding another
optional argument to compile(), which if non-zero suppresses the
inheriting of future statements’ effects.
Specification
I propose adding a fourth, optional, “flags” argument to the
builtin “compile” function. If this argument is omitted,
there will be no change in behaviour from that of Python 2.1.
If it is present it is expected to be an integer, representing
various possible compile time options as a bitfield. The
bitfields will have the same values as the CO_* flags already used
by the C part of Python interpreter to refer to future statements.
compile() shall raise a ValueError exception if it does not
recognize any of the bits set in the supplied flags.
The flags supplied will be bitwise-“or”ed with the flags that
would be set anyway, unless the new fifth optional argument is a
non-zero integer, in which case the flags supplied will be exactly
the set used.
The above-mentioned flags are not currently exposed to Python. I
propose adding .compiler_flag attributes to the _Feature objects
in __future__.py that contain the necessary bits, so one might
write code such as:
import __future__

def compile_generator(func_def):
    return compile(func_def, "<input>", "suite",
                   __future__.generators.compiler_flag)
A recent change means that these same bits can be used to tell if
a code object was compiled with a given feature; for instance
codeob.co_flags & __future__.generators.compiler_flag
will be non-zero if and only if the code object “codeob” was
compiled in an environment where generators were allowed.
I will also add a .all_feature_flags attribute to the __future__
module, giving a low-effort way of enumerating all the __future__
options supported by the running interpreter.
I also propose adding a pair of classes to the standard library
module codeop.
One - Compile - will sport a __call__ method which will act much
like the builtin “compile” of 2.1 with the difference that after
it has compiled a __future__ statement, it “remembers” it and
compiles all subsequent code with the __future__ option in effect.
It will do this by using the new features of the __future__ module
mentioned above.
Objects of the other class added to codeop - CommandCompiler -
will do the job of the existing codeop.compile_command function,
but in a __future__-aware way.
Finally, I propose to modify the class InteractiveInterpreter in
the standard library module code to use a CommandCompiler to
emulate still more closely the behaviour of the default Python
shell.
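A brief usage sketch of the codeop classes proposed here (they are part
of today's standard library; the session below is illustrative):
from codeop import CommandCompiler

cc = CommandCompiler()
# compiling a __future__ statement through the compiler object...
cc("from __future__ import division", "<input>", "single")
# ...makes later compilations through the same object remember it,
# just as a "real" interactive shell would
code = cc("x = 1 / 2", "<input>", "single")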
Backward Compatibility
Should be very few or none; the changes to compile will make no
difference to existing code, nor will adding new functions or
classes to codeop. Existing code using
code.InteractiveInterpreter may change in behaviour, but only for
the better in that the “real” Python shell will be being better
impersonated.
Forward Compatibility
The fiddling that needs to be done to Lib/__future__.py when
adding a __future__ feature will be a touch more complicated.
Everything else should just work.
Issues
I hope the above interface is not too disruptive to implement for
Jython.
Implementation
A series of preliminary implementations are at [1].
After light massaging by Tim Peters, they have now been checked in.
References
[1]
http://sourceforge.net/tracker/?func=detail&atid=305470&aid=449043&group_id=5470
Copyright
This document has been placed in the public domain.
| Final | PEP 264 – Future statements in simulated shells | Standards Track | As noted in PEP 236, there is no clear way for “simulated
interactive shells” to simulate the behaviour of __future__
statements in “real” interactive shells, i.e. have __future__
statements’ effects last the life of the shell. |
PEP 265 – Sorting Dictionaries by Value
Author:
Grant Griffin <g2 at iowegian.com>
Status:
Rejected
Type:
Standards Track
Created:
08-Aug-2001
Python-Version:
2.2
Post-History:
Table of Contents
Abstract
BDFL Pronouncement
Motivation
Rationale
Implementation
Concerns
References
Copyright
Abstract
This PEP suggests a “sort by value” operation for dictionaries.
The primary benefit would be in terms of “batteries included”
support for a common Python idiom which, in its current form, is
both difficult for beginners to understand and cumbersome for all
to implement.
BDFL Pronouncement
This PEP is rejected because the need for it has been largely
fulfilled by Py2.4’s sorted() builtin function:
>>> sorted(d.iteritems(), key=itemgetter(1), reverse=True)
[('b', 23), ('d', 17), ('c', 5), ('a', 2), ('e', 1)]
or for just the keys:
sorted(d, key=d.__getitem__, reverse=True)
['b', 'd', 'c', 'a', 'e']
Also, Python 2.5’s heapq.nlargest() function addresses the common use
case of finding only a few of the highest valued items:
>>> nlargest(2, d.iteritems(), itemgetter(1))
[('b', 23), ('d', 17)]
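Note that those interactive examples assume the helpers have been
imported first, e.g.:
from operator import itemgetter
from heapq import nlargest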
Motivation
A common use of dictionaries is to count occurrences by setting
the value of d[key] to 1 on its first occurrence, then increment
the value on each subsequent occurrence. This can be done several
different ways, but the get() method is the most succinct:
d[key] = d.get(key, 0) + 1
Once all occurrences have been counted, a common use of the
resulting dictionary is to print the occurrences in
occurrence-sorted order, often with the largest value first.
This leads to a need to sort a dictionary’s items by value. The
canonical method of doing so in Python is to first use d.items()
to get a list of the dictionary’s items, then invert the ordering
of each item’s tuple from (key, value) into (value, key), then
sort the list; since Python sorts the list based on the first item
of the tuple, the list of (inverted) items is therefore sorted by
value. If desired, the list can then be reversed, and the tuples
can be re-inverted back to (key, value). (However, in my
experience, the inverted tuple ordering is fine for most purposes,
e.g. printing out the list.)
For example, given an occurrence count of:
>>> d = {'a':2, 'b':23, 'c':5, 'd':17, 'e':1}
we might do:
>>> items = [(v, k) for k, v in d.items()]
>>> items.sort()
>>> items.reverse() # so largest is first
>>> items = [(k, v) for v, k in items]
resulting in:
>>> items
[('b', 23), ('d', 17), ('c', 5), ('a', 2), ('e', 1)]
which shows the list in by-value order, largest first. (In this
case, 'b' was found to have the most occurrences.)
This works fine, but is “hard to use” in two aspects. First,
although this idiom is known to veteran Pythoneers, it is not at
all obvious to newbies – either in terms of its algorithm
(inverting the ordering of item tuples) or its implementation
(using list comprehensions – which are an advanced Python
feature.) Second, it requires having to repeatedly type a lot of
“grunge”, resulting in both tedium and mistakes.
We therefore would rather Python provide a method of sorting
dictionaries by value which would be both easy for newbies to
understand (or, better yet, not to have to understand) and
easier for all to use.
Rationale
As Tim Peters has pointed out, this sort of thing brings on the
problem of trying to be all things to all people. Therefore, we
will limit its scope to try to hit “the sweet spot”. Unusual
cases (e.g. sorting via a custom comparison function) can, of
course, be handled “manually” using present methods.
Here are some simple possibilities:
The items() method of dictionaries can be augmented with new
parameters having default values that provide for full
backwards-compatibility:
(1) items(sort_by_values=0, reversed=0)
or maybe just:
(2) items(sort_by_values=0)
since reversing a list is easy enough.
Alternatively, items() could simply let us control the (key, value)
order:
(3) items(values_first=0)
Again, this is fully backwards-compatible. It does less work than
the others, but it at least eases the most complicated/tricky part
of the sort-by-value problem: inverting the order of item tuples.
Using this is very simple:
items = d.items(1)
items.sort()
items.reverse() # (if desired)
The primary drawback of the preceding three approaches is the
additional overhead for the parameter-less items() case, due to
having to process default parameters. (However, if one assumes
that items() gets used primarily for creating sort-by-value lists,
this is not really a drawback in practice.)
Alternatively, we might add a new dictionary method which somehow
embodies “sorting”. This approach offers two advantages. First,
it avoids adding overhead to the items() method. Second, it is
perhaps more accessible to newbies: when they go looking for a
method for sorting dictionaries, they hopefully run into this one,
and they will not have to understand the finer points of tuple
inversion and list sorting to achieve sort-by-value.
To allow the four basic possibilities of sorting by key/value and in
forward/reverse order, we could add this method:
(4) sorted_items(by_value=0, reversed=0)
I believe the most common case would actually be by_value=1,
reversed=1, but the default values given here might lead to
fewer surprises by users: sorted_items() would be the same as
items() followed by sort().
Finally (as a last resort), we could use:
(5) items_sorted_by_value(reversed=0)
Implementation
The proposed dictionary methods would necessarily be implemented
in C. Presumably, the implementation would be fairly simple since
it involves just adding a few calls to Python’s existing
machinery.
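For comparison, a pure-Python equivalent of proposal (4) takes only a few lines; the sketch below merely mirrors the signature suggested above and is written as a standalone function rather than an actual dictionary method:
def sorted_items(d, by_value=0, reversed=0):
    # Default behaviour matches d.items() followed by sort().
    if by_value:
        items = [(v, k) for k, v in d.items()]   # invert to sort on value
        items.sort()
        items = [(k, v) for v, k in items]       # restore (key, value) order
    else:
        items = d.items()
        items.sort()
    if reversed:
        items.reverse()
    return items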
Concerns
Aside from the run-time overhead already addressed in
possibilities 1 through 3, concerns with this proposal probably
will fall into the categories of “feature bloat” and/or “code
bloat”. However, I believe that several of the suggestions made
here will result in quite minimal bloat, resulting in a good
tradeoff between bloat and “value added”.
Tim Peters has noted that implementing this in C might not be
significantly faster than implementing it in Python today.
However, the major benefits intended here are “accessibility” and
“ease of use”, not “speed”. Therefore, as long as it is not
noticeably slower (in the case of plain items()), speed need not be
a consideration.
References
A related thread called “counting occurrences” appeared on
comp.lang.python in August, 2001. This included examples of
approaches to systematizing the sort-by-value problem by
implementing it as reusable Python functions and classes.
Copyright
This document has been placed in the public domain.
| Rejected | PEP 265 – Sorting Dictionaries by Value | Standards Track | This PEP suggests a “sort by value” operation for dictionaries.
The primary benefit would be in terms of “batteries included”
support for a common Python idiom which, in its current form, is
both difficult for beginners to understand and cumbersome for all
to implement. |
PEP 266 – Optimizing Global Variable/Attribute Access
Author:
Skip Montanaro <skip at pobox.com>
Status:
Withdrawn
Type:
Standards Track
Created:
13-Aug-2001
Python-Version:
2.3
Post-History:
Table of Contents
Abstract
Introduction
Proposed Change
Threads
Rationale
Questions
What about threads? What if math.sin changes while in cache?
Unresolved Issues
Threading
Nested Scopes
Missing Attributes
Who does the dirty work?
Discussion
Backwards Compatibility
Implementation
Performance
References
Copyright
Abstract
The bindings for most global variables and attributes of other modules
typically never change during the execution of a Python program, but because
of Python’s dynamic nature, code which accesses such global objects must run
through a full lookup each time the object is needed. This PEP proposes a
mechanism that allows code that accesses most global objects to treat them as
local objects and places the burden of updating references on the code that
changes the name bindings of such objects.
Introduction
Consider the workhorse function sre_compile._compile. It is the internal
compilation function for the sre module. It consists almost entirely of a
loop over the elements of the pattern being compiled, comparing opcodes with
known constant values and appending tokens to an output list. Most of the
comparisons are with constants imported from the sre_constants module.
This means there are lots of LOAD_GLOBAL bytecodes in the compiled output
of this module. Just by reading the code it’s apparent that the author
intended LITERAL, NOT_LITERAL, OPCODES and many other symbols to
be constants. Still, each time they are involved in an expression, they must
be looked up anew.
Most global accesses are actually to objects that are “almost constants”.
This includes global variables in the current module as well as the attributes
of other imported modules. Since they rarely change, it seems reasonable to
place the burden of updating references to such objects on the code that
changes the name bindings. If sre_constants.LITERAL is changed to refer
to another object, perhaps it would be worthwhile for the code that modifies
the sre_constants module dict to correct any active references to that
object. By doing so, in many cases global variables and the attributes of
many objects could be cached as local variables. If the bindings between the
names given to the objects and the objects themselves changes rarely, the cost
of keeping track of such objects should be low and the potential payoff fairly
large.
In an attempt to gauge the effect of this proposal, I modified the Pystone
benchmark program included in the Python distribution to cache global
functions. Its main function, Proc0, makes calls to ten different
functions inside its for loop. In addition, Func2 calls Func1
repeatedly inside a loop. If local copies of these 11 global identifiers are
made before the functions’ loops are entered, performance on this particular
benchmark improves by about two percent (from 5561 pystones to 5685 on my
laptop). It gives some indication that performance would be improved by
caching most global variable access. Note also that the pystone benchmark
makes essentially no accesses of global module attributes, an anticipated area
of improvement for this PEP.
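The hand-optimization applied to pystone amounts to hoisting each global into a local before entering the loop; a minimal, self-contained illustration (not taken from pystone itself) looks like this:
import math

def sines(n):
    sin = math.sin             # one LOAD_GLOBAL/LOAD_ATTR, outside the loop
    result = []
    for i in range(n):
        result.append(sin(i))  # sin is now a cheap LOAD_FAST inside the loop
    return result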
Proposed Change
I propose that the Python virtual machine be modified to include
TRACK_OBJECT and UNTRACK_OBJECT opcodes. TRACK_OBJECT would
associate a global name or attribute of a global name with a slot in the local
variable array and perform an initial lookup of the associated object to fill
in the slot with a valid value. The association it creates would be noted by
the code responsible for changing the name-to-object binding to cause the
associated local variable to be updated. The UNTRACK_OBJECT opcode would
delete any association between the name and the local variable slot.
Threads
Operation of this code in threaded programs will be no different than in
unthreaded programs. If you need to lock an object to access it, you would
have had to do that before TRACK_OBJECT would have been executed and
retain that lock until after you stop using it.
FIXME: I suspect I need more here.
Rationale
Global variables and attributes rarely change. For example, once a function
imports the math module, the binding between the name math and the
module it refers to isn’t likely to change. Similarly, if the function that
uses the math module refers to its sin attribute, it’s unlikely to
change. Still, every time the module wants to call the math.sin function,
it must first execute a pair of instructions:
LOAD_GLOBAL math
LOAD_ATTR sin
If the client module always assumed that math.sin was a local constant and
it was the responsibility of “external forces” outside the function to keep
the reference correct, we might have code like this:
TRACK_OBJECT math.sin
...
LOAD_FAST math.sin
...
UNTRACK_OBJECT math.sin
If the LOAD_FAST was in a loop the payoff in reduced global loads and
attribute lookups could be significant.
This technique could, in theory, be applied to any global variable access or
attribute lookup. Consider this code:
l = []
for i in range(10):
    l.append(math.sin(i))
return l
Even though l is a local variable, you still pay the cost of loading
l.append ten times in the loop. The compiler (or an optimizer) could
recognize that both math.sin and l.append are being called in the loop
and decide to generate the tracked local code, avoiding it for the builtin
range() function because it’s only called once during loop setup.
Performance issues related to accessing local variables make tracking
l.append less attractive than tracking globals such as math.sin.
According to a post to python-dev by Marc-Andre Lemburg [1], LOAD_GLOBAL
opcodes account for over 7% of all instructions executed by the Python virtual
machine. This can be a very expensive instruction, at least relative to a
LOAD_FAST instruction, which is a simple array index and requires no extra
function calls by the virtual machine. I believe many LOAD_GLOBAL
instructions and LOAD_GLOBAL/LOAD_ATTR pairs could be converted to
LOAD_FAST instructions.
Code that uses global variables heavily often resorts to various tricks to
avoid global variable and attribute lookup. The aforementioned
sre_compile._compile function caches the append method of the growing
output list. Many people commonly abuse functions’ default argument feature
to cache global variable lookups. Both of these schemes are hackish and
rarely address all the available opportunities for optimization. (For
example, sre_compile._compile does not cache the two globals that it uses
most frequently: the builtin len function and the global OPCODES array
that it imports from sre_constants.py.)
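Both tricks look roughly like the following sketch (illustrative only; this is not code from sre_compile):
import math

def transform(values, sin=math.sin):  # default-argument hack: sin is bound
    out = []                          # once, at definition time, as a fast local
    append = out.append               # cache the bound method of the output list
    for v in values:
        append(sin(v))
    return out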
Questions
What about threads? What if math.sin changes while in cache?
I believe the global interpreter lock will protect values from being
corrupted. In any case, the situation would be no worse than it is today.
If one thread modified math.sin after another thread had already executed
LOAD_GLOBAL math, but before it executed LOAD_ATTR sin, the client
thread would see the old value of math.sin.
The idea is this. I use a multi-attribute load below as an example, not
because it would happen very often, but because by demonstrating the recursive
nature with an extra call hopefully it will become clearer what I have in
mind. Suppose a function defined in module foo wants to access
spam.eggs.ham and that spam is a module imported at the module level
in foo:
import spam
...
def somefunc():
    ...
    x = spam.eggs.ham
Upon entry to somefunc, a TRACK_GLOBAL instruction will be executed:
TRACK_GLOBAL spam.eggs.ham n
spam.eggs.ham is a string literal stored in the function’s constants
array. n is a fastlocals index. &fastlocals[n] is a reference to
slot n in the executing frame’s fastlocals array, the location in
which the spam.eggs.ham reference will be stored. Here’s what I envision
happening:
The TRACK_GLOBAL instruction locates the object referred to by the name
spam and finds it in its module scope. It then executes a C function
like:
_PyObject_TrackName(m, "spam.eggs.ham", &fastlocals[n])
where m is the module object with an attribute spam.
The module object strips the leading spam. and stores the necessary
information (eggs.ham and &fastlocals[n]) in case its binding for the
name eggs changes. It then locates the object referred to by the key
eggs in its dict and recursively calls:
_PyObject_TrackName(eggs, "eggs.ham", &fastlocals[n])
The eggs object strips the leading eggs., stores the
(ham, &fastlocals[n]) info, locates the object in its namespace called
ham and calls _PyObject_TrackName once again:
_PyObject_TrackName(ham, "ham", &fastlocals[n])
The ham object strips the leading string (no “.” this time, but that’s
a minor point), sees that the result is empty, then uses its own value
(self, probably) to update the location it was handed:
Py_XDECREF(fastlocals[n]);
fastlocals[n] = self;
Py_INCREF(fastlocals[n]);
At this point, each object involved in resolving spam.eggs.ham
knows which entry in its namespace needs to be tracked and what location
to update if that name changes. Furthermore, if the one name it is
tracking in its local storage changes, it can call _PyObject_TrackName
using the new object once the change has been made. At the bottom end of
the food chain, the last object will always strip a name, see the empty
string and know that its value should be stuffed into the location it’s
been passed.
When the object referred to by the dotted expression spam.eggs.ham
is going to go out of scope, an UNTRACK_GLOBAL spam.eggs.ham n
instruction is executed. It has the effect of deleting all the tracking
information that TRACK_GLOBAL established.
The tracking operation may seem expensive, but recall that the objects
being tracked are assumed to be “almost constant”, so the setup cost will
be traded off against hopefully multiple local instead of global loads.
For globals with attributes the tracking setup cost grows but is offset by
avoiding the extra LOAD_ATTR cost. The TRACK_GLOBAL instruction
needs to perform a PyDict_GetItemString for the first name in the chain
to determine where the top-level object resides. Each object in the chain
has to store a string and an address somewhere, probably in a dict that
uses storage locations as keys (e.g. the &fastlocals[n]) and strings as
values. (This dict could possibly be a central dict of dicts whose keys
are object addresses instead of a per-object dict.) It shouldn’t be the
other way around because multiple active frames may want to track
spam.eggs.ham, but only one frame will want to associate that name with
one of its fast locals slots.
Unresolved Issues
Threading
What about this (dumb) code?:
l = []
lock = threading.Lock()
...
def fill_l():
    for i in range(1000):
        lock.acquire()
        l.append(math.sin(i))
        lock.release()
...
def consume_l():
    while 1:
        lock.acquire()
        if l:
            elt = l.pop()
        lock.release()
        fiddle(elt)
It’s not clear from a static analysis of the code what the lock is protecting.
(You can’t tell at compile-time that threads are even involved can you?)
Would or should it affect attempts to track l.append or math.sin in
the fill_l function?
If we annotate the code with mythical track_object and untrack_object
builtins (I’m not proposing this, just illustrating where stuff would go!), we
get:
l = []
lock = threading.Lock()
...
def fill_l():
    track_object("l.append", append)
    track_object("math.sin", sin)
    for i in range(1000):
        lock.acquire()
        append(sin(i))
        lock.release()
    untrack_object("math.sin", sin)
    untrack_object("l.append", append)
...
def consume_l():
    while 1:
        lock.acquire()
        if l:
            elt = l.pop()
        lock.release()
        fiddle(elt)
Is that correct both with and without threads (or at least equally incorrect
with and without threads)?
Nested Scopes
The presence of nested scopes will affect where TRACK_GLOBAL finds a
global variable, but shouldn’t affect anything after that. (I think.)
Missing Attributes
Suppose I am tracking the object referred to by spam.eggs.ham and
spam.eggs is rebound to an object that does not have a ham attribute.
It’s clear this will be an AttributeError if the programmer attempts to
resolve spam.eggs.ham in the current Python virtual machine, but suppose
the programmer has anticipated this case:
if hasattr(spam.eggs, "ham"):
    print spam.eggs.ham
elif hasattr(spam.eggs, "bacon"):
    print spam.eggs.bacon
else:
    print "what? no meat?"
You can’t raise an AttributeError when the tracking information is
recalculated. If it does not raise AttributeError and instead lets the
tracking stand, it may be setting the programmer up for a very subtle error.
One solution to this problem would be to track the shortest possible root of
each dotted expression the function refers to directly. In the above example,
spam.eggs would be tracked, but spam.eggs.ham and spam.eggs.bacon
would not.
Who does the dirty work?
In the Questions section I postulated the existence of a
_PyObject_TrackName function. While the API is fairly easy to specify,
the implementation behind-the-scenes is not so obvious. A central dictionary
could be used to track the name/location mappings, but it appears that all
setattr functions might need to be modified to accommodate this new
functionality.
If all types used the PyObject_GenericSetAttr function to set attributes
that would localize the update code somewhat. They don’t however (which is
not too surprising), so it seems that all setattrfunc and setattrofunc
functions will have to be updated. In addition, this would place an absolute
requirement on C extension module authors to call some function when an
attribute changes value (PyObject_TrackUpdate?).
Finally, it’s quite possible that some attributes will be set by side effect
and not by any direct call to a setattr method of some sort. Consider a
device interface module that has an interrupt routine that copies the contents
of a device register into a slot in the object’s struct whenever it
changes. In these situations, more extensive modifications would have to be
made by the module author. To identify such situations at compile time would
be impossible. I think an extra slot could be added to PyTypeObjects to
indicate if an object’s code is safe for global tracking. It would have a
default value of 0 (Py_TRACKING_NOT_SAFE). If an extension module author
has implemented the necessary tracking support, that field could be
initialized to 1 (Py_TRACKING_SAFE). _PyObject_TrackName could check
that field and issue a warning if it is asked to track an object that the
author has not explicitly said was safe for tracking.
Discussion
Jeremy Hylton has an alternate proposal on the table [2]. His proposal seeks
to create a hybrid dictionary/list object for use in global name lookups that
would make global variable access look more like local variable access. While
there is no C code available to examine, the Python implementation given in
his proposal still appears to require dictionary key lookup. It doesn’t
appear that his proposal could speed local variable attribute lookup, which
might be worthwhile in some situations if potential performance burdens could
be addressed.
Backwards Compatibility
I don’t believe there will be any serious issues of backward compatibility.
Obviously, Python bytecode that contains TRACK_OBJECT opcodes could not be
executed by earlier versions of the interpreter, but breakage at the bytecode
level is often assumed between versions.
Implementation
TBD. This is where I need help. I believe there should be either a central
name/location registry or the code that modifies object attributes should be
modified, but I’m not sure the best way to go about this. If you look at the
code that implements the STORE_GLOBAL and STORE_ATTR opcodes, it seems
likely that some changes will be required to PyDict_SetItem and
PyObject_SetAttr or their String variants. Ideally, there’d be a fairly
central place to localize these changes. If you begin considering tracking
attributes of local variables you get into issues of modifying STORE_FAST
as well, which could be a problem, since the name bindings for local variables
are changed much more frequently. (I think an optimizer could avoid inserting
the tracking code for the attributes for any local variables where the
variable’s name binding changes.)
Performance
I believe (though I have no code to prove it at this point), that implementing
TRACK_OBJECT will generally not be much more expensive than a single
LOAD_GLOBAL instruction or a LOAD_GLOBAL/LOAD_ATTR pair. An
optimizer should be able to avoid converting LOAD_GLOBAL and
LOAD_GLOBAL/LOAD_ATTR to the new scheme unless the object access
occurred within a loop. Further down the line, a register-oriented
replacement for the current Python virtual machine [3] could conceivably
eliminate most of the LOAD_FAST instructions as well.
The number of tracked objects should be relatively small. All active frames
of all active threads could conceivably be tracking objects, but this seems
small compared to the number of functions defined in a given application.
References
[1]
https://mail.python.org/pipermail/python-dev/2000-July/007609.html
[2]
http://www.zope.org/Members/jeremy/CurrentAndFutureProjects/FastGlobalsPEP
[3]
http://www.musi-cal.com/~skip/python/rattlesnake20010813.tar.gz
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 266 – Optimizing Global Variable/Attribute Access | Standards Track | The bindings for most global variables and attributes of other modules
typically never change during the execution of a Python program, but because
of Python’s dynamic nature, code which accesses such global objects must run
through a full lookup each time the object is needed. This PEP proposes a
mechanism that allows code that accesses most global objects to treat them as
local objects and places the burden of updating references on the code that
changes the name bindings of such objects. |
PEP 267 – Optimized Access to Module Namespaces
Author:
Jeremy Hylton <jeremy at alum.mit.edu>
Status:
Deferred
Type:
Standards Track
Created:
23-May-2001
Python-Version:
2.2
Post-History:
Table of Contents
Deferral
Abstract
Introduction
DLict design
Compiler issues
Runtime model
Backwards compatibility
Related PEPs
Copyright
Deferral
While this PEP is a nice idea, no-one has yet emerged to do the work of
hashing out the differences between this PEP, PEP 266 and PEP 280.
Hence, it is being deferred.
Abstract
This PEP proposes a new implementation of global module namespaces
and the builtin namespace that speeds name resolution. The
implementation would use an array of object pointers for most
operations in these namespaces. The compiler would assign indices
for global variables and module attributes at compile time.
The current implementation represents these namespaces as
dictionaries. A global name incurs a dictionary lookup each time
it is used; a builtin name incurs two dictionary lookups, a failed
lookup in the global namespace and a second lookup in the builtin
namespace.
This implementation should speed Python code that uses
module-level functions and variables. It should also eliminate
awkward coding styles that have evolved to speed access to these
names.
The implementation is complicated because the global and builtin
namespaces can be modified dynamically in ways that are impossible
for the compiler to detect. (Example: A module’s namespace is
modified by a script after the module is imported.) As a result,
the implementation must maintain several auxiliary data structures
to preserve these dynamic features.
Introduction
This PEP proposes a new implementation of attribute access for
module objects that optimizes access to module variables known at
compile time. The module will store these variables in an array
and provide an interface to lookup attributes using array offsets.
For globals, builtins, and attributes of imported modules, the
compiler will generate code that uses the array offsets for fast
access.
[describe the key parts of the design: dlict, compiler support,
stupid name trick workarounds, optimization of other module’s
globals]
The implementation will preserve existing semantics for module
namespaces, including the ability to modify module namespaces at
runtime in ways that affect the visibility of builtin names.
DLict design
The namespaces are implemented using a data structure that has
sometimes gone under the name dlict. It is a dictionary that has
numbered slots for some dictionary entries. The type must be
implemented in C to achieve acceptable performance. The new
type-class unification work should make this fairly easy. The
DLict will presumably be a subclass of dictionary with an
alternate storage module for some keys.
A Python implementation is included here to illustrate the basic
design:
"""A dictionary-list hybrid"""
import types
class DLict:
    def __init__(self, names):
        # names maps each statically known name to its slot index
        assert isinstance(names, types.DictType)
        self.names = names
        size = len(names)
        self.list = [None] * size
        self.empty = [1] * size
        self.dict = {}
        self.size = 0
    def __getitem__(self, name):
        i = self.names.get(name)
        if i is None:
            return self.dict[name]
        if self.empty[i] is not None:
            raise KeyError, name
        return self.list[i]
    def __setitem__(self, name, val):
        i = self.names.get(name)
        if i is None:
            self.dict[name] = val
        else:
            self.empty[i] = None
            self.list[i] = val
            self.size += 1
    def __delitem__(self, name):
        i = self.names.get(name)
        if i is None:
            del self.dict[name]
        else:
            if self.empty[i] is not None:
                raise KeyError, name
            self.empty[i] = 1
            self.list[i] = None
            self.size -= 1
    def keys(self):
        if self.dict:
            return self.names.keys() + self.dict.keys()
        else:
            return self.names.keys()
    def values(self):
        if self.dict:
            return self.names.values() + self.dict.values()
        else:
            return self.names.values()
    def items(self):
        if self.dict:
            return self.names.items() + self.dict.items()
        else:
            return self.names.items()
    def __len__(self):
        return self.size + len(self.dict)
    def __cmp__(self, dlict):
        c = cmp(self.names, dlict.names)
        if c != 0:
            return c
        c = cmp(self.size, dlict.size)
        if c != 0:
            return c
        for i in range(len(self.names)):
            c = cmp(self.empty[i], dlict.empty[i])
            if c != 0:
                return c
            if self.empty[i] is None:
                c = cmp(self.list[i], dlict.list[i])
                if c != 0:
                    return c
        return cmp(self.dict, dlict.dict)
    def clear(self):
        self.dict.clear()
        for i in range(len(self.names)):
            if self.empty[i] is None:
                self.empty[i] = 1
                self.list[i] = None
    def update(self):
        pass
    def load(self, index):
        """dlict-special method to support indexed access"""
        if self.empty[index] is None:
            return self.list[index]
        else:
            raise KeyError, index  # XXX might want reverse mapping
    def store(self, index, val):
        """dlict-special method to support indexed access"""
        self.empty[index] = None
        self.list[index] = val
    def delete(self, index):
        """dlict-special method to support indexed access"""
        self.empty[index] = 1
        self.list[index] = None
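A short usage sketch (not part of the PEP) shows the intended split between compiled, index-based access and ordinary by-name access; it assumes, as in the constructor above, that the mapping passed in assigns a slot index to each name known at compile time:
d = DLict({"x": 0, "y": 1})   # 'x' and 'y' get array slots 0 and 1

d.store(0, 42)                # what compiled code would do for 'x'
print d["x"]                  # by-name access still works -> 42

d["z"] = "dynamic"            # names unknown at compile time fall back
print d["z"]                  # to the backup dictionary -> 'dynamic'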
Compiler issues
The compiler currently collects the names of all global variables
in a module. These are names bound at the module level or bound
in a class or function body that declares them to be global.
The compiler would assign indices for each global name and add the
names and indices of the globals to the module’s code object.
Each code object would then be bound irrevocably to the module it
was defined in. (Not sure if there are some subtle problems with
this.)
For attributes of imported modules, the module will store an
indirection record. Internally, the module will store a pointer
to the defining module and the offset of the attribute in the
defining module’s global variable array. The offset would be
initialized the first time the name is looked up.
Runtime model
The PythonVM will be extended with new opcodes to access globals
and module attributes via a module-level array.
A function object would need to point to the module that defined
it in order to provide access to the module-level global array.
For module attributes stored in the dlict (call them static
attributes), the get/delattr implementation would need to track
access to these attributes using the old by-name interface. If a
static attribute is updated dynamically, e.g.:
mod.__dict__["foo"] = 2
The implementation would need to update the array slot instead of
the backup dict.
Backwards compatibility
The dlict will need to maintain meta-information about whether a
slot is currently used or not. It will also need to maintain a
pointer to the builtin namespace. When a name is not currently
used in the global namespace, the lookup will have to fail over to
the builtin namespace.
In the reverse case, each module may need a special accessor
function for the builtin namespace that checks to see if a global
shadowing the builtin has been added dynamically. This check
would only occur if there was a dynamic change to the module’s
dlict, i.e. when a name is bound that wasn’t discovered at
compile-time.
These mechanisms would have little if any cost for the common case
where a module’s global namespace is not modified in strange
ways at runtime. They would add overhead for modules that did
unusual things with global names, but this is an uncommon practice
and probably one worth discouraging.
It may be desirable to disable dynamic additions to the global
namespace in some future version of Python. If so, the new
implementation could provide warnings.
Related PEPs
PEP 266, Optimizing Global Variable/Attribute Access, proposes a
different mechanism for optimizing access to global variables as
well as attributes of objects. The mechanism uses two new opcodes
TRACK_OBJECT and UNTRACK_OBJECT to create a slot in the local
variables array that aliases the global or object attribute. If
the object being aliased is rebound, the rebind operation is
responsible for updating the aliases.
The object tracking approach applies to a wider range of
objects than just modules. It may also have a higher runtime cost,
because each function that uses a global or object attribute must
execute extra opcodes to register its interest in an object and
unregister on exit; the cost of registration is unclear, but
presumably involves a dynamically resizable data structure to hold
a list of callbacks.
The implementation proposed here avoids the need for registration,
because it does not create aliases. Instead it allows functions
that reference a global variable or module attribute to retain a
pointer to the location where the original binding is stored. A
second advantage is that the initial lookup is performed once per
module rather than once per function call.
Copyright
This document has been placed in the public domain.
| Deferred | PEP 267 – Optimized Access to Module Namespaces | Standards Track | This PEP proposes a new implementation of global module namespaces
and the builtin namespace that speeds name resolution. The
implementation would use an array of object pointers for most
operations in these namespaces. The compiler would assign indices
for global variables and module attributes at compile time. |
PEP 268 – Extended HTTP functionality and WebDAV
Author:
Greg Stein <gstein at lyra.org>
Status:
Rejected
Type:
Standards Track
Created:
20-Aug-2001
Python-Version:
2.x
Post-History:
21-Aug-2001
Table of Contents
Rejection Notice
Abstract
Rationale
Specification
HTTP Authentication
Proxy Handling
WebDAV Features
Reference Implementation
Copyright
Rejection Notice
This PEP has been rejected. It has failed to generate sufficient
community support in the six years since its proposal.
Abstract
This PEP discusses new modules and extended functionality for Python’s
HTTP support. Notably, the addition of authenticated requests, proxy
support, authenticated proxy usage, and WebDAV capabilities.
Rationale
Python has been quite popular as a result of its “batteries included”
positioning. One of the most heavily used protocols, HTTP (see
RFC 2616), has been included with Python for years (httplib). However,
this support has not kept up with the full needs and requirements of
many HTTP-based applications and systems. In addition, new protocols
based on HTTP, such as WebDAV and XML-RPC, are becoming useful and are
seeing increasing usage. Supplying this functionality meets Python’s
“batteries included” role and also keeps Python at the leading edge of
new technologies.
While authentication and proxy support are two very notable features
missing from Python’s core HTTP processing, they are minimally handled
as part of Python’s URL handling (urllib and
urllib2). However, applications that need fine-grained or
sophisticated HTTP handling cannot make use of the features while they
reside in urllib. Refactoring these features into a location where
they can be directly associated with an HTTP connection will improve
their utility for both urllib and for sophisticated applications.
The motivation for this PEP was from several people requesting these
features directly, and from a number of feature requests on
SourceForge. Since the exact form of the modules to be provided and
the classes/architecture used could be subject to debate, this PEP was
created to provide a focal point for those discussions.
Specification
Two modules will be added to the standard library: httpx (HTTP
extended functionality), and davlib (WebDAV library).
[ suggestions for module names are welcome; davlib has some
precedent, but something like webdav might be desirable ]
HTTP Authentication
The httpx module will provide a mixin for performing HTTP
authentication (for both proxy and origin server authentication). This
mixin (httpx.HandleAuthentication) can be combined with the
HTTPConnection and the HTTPSConnection classes (the mixin may
possibly work with the HTTP and HTTPS compatibility classes, but that
is not a requirement).
The mixin will delegate the authentication process to one or more
“authenticator” objects, allowing multiple connections to share
authenticators. The use of a separate object allows for a long term
connection to an authentication system (e.g. LDAP). An authenticator
for the Basic and Digest mechanisms (see RFC 2617) will be
provided. User-supplied authenticator subclasses can be registered and
used by the connections.
A “credentials” object (httpx.Credentials) is also associated with
the mixin, and stores the credentials (e.g. username and password)
needed by the authenticators. Subclasses of Credentials can be created
to hold additional information (e.g. NT domain).
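Since none of these classes exist yet, the following is only an illustration of how the proposal reads; the class names come from this PEP, while the Credentials.add() call and the credentials constructor argument are hypothetical details invented for the example:
import httplib
import httpx   # proposed module -- not in the standard library

class AuthHTTPConnection(httpx.HandleAuthentication, httplib.HTTPConnection):
    """HTTPConnection that retries 401/407 responses with credentials."""

creds = httpx.Credentials()
creds.add("www.example.com", 80, "/private/", "alice", "secret")   # hypothetical API

conn = AuthHTTPConnection("www.example.com", credentials=creds)    # hypothetical argument
conn.request("GET", "/private/report.html")
response = conn.getresponse()   # the mixin resends with Basic or Digest headers on 401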
The mixin overrides the getresponse() method to detect 401
(Unauthorized) and 407 (Proxy Authentication Required)
responses. When this is found, the response object, the connection,
and the credentials are passed to the authenticator corresponding with
the authentication scheme specified in the response (multiple
authenticators are tried in decreasing order of security if multiple
schemes are in the response). Each authenticator can examine the
response headers and decide whether and how to resend the request with
the correct authentication headers. If no authenticator can
successfully handle the authentication, then an exception is raised.
Resending a request, with the appropriate credentials, is one of the
more difficult portions of the authentication system. The difficulty
arises in recording what was sent originally: the request line, the
headers, and the body. By overriding putrequest, putheader, and
endheaders, we can capture all but the body. Once the endheaders
method is called, then we capture all calls to send() (until the next
putrequest method call) to hold the body content. The mixin will have
a configurable limit for the amount of data to hold in this fashion
(e.g. only hold up to 100k of body content). Assuming that the entire
body has been stored, then we can resend the request with the
appropriate authentication information.
If the body is too large to be stored, then the getresponse()
simply returns the response object, indicating the 401 or 407
error. Since the authentication information has been computed and
cached (into the Credentials object; see below), the caller can simply
regenerate the request. The mixin will attach the appropriate
credentials.
A “protection space” (see RFC 2617, section 1.2) is defined as a tuple
of the host, port, and authentication realm. When a request is
initially sent to an HTTP server, we do not know the authentication
realm (the realm is only returned when authentication fails). However,
we do have the path from the URL, and that can be useful in
determining the credentials to send to the server. The Basic
authentication scheme is typically set up hierarchically: the
credentials for /path can be tried for /path/subpath. The
Digest authentication scheme has explicit support for the hierarchical
setup. The httpx.Credentials object will store credentials for
multiple protection spaces, and can be looked up in two different
ways:
looked up using (host, port, path) – this lookup scheme is
used when generating a request for a path where we don’t know the
authentication realm.
looked up using (host, port, realm) – this mechanism is used
during the authentication process when the server has specified that
the Request-URI resides within a specific realm.
The HandleAuthentication mixin will override putrequest() to
automatically insert credentials, if available. The URL from the
putrequest is used to determine the appropriate authentication
information to use.
It is also important to note that two sets of credentials are used,
and stored by the mixin. One set for any proxy that may be used, and
one used for the target origin server. Since proxies do not have
paths, the protection spaces in the proxy credentials will always use
“/” for storing and looking up via a path.
Proxy Handling
The httpx module will provide a mixin for using a proxy to perform
HTTP(S) operations. This mixin (httpx.UseProxy) can be combined
with the HTTPConnection and the HTTPSConnection classes (the
mixin may possibly work with the HTTP and HTTPS compatibility classes,
but that is not a requirement).
The mixin will record the (host, port) of the proxy to use. XXX
will be overridden to use this host/port combination for connections
and to rewrite request URLs into the absoluteURIs referring to the
origin server (these URIs are passed to the proxy server).
Proxy authentication is handled by the httpx.HandleAuthentication
class since a user may directly use HTTP(S)Connection to speak
with proxies.
WebDAV Features
The davlib module will provide a mixin for sending WebDAV requests
to a WebDAV-enabled server. This mixin (davlib.DAVClient) can be
combined with the HTTPConnection and the HTTPSConnection
classes (the mixin may possibly work with the HTTP and HTTPS
compatibility classes, but that is not a requirement).
The mixin provides methods to perform the various HTTP methods defined
by HTTP in RFC 2616, and by WebDAV in RFC 2518.
A custom response object is used to decode 207 (Multi-Status)
responses. The response object will use the standard library’s xml
package to parse the multistatus XML information, producing a simple
structure of objects to hold the multistatus data. Multiple parsing
schemes will be tried/used, in order of decreasing speed.
Reference Implementation
The actual (future/final) implementation is being developed in the
/nondist/sandbox/Lib directory, until it is accepted and moved
into the main Lib directory.
Copyright
This document has been placed in the public domain.
| Rejected | PEP 268 – Extended HTTP functionality and WebDAV | Standards Track | This PEP discusses new modules and extended functionality for Python’s
HTTP support. Notably, the addition of authenticated requests, proxy
support, authenticated proxy usage, and WebDAV capabilities. |
PEP 269 – Pgen Module for Python
Author:
Jonathan Riehl <jriehl at spaceship.com>
Status:
Deferred
Type:
Standards Track
Created:
24-Aug-2001
Python-Version:
2.2
Post-History:
Table of Contents
Abstract
Rationale
Specification
parseGrammarFile (fileName) -> AST
parseGrammarString (text) -> AST
buildParser (grammarAst) -> DFA
parseFile (fileName, dfa, start) -> AST
parseString (text, dfa, start) -> AST
symbolToStringMap (dfa) -> dict
stringToSymbolMap (dfa) -> dict
Implementation Plan
Limitations
Reference Implementation
References
Copyright
Abstract
Much like the parser module exposes the Python parser, this PEP
proposes that the parser generator used to create the Python
parser, pgen, be exposed as a module in Python.
Rationale
Through the course of Pythonic history, there have been numerous
discussions about the creation of a Python compiler [1]. These
have resulted in several implementations of Python parsers, most
notably the parser module currently provided in the Python
standard library [2] and Jeremy Hylton’s compiler module [3].
However, while multiple language changes have been proposed
[4] [5], experimentation with the Python syntax has lacked the
benefit of a Python binding to the actual parser generator used to
build Python.
By providing a Python wrapper analogous to Fred Drake Jr.’s parser
wrapper, but targeted at the pgen library, the following
assertions are made:
Reference implementations of syntax changes will be easier to
develop. Currently, a reference implementation of a syntax
change would require the developer to use the pgen tool from
the command line. The resulting parser data structure would
then either have to be reworked to interface with a custom
CPython implementation, or wrapped as a C extension module.
Reference implementations of syntax changes will be easier to
distribute. Since the parser generator will be available in
Python, it should follow that the resulting parser will be
accessible from Python. Therefore, reference implementations
should be available as pure Python code, versus using custom
versions of the existing CPython distribution, or as compilable
extension modules.
Reference implementations of syntax changes will be easier to
discuss with a larger audience. This somewhat falls out of the
second assertion, since the community of Python users is most
likely larger than the community of CPython developers.
Development of small languages in Python will be further
enhanced, since the additional module will be a fully
functional LL(1) parser generator.
Specification
The proposed module will be called pgen. The pgen module will
contain the following functions:
parseGrammarFile (fileName) -> AST
The parseGrammarFile() function will read the file pointed to
by fileName and create an AST object. The AST nodes will
contain the nonterminal, numeric values of the parser
generator meta-grammar. The output AST will be an instance of
the AST extension class as provided by the parser module.
Syntax errors in the input file will cause the SyntaxError
exception to be raised.
parseGrammarString (text) -> AST
The parseGrammarString() function will follow the semantics of
the parseGrammarFile(), but accept the grammar text as a
string for input, as opposed to the file name.
buildParser (grammarAst) -> DFA
The buildParser() function will accept an AST object for input
and return a DFA (deterministic finite automaton) data
structure. The DFA data structure will be a C extension
class, much like the AST structure is provided in the parser
module. If the input AST does not conform to the nonterminal
codes defined for the pgen meta-grammar, buildParser() will
throw a ValueError exception.
parseFile (fileName, dfa, start) -> AST
The parseFile() function will essentially be a wrapper for the
PyParser_ParseFile() C API function. The wrapper code will
accept the DFA C extension class, and the file name. An AST
instance that conforms to the lexical values in the token
module and the nonterminal values contained in the DFA will be
output.
parseString (text, dfa, start) -> AST
The parseString() function will operate in a similar fashion
to the parseFile() function, but accept the parse text as an
argument. Much like parseFile() will wrap the
PyParser_ParseFile() C API function, parseString() will wrap
the PyParser_ParseString() function.
symbolToStringMap (dfa) -> dict
The symbolToStringMap() function will accept a DFA instance
and return a dictionary object that maps from the DFA’s
numeric values for its nonterminals to the string names of the
nonterminals as found in the original grammar specification
for the DFA.
stringToSymbolMap (dfa) -> dict
The stringToSymbolMap() function outputs a dictionary mapping
the nonterminal names of the input DFA to their corresponding
numeric values.
Extra credit will be awarded if the map generation functions and
parsing functions are also methods of the DFA extension class.
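Taken together, the functions above would support a workflow like the following sketch. The module was never added, so this is purely illustrative; the grammar file name and the "file_input" start symbol are assumptions made for the example:
import pgen   # proposed module -- never added to the standard library

# Build a parser from an experimental grammar.
grammar_ast = pgen.parseGrammarFile("Grammar.experimental")
dfa = pgen.buildParser(grammar_ast)

# Find the numeric code of the start symbol by name ...
symbols = pgen.stringToSymbolMap(dfa)

# ... and parse a candidate program with the generated parser.
ast = pgen.parseString("print 'hello'\n", dfa, symbols["file_input"])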
Implementation Plan
A cunning plan has been devised to accomplish this enhancement:
Rename the pgen functions to conform to the CPython naming
standards. This action may involve adding some header files to
the Include subdirectory.
Move the pgen C modules in the Makefile.pre.in from unique pgen
elements to the Python C library.
Make any needed changes to the parser module so the AST
extension class understands that there are AST types it may not
understand. Cursory examination of the AST extension class
shows that it keeps track of whether the tree is a suite or an
expression.
Code an additional C module in the Modules directory. The C
extension module will implement the DFA extension class and the
functions outlined in the previous section.
Add the new module to the build process. Black magic, indeed.
Limitations
Under this proposal, would-be designers of Python 3000 will still
be constrained to Python’s lexical conventions. The addition,
subtraction or modification of the Python lexer is outside the
scope of this PEP.
Reference Implementation
No reference implementation is currently provided. A patch
was provided at some point in
http://sourceforge.net/tracker/index.php?func=detail&aid=599331&group_id=5470&atid=305470
but that patch is no longer maintained.
References
[1]
The (defunct) Python Compiler-SIG
http://www.python.org/sigs/compiler-sig/
[2]
Parser Module Documentation
http://docs.python.org/library/parser.html
[3]
Hylton, Jeremy.
http://docs.python.org/library/compiler.html
[4]
Pelletier, Michel. “Python Interface Syntax”, PEP 245
[5]
The Python Types-SIG
http://www.python.org/sigs/types-sig/
Copyright
This document has been placed in the public domain.
| Deferred | PEP 269 – Pgen Module for Python | Standards Track | Much like the parser module exposes the Python parser, this PEP
proposes that the parser generator used to create the Python
parser, pgen, be exposed as a module in Python. |
PEP 270 – uniq method for list objects
Author:
Jason Petrone <jp at demonseed.net>
Status:
Rejected
Type:
Standards Track
Created:
21-Aug-2001
Python-Version:
2.2
Post-History:
Table of Contents
Notice
Abstract
Rationale
Considerations
Reference Implementation
References
Copyright
Notice
This PEP is withdrawn by the author. He writes:
Removing duplicate elements from a list is a common task, but
there are only two reasons I can see for making it a built-in.
The first is if it could be done much faster, which isn’t the
case. The second is if it makes it significantly easier to
write code. The introduction of sets.py eliminates this
situation since creating a sequence without duplicates is just
a matter of choosing a different data structure: a set instead
of a list.
As described in PEP 218, sets are being added to the standard
library for Python 2.3.
Abstract
This PEP proposes adding a method for removing duplicate elements to
the list object.
Rationale
Removing duplicates from a list is a common task. I think it is
useful and general enough to belong as a method in list objects.
It also has potential for faster execution when implemented in C,
especially if optimization using hashing or sorting cannot be used.
On comp.lang.python there are many, many posts [1] asking about
the best way to do this task. It’s a little tricky to implement
optimally and it would be nice to save people the trouble of
figuring it out themselves.
Considerations
Tim Peters suggests trying to use a hash table, then trying to
sort, and finally falling back on brute force [2]. Should uniq
maintain list order at the expense of speed?
Is it spelled ‘uniq’ or ‘unique’?
Reference Implementation
I’ve written the brute force version. It’s about 20 lines of code
in listobject.c. Adding support for hash table and sorted
duplicate removal would only take another hour or so.
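In pure Python, the strategy sketched under Considerations (try hashing, then sorting, then brute force) looks roughly like this; it illustrates the approach and is not the listobject.c patch itself:
def uniq(seq):
    """Return a list of the unique elements of seq."""
    items = list(seq)
    # Fastest: use a hash table (preserves original order).
    try:
        seen = {}
        result = []
        for item in items:
            if item not in seen:
                seen[item] = 1
                result.append(item)
        return result
    except TypeError:
        pass   # some element is unhashable
    # Next best: sort and weed out equal neighbours (loses order).
    try:
        items.sort()
        result = items[:1]
        for item in items[1:]:
            if item != result[-1]:
                result.append(item)
        return result
    except TypeError:
        pass   # elements cannot be sorted
    # Last resort: quadratic scan (preserves order).
    result = []
    for item in items:
        if item not in result:
            result.append(item)
    return result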
References
[1]
https://groups.google.com/forum/#!searchin/comp.lang.python/duplicates
[2]
Tim Peters unique() entry in the Python cookbook:
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52560/index_txt
Copyright
This document has been placed in the public domain.
| Rejected | PEP 270 – uniq method for list objects | Standards Track | This PEP proposes adding a method for removing duplicate elements to
the list object. |
PEP 271 – Prefixing sys.path by command line option
Author:
Frédéric B. Giacometti <fred at arakne.com>
Status:
Rejected
Type:
Standards Track
Created:
15-Aug-2001
Python-Version:
2.2
Post-History:
Table of Contents
Abstract
Rationale
Other Information
When to use this option
Reference Implementation
Copyright
Abstract
At present, setting the PYTHONPATH environment variable is the
only method for defining additional Python module search
directories.
This PEP introduces the ‘-P’ valued option to the python command
as an alternative to PYTHONPATH.
Rationale
On Unix:
python -P $SOMEVALUE
will be equivalent to:
env PYTHONPATH=$SOMEVALUE python
On Windows 2K:
python -P %SOMEVALUE%
will (almost) be equivalent to:
set __PYTHONPATH=%PYTHONPATH% && set PYTHONPATH=%SOMEVALUE%\
&& python && set PYTHONPATH=%__PYTHONPATH%
Other Information
This option is equivalent to the ‘java -classpath’ option.
When to use this option
This option is intended to ease and make more robust the use of
Python in test or build scripts, for instance.
Reference Implementation
A patch implementing this is available from SourceForge:
http://sourceforge.net/tracker/download.php?group_id=5470&atid=305470&file_id=6916&aid=429614
with the patch discussion at:
http://sourceforge.net/tracker/?func=detail&atid=305470&aid=429614&group_id=5470
Copyright
This document has been placed in the public domain.
| Rejected | PEP 271 – Prefixing sys.path by command line option | Standards Track | At present, setting the PYTHONPATH environment variable is the
only method for defining additional Python module search
directories. |
PEP 272 – API for Block Encryption Algorithms v1.0
Author:
A.M. Kuchling <amk at amk.ca>
Status:
Final
Type:
Informational
Created:
18-Sep-2001
Post-History:
17-Apr-2002, 29-May-2002
Table of Contents
Abstract
Introduction
Specification
References
Changes
Acknowledgements
Copyright
Abstract
This document specifies a standard API for secret-key block
encryption algorithms such as DES or Rijndael, making it easier to
switch between different algorithms and implementations.
Introduction
Encryption algorithms transform their input data (called
plaintext) in some way that is dependent on a variable key,
producing ciphertext. The transformation can easily be reversed
if and only if one knows the key. The key is a sequence of bits
chosen from some very large space of possible keys. There are two
classes of encryption algorithms: block ciphers and stream ciphers.
Block ciphers encrypt multibyte inputs of a fixed size (frequently
8 or 16 bytes long), and can be operated in various feedback
modes. The feedback modes supported in this specification are:
Number   Constant   Description
1        MODE_ECB   Electronic Code Book
2        MODE_CBC   Cipher Block Chaining
3        MODE_CFB   Cipher Feedback
5        MODE_OFB   Output Feedback
6        MODE_CTR   Counter
These modes are to be implemented as described in NIST publication
SP 800-38A [1]. Descriptions of the first three feedback modes can
also be found in Bruce Schneier’s book Applied Cryptography [2].
(The numeric value 4 is reserved for MODE_PGP, a variant of CFB
described in RFC 2440: “OpenPGP Message Format”. This mode
isn’t considered important enough to make it worth requiring it
for all block encryption ciphers, though supporting it is a nice
extra feature.)
In a strict formal sense, stream ciphers encrypt data bit-by-bit;
practically, stream ciphers work on a character-by-character
basis. This PEP only aims at specifying an interface for block
ciphers, though stream ciphers can support the interface described
here by fixing ‘block_size’ to 1. Feedback modes also don’t make
sense for stream ciphers, so the only reasonable feedback mode
would be ECB mode.
Specification
Encryption modules can add additional functions, methods, and
attributes beyond those described in this PEP, but all of the
features described in this PEP must be present for a module to
claim compliance with it.
Secret-key encryption modules should define one function:
new(key, mode, [IV], **kwargs)
Returns a ciphering object, using the secret key contained in the
string ‘key’, and using the feedback mode ‘mode’, which must be
one of the constants from the table above.
If ‘mode’ is MODE_CBC or MODE_CFB, ‘IV’ must be provided and must
be a string of the same length as the block size. Not providing a
value of ‘IV’ will result in a ValueError exception being raised.
Depending on the algorithm, a module may support additional
keyword arguments to this function. Some keyword arguments are
specified by this PEP, and modules are free to add additional
keyword arguments. If a value isn’t provided for a given keyword,
a secure default value should be used. For example, if an
algorithm has a selectable number of rounds between 1 and 16, and
1-round encryption is insecure and 8-round encryption is believed
secure, the default value for ‘rounds’ should be 8 or more.
(Module implementors can choose a very slow but secure value, too,
such as 16 in this example. This decision is left up to the
implementor.)
The following table lists keyword arguments defined by this PEP:
Keyword        Meaning
counter        Callable object that returns counter blocks
               (see below; CTR mode only)
rounds         Number of rounds of encryption to use
segment_size   Size of data and ciphertext segments,
               measured in bits (see below; CFB mode only)
The Counter feedback mode requires a sequence of input blocks,
called counters, that are used to produce the output. When ‘mode’
is MODE_CTR, the ‘counter’ keyword argument must be provided, and
its value must be a callable object, such as a function or method.
Successive calls to this callable object must return a sequence of
strings that are of the length ‘block_size’ and that never
repeats. (Appendix B of the NIST publication gives a way to
generate such a sequence, but that’s beyond the scope of this
PEP.)
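For illustration, a counter callable meeting these requirements could be as simple as the sketch below (a naive big-endian counter; this class is not part of the specification). It would be passed as, e.g., new(key, MODE_CTR, counter=Counter(block_size)):
class Counter:
    """Callable returning successive block_size-byte counter blocks."""
    def __init__(self, block_size, value=0):
        self.block_size = block_size
        self.value = value

    def __call__(self):
        n = self.value
        self.value = self.value + 1
        block = []
        for i in range(self.block_size):
            block.insert(0, chr(n & 0xFF))   # big-endian byte order
            n = n >> 8
        return "".join(block)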
The CFB mode operates on segments of the plaintext and ciphertext
that are ‘segment_size’ bits long. Therefore, when using this
mode, the input and output strings must be a multiple of
‘segment_size’ bits in length. ‘segment_size’ must be an integer
between 1 and block_size*8, inclusive. (The factor of 8 comes
from ‘block_size’ being measured in bytes and not in bits). The
default value for this parameter should be block_size*8.
Implementors are allowed to constrain ‘segment_size’ to be a
multiple of 8 for simplicity, but they’re encouraged to support
arbitrary values for generality.
Secret-key encryption modules should define two variables:
block_size
An integer value; the size of the blocks encrypted by this
module, measured in bytes. For all feedback modes, the length
of strings passed to the encrypt() and decrypt() must be a
multiple of the block size.
key_size
An integer value; the size of the keys required by this
module, measured in bytes. If key_size is None, then the
algorithm accepts variable-length keys. This may mean the
module accepts keys of any random length, or that there are a
few different possible lengths, e.g. 16, 24, or 32 bytes. You
cannot pass a key of length 0 (that is, the null string ‘’) as
a variable-length key.
Cipher objects should have two attributes:
block_size
An integer value equal to the size of the blocks encrypted by
this object. For algorithms with a variable block size, this
value is equal to the block size selected for this object.
IV
Contains the initial value which will be used to start a
cipher feedback mode; it will always be a string exactly one
block in length. After encrypting or decrypting a string,
this value is updated to reflect the modified feedback text.
It is read-only, and cannot be assigned a new value.
Cipher objects require the following methods:
decrypt(string)
Decrypts ‘string’, using the key-dependent data in the object
and with the appropriate feedback mode. The string’s length
must be an exact multiple of the algorithm’s block size or, in
CFB mode, of the segment size. Returns a string containing
the plaintext.
encrypt(string)
Encrypts a non-empty string, using the key-dependent data in
the object, and with the appropriate feedback mode. The
string’s length must be an exact multiple of the algorithm’s
block size or, in CFB mode, of the segment size. Returns a
string containing the ciphertext.
Here’s an example, using a module named ‘DES’:
>>> import DES
>>> obj = DES.new('abcdefgh', DES.MODE_ECB)
>>> plaintext = "Guido van Rossum is a space alien."
>>> len(plaintext)
34
>>> obj.encrypt(plaintext)
Traceback (innermost last):
File "<stdin>", line 1, in ?
ValueError: Strings for DES must be a multiple of 8 in length
>>> ciphertext = obj.encrypt(plaintext+'XXXXXX') # Add padding
>>> ciphertext
'\021,\343Nq\214DY\337T\342pA\372\255\311s\210\363,\300j\330\250\312\347\342I\3215w\03561\303dgb/\006'
>>> obj.decrypt(ciphertext)
'Guido van Rossum is a space alien.XXXXXX'
References
[1]
NIST publication SP 800-38A, “Recommendation for Block Cipher
Modes of Operation” (http://csrc.nist.gov/encryption/modes/)
[2]
Applied Cryptography
Changes
2002-04: Removed references to stream ciphers; retitled PEP;
prefixed feedback mode constants with MODE_; removed PGP feedback
mode; added CTR and OFB feedback modes; clarified where numbers
are measured in bytes and where in bits.
2002-09: Clarified the discussion of key length by using
“variable-length keys” instead of “arbitrary-length”.
Acknowledgements
Thanks to the readers of the python-crypto list for their comments on
this PEP.
Copyright
This document has been placed in the public domain.
| Final | PEP 272 – API for Block Encryption Algorithms v1.0 | Informational | This document specifies a standard API for secret-key block
encryption algorithms such as DES or Rijndael, making it easier to
switch between different algorithms and implementations. |
PEP 273 – Import Modules from Zip Archives
Author:
James C. Ahlstrom <jim at interet.com>
Status:
Final
Type:
Standards Track
Created:
11-Oct-2001
Python-Version:
2.3
Post-History:
26-Oct-2001
Table of Contents
Abstract
Note
Specification
Subdirectory Equivalence
Efficiency
zlib
Booting
Directory Imports
Benchmarks
Custom Imports
Implementation
References
Copyright
Abstract
This PEP adds the ability to import Python modules
*.py, *.py[co] and packages from zip archives. The
same code is used to speed up normal directory imports
provided os.listdir is available.
Note
Zip imports were added to Python 2.3, but the final implementation
uses an approach different from the one described in this PEP.
The 2.3 implementation is SourceForge patch #652586 [1], which adds
new import hooks described in PEP 302.
The rest of this PEP is therefore only of historical interest.
Specification
Currently, sys.path is a list of directory names as strings. If
this PEP is implemented, an item of sys.path can be a string
naming a zip file archive. The zip archive can contain a
subdirectory structure to support package imports. The zip
archive satisfies imports exactly as a subdirectory would.
The implementation is in C code in the Python core and works on
all supported Python platforms.
Any files may be present in the zip archive, but only files
*.py and *.py[co] are available for import. Zip import of
dynamic modules (*.pyd, *.so) is disallowed.
Just as sys.path currently has default directory names, a default
zip archive name is added too. Otherwise there is no way to
import all Python library files from an archive.
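For example, a user-visible sketch of the intended behaviour (the
archive path and package names below mirror the hypothetical example
used later in this PEP):
import sys

# Archive laid out like a package directory:
#   /C/D/E/Archive.zip
#       Q/__init__.py
#       Q/R/__init__.py
#       Q/R/modfoo.py
sys.path.append('/C/D/E/Archive.zip')

from Q.R import modfoo    # satisfied from the archive, just as from a directory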
Subdirectory Equivalence
The zip archive must be treated exactly as a subdirectory tree so
we can support package imports based on current and future rules.
All zip data is taken from the Central Directory, the data must be
correct, and brain dead zip files are not accommodated.
Suppose sys.path contains “/A/B/SubDir” and “/C/D/E/Archive.zip”,
and we are trying to import modfoo from the Q package. Then
import.c will generate a list of paths and extensions and will
look for the file. The list of generated paths does not change
for zip imports. Suppose import.c generates the path
“/A/B/SubDir/Q/R/modfoo.pyc”. Then it will also generate the path
“/C/D/E/Archive.zip/Q/R/modfoo.pyc”. Finding the SubDir path is
exactly equivalent to finding “Q/R/modfoo.pyc” in the archive.
Suppose you zip up /A/B/SubDir/* and all its subdirectories. Then
your zip file will satisfy imports just as your subdirectory did.
Well, not quite. You can’t satisfy dynamic modules from a zip
file. Dynamic modules have extensions like .dll, .pyd, and .so.
They are operating system dependent, and probably can’t be loaded
except from a file. It might be possible to extract the dynamic
module from the zip file, write it to a plain file and load it.
But that would mean creating temporary files, and dealing with all
the dynload_*.c, and that’s probably not a good idea.
When trying to import *.pyc, if it is not available then
*.pyo will be used instead. And vice versa when looking for *.pyo.
If neither *.pyc nor *.pyo is available, or if the magic numbers
are invalid, then *.py will be compiled and used to satisfy the
import, but the compiled file will not be saved. Python would
normally write it to the same directory as *.py, but surely we
don’t want to write to the zip file. We could write to the
directory of the zip archive, but that would clutter it up, not
good if it is /usr/bin for example.
Failing to write the compiled files will make zip imports very slow,
and the user will probably not figure out what is wrong. So it
is best to put *.pyc and *.pyo in the archive with the *.py.
Efficiency
The only way to find files in a zip archive is linear search. So
for each zip file in sys.path, we search for its names once, and
put the names plus other relevant data into a static Python
dictionary. The key is the archive name from sys.path joined with
the file name (including any subdirectories) within the archive.
This is exactly the name generated by import.c, and makes lookup
easy.
This same mechanism is used to speed up directory (non-zip) imports.
See below.
zlib
Compressed zip archives require zlib for decompression. Prior to
any other imports, we attempt an import of zlib. Import of
compressed files will fail with a message “missing zlib” unless
zlib is available.
Booting
Python imports site.py itself, and this imports os, nt, ntpath,
stat, and UserDict. It also imports sitecustomize.py which may
import more modules. Zip imports must be available before site.py
is imported.
Just as there are default directories in sys.path, there must be
one or more default zip archives too.
The problem is what the name should be. The name should be linked
with the Python version, so the Python executable can correctly
find its corresponding libraries even when there are multiple
Python versions on the same machine.
We add one name to sys.path. On Unix, the directory is
sys.prefix + "/lib", and the file name is
"python%s%s.zip" % (sys.version[0], sys.version[2]).
So for Python 2.2 and prefix /usr/local, the path
/usr/local/lib/python2.2/ is already on sys.path, and
/usr/local/lib/python22.zip would be added.
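A sketch of that computation in Python (illustrative only; the real
work is done in C at startup):
import sys

# e.g. '/usr/local/lib/python22.zip' for Python 2.2 with prefix /usr/local
zip_name = sys.prefix + "/lib/" + \
           "python%s%s.zip" % (sys.version[0], sys.version[2])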
On Windows, the file is the full path to python22.dll, with
“dll” replaced by “zip”. The zip archive name is always inserted
as the second item in sys.path. The first is the directory of the
main.py (thanks Tim).
Directory Imports
The static Python dictionary used to speed up zip imports can be
used to speed up normal directory imports too. For each item in
sys.path that is not a zip archive, we call os.listdir, and add
the directory contents to the dictionary. Then instead of calling
fopen() in a double loop, we just check the dictionary. This
greatly speeds up imports. If os.listdir doesn’t exist, the
dictionary is not used.
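An illustrative Python sketch of the idea (the actual implementation
is in C; the names used here are hypothetical):
import os

path_cache = {}

def cache_directory(directory):
    # Record every entry of the directory once, so later import probes
    # become a dictionary lookup instead of repeated fopen() calls.
    try:
        names = os.listdir(directory)
    except OSError:
        return
    for name in names:
        path_cache[os.path.join(directory, name)] = True

def probe(path):
    return path in path_cache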
Benchmarks
Case  Original 2.2a3       Using os.listdir     Zip Uncomp   Zip Compr

1     3.2 2.5 3.2->1.02    2.3 2.5 2.3->0.87    1.66->0.93   1.5->1.07
2     2.8 3.9 3.0->1.32    Same as Case 1.
3     5.7 5.7 5.7->5.7     2.1 2.1 2.1->1.8     1.25->0.99   1.19->1.13
4     9.4 9.4 9.3->9.35    Same as Case 3.
Case 1: Local drive C:, sys.path has its default value.
Case 2: Local drive C:, directory with files is at the end of sys.path.
Case 3: Network drive, sys.path has its default value.
Case 4: Network drive, directory with files is at the end of sys.path.
Benchmarks were performed on a Pentium 4 clone, 1.4 GHz, 256 Meg.
The machine was running Windows 2000 with a Linux/Samba network server.
Times are in seconds, and are the time to import about 100 Lib modules.
Case 2 and 4 have the “correct” directory moved to the end of sys.path.
“Uncomp” means uncompressed zip archive, “Compr” means compressed.
Initial times are after a re-boot of the system; the time after
“->” is the time after repeated runs. Times to import from C:
after a re-boot are rather highly variable for the “Original” case,
but are more realistic.
Custom Imports
The logic demonstrates the ability to import using default searching
until a needed Python module (in this case, os) becomes available.
This can be used to bootstrap custom importers. For example, if
“importer()” in __init__.py exists, then it could be used for imports.
The “importer()” can freely import os and other modules, and these
will be satisfied from the default mechanism. This PEP does not
define any custom importers, and this note is for information only.
Implementation
A C implementation is available as SourceForge patch 492105.
Superseded by patch 652586 and current CVS. [2]
A newer version (updated for recent CVS by Paul Moore) is 645650.
Superseded by patch 652586 and current CVS. [3]
A competing implementation by Just van Rossum is 652586, which is
the basis for the final implementation of PEP 302. PEP 273 has
been implemented using PEP 302’s import hooks. [1]
References
[1] (1, 2)
Just van Rossum, New import hooks + Import from Zip files
https://bugs.python.org/issue652586
[2]
Import from Zip archive, James C. Ahlstrom
https://bugs.python.org/issue492105
[3]
Import from Zip Archive, Paul Moore
https://bugs.python.org/issue645650
Copyright
This document has been placed in the public domain.
| Final | PEP 273 – Import Modules from Zip Archives | Standards Track | This PEP adds the ability to import Python modules
*.py, *.py[co] and packages from zip archives. The
same code is used to speed up normal directory imports
provided os.listdir is available. |
PEP 274 – Dict Comprehensions
Author:
Barry Warsaw <barry at python.org>
Status:
Final
Type:
Standards Track
Created:
25-Oct-2001
Python-Version:
2.7, 3.0
Post-History:
29-Oct-2001
Table of Contents
Abstract
Resolution
Proposed Solution
Rationale
Semantics
Examples
Implementation
Copyright
Abstract
PEP 202 introduces a syntactical extension to Python called the
“list comprehension”. This PEP proposes a similar syntactical
extension called the “dictionary comprehension” or “dict
comprehension” for short. You can use dict comprehensions in ways
very similar to list comprehensions, except that they produce
Python dictionary objects instead of list objects.
Resolution
This PEP was originally written for inclusion in Python 2.3. It
was withdrawn after observation that substantially all of its
benefits were subsumed by generator expressions coupled with the
dict() constructor.
However, Python 2.7 and 3.0 introduce this exact feature, as well
as the closely related set comprehensions. On 2012-04-09, the PEP
was changed to reflect this reality by updating its Status to
Accepted, and updating the Python-Version field. The Open
Questions section was also removed since these have been long
resolved by the current implementation.
Proposed Solution
Dict comprehensions are just like list comprehensions, except that
you group the expression using curly braces instead of square
brackets. Also, the left part before the for keyword expresses
both a key and a value, separated by a colon. The notation is
specifically designed to remind you of list comprehensions as
applied to dictionaries.
Rationale
There are times when you have some data arranged as a sequence of
length-2 sequences, and you want to turn that into a dictionary.
In Python 2.2, the dict() constructor accepts an argument that is
a sequence of length-2 sequences, used as (key, value) pairs to
initialize a new dictionary object.
However, the act of turning some data into a sequence of length-2
sequences can be inconvenient or inefficient from a memory or
performance standpoint. Also, for some common operations, such as
turning a list of things into a set of things for quick duplicate
removal or set inclusion tests, a better syntax can help code
clarity.
As with list comprehensions, an explicit for loop can always be
used (and in fact was the only way to do it in earlier versions of
Python). But as with list comprehensions, dict comprehensions can
provide a more syntactically succinct idiom than the traditional
for loop.
Semantics
The semantics of dict comprehensions can actually be demonstrated
in stock Python 2.2, by passing a list comprehension to the
built-in dictionary constructor:
>>> dict([(i, chr(65+i)) for i in range(4)])
is semantically equivalent to:
>>> {i : chr(65+i) for i in range(4)}
The dictionary constructor approach has two distinct disadvantages
from the proposed syntax though. First, it isn’t as legible as a
dict comprehension. Second, it forces the programmer to create an
in-core list object first, which could be expensive.
Examples
>>> print {i : chr(65+i) for i in range(4)}
{0 : 'A', 1 : 'B', 2 : 'C', 3 : 'D'}
>>> print {k : v for k, v in someDict.iteritems()} == someDict.copy()
1
>>> print {x.lower() : 1 for x in list_of_email_addrs}
{'[email protected]' : 1, '[email protected]' : 1, '[email protected]' : 1}
>>> def invert(d):
... return {v : k for k, v in d.iteritems()}
...
>>> d = {0 : 'A', 1 : 'B', 2 : 'C', 3 : 'D'}
>>> print invert(d)
{'A' : 0, 'B' : 1, 'C' : 2, 'D' : 3}
>>> {(k, v): k+v for k in range(4) for v in range(4)}
... {(3, 3): 6, (3, 2): 5, (3, 1): 4, (0, 1): 1, (2, 1): 3,
(0, 2): 2, (3, 0): 3, (0, 3): 3, (1, 1): 2, (1, 0): 1,
(0, 0): 0, (1, 2): 3, (2, 0): 2, (1, 3): 4, (2, 2): 4, (2, 3): 5}
Implementation
All implementation details were resolved in the Python 2.7 and 3.0
time-frame.
Copyright
This document has been placed in the public domain.
| Final | PEP 274 – Dict Comprehensions | Standards Track | PEP 202 introduces a syntactical extension to Python called the
“list comprehension”. This PEP proposes a similar syntactical
extension called the “dictionary comprehension” or “dict
comprehension” for short. You can use dict comprehensions in ways
very similar to list comprehensions, except that they produce
Python dictionary objects instead of list objects. |
PEP 275 – Switching on Multiple Values
Author:
Marc-André Lemburg <mal at lemburg.com>
Status:
Rejected
Type:
Standards Track
Created:
10-Nov-2001
Python-Version:
2.6
Post-History:
Table of Contents
Rejection Notice
Abstract
Problem
Proposed Solutions
Solution 1: Optimizing if-elif-else
Solution 2: Adding a switch statement to Python
New Syntax
Implementation
Issues
Examples
Scope
Credits
References
Copyright
Rejection Notice
A similar PEP for Python 3000, PEP 3103, was already rejected,
so this proposal has no chance of being accepted either.
Abstract
This PEP proposes strategies to enhance Python’s performance
with respect to handling switching on a single variable having
one of multiple possible values.
Problem
Up to Python 2.5, the typical way of writing multi-value switches
has been to use long switch constructs of the following type:
if x == 'first state':
...
elif x == 'second state':
...
elif x == 'third state':
...
elif x == 'fourth state':
...
else:
# default handling
...
This works fine for short switch constructs, since the overhead of
repeated loading of a local (the variable x in this case) and
comparing it to some constant is low (it has a complexity of O(n)
on average). However, when using such a construct to write a state
machine such as is needed for writing parsers the number of
possible states can easily reach 10 or more cases.
The current solution to this problem lies in using a dispatch
table to find the case implementing method to execute depending on
the value of the switch variable (this can be tuned to have a
complexity of O(1) on average, e.g. by using perfect hash
tables). This works well for state machines which require complex
and lengthy processing in the different case methods. It does not
perform well for ones which only process one or two instructions
per case, e.g.
def handle_data(self, data):
self.stack.append(data)
A nice example of this is the state machine implemented in
pickle.py which is used to serialize Python objects. Other
prominent cases include XML SAX parsers and Internet protocol
handlers.
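For illustration, a typical dispatch-table sketch looks like the
following (the class and state names are hypothetical):
class Parser:
    # A dictionary maps the switch value to the method implementing
    # that case, giving roughly O(1) dispatch regardless of the
    # number of states.
    def __init__(self):
        self.stack = []

    def handle_data(self, data):
        self.stack.append(data)

    def handle_end(self, data):
        self.stack.pop()

    dispatch = {'data': handle_data, 'end': handle_end}

    def feed(self, state, data):
        handler = self.dispatch.get(state)
        if handler is not None:
            handler(self, data)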
Proposed Solutions
This PEP proposes two different but not necessarily conflicting
solutions:
Adding an optimization to the Python compiler and VM
which detects the above if-elif-else construct and
generates special opcodes for it which use a read-only
dictionary for storing jump offsets.
Adding new syntax to Python which mimics the C style
switch statement.
The first solution has the benefit of not relying on adding new
keywords to the language, while the second looks cleaner. Both
involve some run-time overhead to assure that the switching
variable is immutable and hashable.
Both solutions use a dictionary lookup to find the right
jump location, so they both share the same problem space in
terms of requiring that both the switch variable and the
constants need to be compatible to the dictionary implementation
(hashable, comparable, a==b => hash(a)==hash(b)).
Solution 1: Optimizing if-elif-else
Implementation:
It should be possible for the compiler to detect an
if-elif-else construct which has the following signature:
if x == 'first':...
elif x == 'second':...
else:...
i.e. the left hand side always references the same variable,
the right hand side a hashable immutable builtin type. The
right hand sides need not be all of the same type, but they
should be comparable to the type of the left hand switch
variable.
The compiler could then setup a read-only (perfect) hash
table, store it in the constants and add an opcode SWITCH in
front of the standard if-elif-else byte code stream which
triggers the following run-time behaviour:
At runtime, SWITCH would check x for being one of the
well-known immutable types (strings, unicode, numbers) and
use the hash table for finding the right opcode snippet. If
this condition is not met, the interpreter should revert to
the standard if-elif-else processing by simply skipping the
SWITCH opcode and proceeding with the usual if-elif-else byte
code stream.
Issues:
The new optimization should not change the current Python
semantics (by reducing the number of __cmp__ calls and adding
__hash__ calls in if-elif-else constructs which are affected
by the optimization). To assure this, switching can only
safely be implemented either if a “from __future__” style
flag is used, or the switching variable is one of the builtin
immutable types: int, float, string, unicode, etc. (not
subtypes, since it’s not clear whether these are still
immutable or not)
To prevent post-modifications of the jump-table dictionary
(which could be used to reach protected code), the jump-table
will have to be a read-only type (e.g. a read-only
dictionary).
The optimization should only be used for if-elif-else
constructs which have a minimum number of n cases (where n is
a number which has yet to be defined depending on performance
tests).
Solution 2: Adding a switch statement to Python
New Syntax
switch EXPR:
case CONSTANT:
SUITE
case CONSTANT:
SUITE
...
else:
SUITE
(modulo indentation variations)
The “else” part is optional. If no else part is given and
none of the defined cases matches, no action is taken and
the switch statement is ignored. This is in line with the
current if-behaviour. A user who wants to signal this
situation using an exception can define an else-branch
which then implements the intended action.
Note that the constants need not be all of the same type, but
they should be comparable to the type of the switch variable.
Implementation
The compiler would have to compile this into byte code
similar to this:
def whatis(x):
switch(x):
case 'one':
print '1'
case 'two':
print '2'
case 'three':
print '3'
else:
print "D'oh!"
into (omitting POP_TOP’s and SET_LINENO’s):
6 LOAD_FAST 0 (x)
9 LOAD_CONST 1 (switch-table-1)
12 SWITCH 26 (to 38)
14 LOAD_CONST 2 ('1')
17 PRINT_ITEM
18 PRINT_NEWLINE
19 JUMP 43
22 LOAD_CONST 3 ('2')
25 PRINT_ITEM
26 PRINT_NEWLINE
27 JUMP 43
30 LOAD_CONST 4 ('3')
33 PRINT_ITEM
34 PRINT_NEWLINE
35 JUMP 43
38 LOAD_CONST 5 ("D'oh!")
41 PRINT_ITEM
42 PRINT_NEWLINE
>>43 LOAD_CONST 0 (None)
46 RETURN_VALUE
Where the ‘SWITCH’ opcode would jump to 14, 22, 30 or 38
depending on ‘x’.
Thomas Wouters has written a patch which demonstrates the
above. You can download it from [1].
Issues
The switch statement should not implement fall-through
behaviour (as does the switch statement in C). Each case
defines a complete and independent suite; much like in an
if-elif-else statement. This also enables using break in
switch statements inside loops.
If the interpreter finds that the switch variable x is
not hashable, it should raise a TypeError at run-time
pointing out the problem.
There have been other proposals for the syntax which reuse
existing keywords and avoid adding two new ones (“switch” and
“case”). Others have argued that the keywords should use new
terms to avoid confusion with the C keywords of the same name
but slightly different semantics (e.g. fall-through without
break). Some of the proposed variants:
case EXPR:
of CONSTANT:
SUITE
of CONSTANT:
SUITE
else:
SUITE
case EXPR:
if CONSTANT:
SUITE
if CONSTANT:
SUITE
else:
SUITE
when EXPR:
in CONSTANT_TUPLE:
SUITE
in CONSTANT_TUPLE:
SUITE
...
else:
SUITE
The switch statement could be extended to allow multiple
values for one section (e.g. case ‘a’, ‘b’, ‘c’: …). Another
proposed extension would allow ranges of values (e.g. case
10..14: …). These should probably be postponed, but already
kept in mind when designing and implementing a first version.
Examples
The following examples all use a new syntax as proposed by
solution 2. However, all of these examples would work with
solution 1 as well.
switch EXPR: switch x:
case CONSTANT: case "first":
SUITE print x
case CONSTANT: case "second":
SUITE x = x**2
... print x
else: else:
SUITE print "whoops!"
case EXPR: case x:
of CONSTANT: of "first":
SUITE print x
of CONSTANT: of "second":
SUITE print x**2
else: else:
SUITE print "whoops!"
case EXPR: case state:
if CONSTANT: if "first":
SUITE state = "second"
if CONSTANT: if "second":
SUITE state = "third"
else: else:
SUITE state = "first"
when EXPR: when state:
in CONSTANT_TUPLE: in ("first", "second"):
SUITE print state
in CONSTANT_TUPLE: state = next_state(state)
SUITE in ("seventh",):
... print "done"
else: break # out of loop!
SUITE else:
print "middle state"
state = next_state(state)
Here’s another nice application found by Jack Jansen (switching
on argument types):
switch type(x).__name__:
case 'int':
SUITE
case 'string':
SUITE
Scope
XXX Explain “from __future__ import switch”
Credits
Martin von Löwis (issues with the optimization idea)
Thomas Wouters (switch statement + byte code compiler example)
Skip Montanaro (dispatching ideas, examples)
Donald Beaudry (switch syntax)
Greg Ewing (switch syntax)
Jack Jansen (type switching examples)
References
[1]
https://sourceforge.net/tracker/index.php?func=detail&aid=481118&group_id=5470&atid=305470
Copyright
This document has been placed in the public domain.
| Rejected | PEP 275 – Switching on Multiple Values | Standards Track | This PEP proposes strategies to enhance Python’s performance
with respect to handling switching on a single variable having
one of multiple possible values. |
PEP 276 – Simple Iterator for ints
Author:
Jim Althoff <james_althoff at i2.com>
Status:
Rejected
Type:
Standards Track
Created:
12-Nov-2001
Python-Version:
2.3
Post-History:
Table of Contents
Abstract
BDFL Pronouncement
Specification
Rationale
Backwards Compatibility
Issues
Implementation
Copyright
Abstract
Python 2.1 added new functionality to support iterators (PEP 234).
Iterators have proven to be useful and convenient in many coding
situations. It is noted that the implementation of Python’s
for-loop control structure uses the iterator protocol as of
release 2.1. It is also noted that Python provides iterators for
the following builtin types: lists, tuples, dictionaries, strings,
and files. This PEP proposes the addition of an iterator for the
builtin type int (types.IntType). Such an iterator would simplify
the coding of certain for-loops in Python.
BDFL Pronouncement
This PEP was rejected on 17 June 2005 with a note to python-dev.
Much of the original need was met by the enumerate() function which
was accepted for Python 2.3.
Also, the proposal both allowed and encouraged misuses such as:
>>> for i in 3: print i
0
1
2
Likewise, it was not helpful that the proposal would disable the
syntax error in statements like:
x, = 1
Specification
Define an iterator for types.IntType (i.e., the builtin type
“int”) that is returned from the builtin function “iter” when
called with an instance of types.IntType as the argument.
The returned iterator has the following behavior:
Assume that object i is an instance of types.IntType (the
builtin type int) and that i > 0
iter(i) returns an iterator object
said iterator object iterates through the sequence of ints
0,1,2,…,i-1
Example:
iter(5) returns an iterator object that iterates through the
sequence of ints 0,1,2,3,4
if i <= 0, iter(i) returns an “empty” iterator, i.e., one that
throws StopIteration upon the first call of its “next” method
In other words, the conditions and semantics of said iterator is
consistent with the conditions and semantics of the range() and
xrange() functions.
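Expressed as a Python generator, the proposed iterator behaves
roughly like this sketch (illustrative only, not the proposed
implementation):
def int_iter(i):
    # Yields 0, 1, ..., i-1, and nothing at all for i <= 0,
    # mirroring range() and xrange().
    n = 0
    while n < i:
        yield n
        n += 1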
Note that the sequence 0,1,2,…,i-1 associated with the int i is
considered “natural” in the context of Python programming because
it is consistent with the builtin indexing protocol of sequences
in Python. Python lists and tuples, for example, are indexed
starting at 0 and ending at len(object)-1 (when using positive
indices). In other words, such objects are indexed with the
sequence 0,1,2,…,len(object)-1
Rationale
A common programming idiom is to take a collection of objects and
apply some operation to each item in the collection in some
established sequential order. Python provides the “for in”
looping control structure for handling this common idiom. Cases
arise, however, where it is necessary (or more convenient) to
access each item in an “indexed” collection by iterating through
each index and accessing each item in the collection using the
corresponding index.
For example, one might have a two-dimensional “table” object where one
requires the application of some operation to the first column of
each row in the table. Depending on the implementation of the table
it might not be possible to access first each row and then each
column as individual objects. It might, rather, be possible to
access a cell in the table using a row index and a column index.
In such a case it is necessary to use an idiom where one iterates
through a sequence of indices (indexes) in order to access the
desired items in the table. (Note that the commonly used
DefaultTableModel class in Java-Swing-Jython has this very protocol).
Another common example is where one needs to process two or more
collections in parallel. Another example is where one needs to
access, say, every second item in a collection.
There are many other examples where access to items in a
collection is facilitated by a computation on an index thus
necessitating access to the indices rather than direct access to
the items themselves.
Let’s call this idiom the “indexed for-loop” idiom. Some
programming languages provide builtin syntax for handling this
idiom. In Python the common convention for implementing the
indexed for-loop idiom is to use the builtin range() or xrange()
function to generate a sequence of indices as in, for example:
for rowcount in range(table.getRowCount()):
print table.getValueAt(rowcount, 0)
or
for rowcount in xrange(table.getRowCount()):
print table.getValueAt(rowcount, 0)
From time to time there are discussions in the Python community
about the indexed for-loop idiom. It is sometimes argued that the
need for using the range() or xrange() function for this design
idiom is:
Not obvious (to new-to-Python programmers),
Error prone (easy to forget, even for experienced Python
programmers)
Confusing and distracting for those who feel compelled to understand
the differences and recommended usage of xrange() vis-a-vis range()
Unwieldy, especially when combined with the len() function,
i.e., xrange(len(sequence))
Not as convenient as equivalent mechanisms in other languages,
Annoying, a “wart”, etc.
And from time to time proposals are put forth for ways in which
Python could provide a better mechanism for this idiom. Recent
examples include PEP 204, “Range Literals”, and PEP 212, “Loop
Counter Iteration”.
Most often, such proposal include changes to Python’s syntax and
other “heavyweight” changes.
Part of the difficulty here is that advocating new syntax implies
a comprehensive solution for “general indexing” that has to
include aspects like:
starting index value
ending index value
step value
open intervals versus closed intervals versus half opened intervals
Finding a new syntax that is comprehensive, simple, general,
Pythonic, appealing to many, easy to implement, not in conflict
with existing structures, not excessively overloading of existing
structures, etc. has proven to be more difficult than one might
anticipate.
The proposal outlined in this PEP tries to address the problem by
suggesting a simple “lightweight” solution that helps the most
common case by using a proven mechanism that is already available
(as of Python 2.1): namely, iterators.
Because for-loops already use “iterator” protocol as of Python
2.1, adding an iterator for types.IntType as proposed in this PEP
would enable by default the following shortcut for the indexed
for-loop idiom:
for rowcount in table.getRowCount():
print table.getValueAt(rowcount, 0)
The following benefits for this approach vis-a-vis the current
mechanism of using the range() or xrange() functions are claimed
to be:
Simpler,
Less cluttered,
Focuses on the problem at hand without the need to resort to
secondary implementation-oriented functions (range() and
xrange())
And compared to other proposals for change:
Requires no new syntax
Requires no new keywords
Takes advantage of the new and well-established iterator mechanism
And generally:
Is consistent with iterator-based “convenience” changes already
included (as of Python 2.1) for other builtin types such as:
lists, tuples, dictionaries, strings, and files.
Backwards Compatibility
The proposed mechanism is generally backwards compatible as it
calls for neither new syntax nor new keywords. All existing,
valid Python programs should continue to work unmodified.
However, this proposal is not perfectly backwards compatible in
the sense that certain statements that are currently invalid
would, under the current proposal, become valid.
Tim Peters has pointed out two such examples:
The common case where one forgets to include range() or
xrange(), for example:
for rowcount in table.getRowCount():
print table.getValueAt(rowcount, 0)
in Python 2.2 raises a TypeError exception.
Under the current proposal, the above statement would be valid
and would work as (presumably) intended. Presumably, this is a
good thing.
As noted by Tim, this is the common case of the “forgotten
range” mistake (which one currently corrects by adding a call
to range() or xrange()).
The (hopefully) very uncommon case where one makes a typing
mistake when using tuple unpacking. For example:
x, = 1
in Python 2.2 raises a TypeError exception.
Under the current proposal, the above statement would be valid
and would set x to 0. The PEP author has no data as to how
common this typing error is nor how difficult it would be to
catch such an error under the current proposal. He imagines
that it does not occur frequently and that it would be
relatively easy to correct should it happen.
Issues
Extensive discussions concerning PEP 276 on the Python interest
mailing list suggests a range of opinions: some in favor, some
neutral, some against. Those in favor tend to agree with the
claims above of the usefulness, convenience, ease of learning,
and simplicity of a simple iterator for integers.
Issues with PEP 276 include:
Using range/xrange is fine as is.
Response: Some posters feel this way. Others disagree.
Some feel that iterating over the sequence “0, 1, 2, …, n-1”
for an integer n is not intuitive. “for i in 5:” is considered
(by some) to be “non-obvious”, for example. Some dislike this
usage because it doesn’t have “the right feel”. Some dislike it
because they believe that this type of usage forces one to view
integers as a sequences and this seems wrong to them. Some
dislike it because they prefer to view for-loops as dealing
with explicit sequences rather than with arbitrary iterators.
Response: Some like the proposed idiom and see it as simple,
elegant, easy to learn, and easy to use. Some are neutral on
this issue. Others, as noted, dislike it.
Is it obvious that iter(5) maps to the sequence 0,1,2,3,4?
Response: Given, as noted above, that Python has a strong
convention for indexing sequences starting at 0 and stopping at
(inclusively) the index whose value is one less than the length
of the sequence, it is argued that the proposed sequence is
reasonably intuitive to the Python programmer while being useful
and practical. More importantly, it is argued that once learned
this convention is very easy to remember. Note that the doc
string for the range function makes a reference to the
natural and useful association between range(n) and the indices
for a list whose length is n.
Possible ambiguity
for i in 10: print i
might be mistaken for
for i in (10,): print i
Response: This is exactly the same situation with strings in
current Python (replace 10 with ‘spam’ in the above, for
example).
Too general: in the newest releases of Python there are
contexts – as with for-loops – where iterators are called
implicitly. Some fear that having an iterator invoked for
an integer in one of the context (excluding for-loops) might
lead to unexpected behavior and bugs. The “x, = 1” example
noted above is a case in point.
Response: From the author’s perspective the examples of the
above that were identified in the PEP 276 discussions did
not appear to be ones that would be accidentally misused
in ways that would lead to subtle and hard-to-detect errors.
In addition, it seems that there is a way to deal with this
issue by using a variation of what is outlined in the
specification section of this proposal. Instead of adding
an __iter__ method to class int, change the for-loop handling
code to convert (in essence) from
for i in n: # when isinstance(n,int) is 1
to
for i in xrange(n):
This approach gives the same results in a for-loop as an
__iter__ method would but would prevent iteration on integer
values in any other context. Lists and tuples, for example,
don’t have __iter__ and are handled with special code.
Integer values would be one more special case.
“i in n” seems very unnatural.
Response: Some feel that “i in len(mylist)” would be easily
understandable and useful. Some don’t like it, particularly
when a literal is used as in “i in 5”. If the variant
mentioned in the response to the previous issue is implemented,
this issue is moot. If not, then one could also address this
issue by defining a __contains__ method in class int that would
always raise a TypeError. This would then make the behavior of
“i in n” identical to that of current Python.
Might dissuade newbies from using the indexed for-loop idiom when
the standard “for item in collection:” idiom is clearly better.
Response: The standard idiom is so nice when it fits that it
needs neither extra “carrot” nor “stick”. On the other hand,
one does notice cases of overuse/misuse of the standard idiom
(due, most likely, to the awkwardness of the indexed for-loop
idiom), as in:
for item in sequence:
print sequence.index(item)
Why not propose even bigger changes?
The majority of disagreement with PEP 276 came from those who
favor much larger changes to Python to address the more general
problem of specifying a sequence of integers where such
a specification is general enough to handle the starting value,
ending value, and stepping value of the sequence and also
addresses variations of open, closed, and half-open (half-closed)
integer intervals. Many suggestions of such were discussed.
These include:
adding Haskell-like notation for specifying a sequence of
integers in a literal list,
various uses of slicing notation to specify sequences,
changes to the syntax of for-in loops to allow the use of
relational operators in the loop header,
creation of an integer-interval class along with methods that
overload relational operators or division operators
to provide “slicing” on integer-interval objects,
and more.
It should be noted that there was much debate but not an
overwhelming consensus for any of these larger-scale suggestions.
Clearly, PEP 276 does not propose such a large-scale change
and instead focuses on a specific problem area. Towards the
end of the discussion period, several posters expressed favor
for the narrow focus and simplicity of PEP 276 vis-a-vis the more
ambitious suggestions that were advanced. There did appear to be
consensus for the need for a PEP for any such larger-scale,
alternative suggestion. In light of this recognition, details of
the various alternative suggestions are not discussed here further.
Implementation
An implementation is not available at this time but is expected
to be straightforward. The author has implemented a subclass of
int with an __iter__ method (written in Python) as a means to test
out the ideas in this proposal, however.
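A minimal sketch of the kind of subclass described (not the author’s
actual test code, which is not included in this PEP) might look like:
class iint(int):
    # An int subclass whose instances iterate over 0 .. self-1,
    # mirroring range()/xrange().
    def __iter__(self):
        return iter(xrange(self))

for i in iint(3):
    print i          # prints 0, 1, 2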
Copyright
This document has been placed in the public domain.
| Rejected | PEP 276 – Simple Iterator for ints | Standards Track | Python 2.1 added new functionality to support iterators (PEP 234).
Iterators have proven to be useful and convenient in many coding
situations. It is noted that the implementation of Python’s
for-loop control structure uses the iterator protocol as of
release 2.1. It is also noted that Python provides iterators for
the following builtin types: lists, tuples, dictionaries, strings,
and files. This PEP proposes the addition of an iterator for the
builtin type int (types.IntType). Such an iterator would simplify
the coding of certain for-loops in Python. |
PEP 277 – Unicode file name support for Windows NT
Author:
Neil Hodgson <neilh at scintilla.org>
Status:
Final
Type:
Standards Track
Created:
11-Jan-2002
Python-Version:
2.3
Post-History:
Table of Contents
Abstract
Rationale
Specification
Restrictions
Reference Implementation
References
Copyright
Abstract
This PEP discusses supporting access to all files possible on
Windows NT by passing Unicode file names directly to the system’s
wide-character functions.
Rationale
Python 2.2 on Win32 platforms converts Unicode file names passed
to open and to functions in the os module into the ‘mbcs’ encoding
before passing the result to the operating system. This is often
successful in the common case where the script is operating with
the locale set to the same value as when the file was created.
Most machines are set up as one locale and rarely if ever changed
from this locale. For some users, locale is changed more often
and on servers there are often files saved by users using
different locales.
On Windows NT and descendent operating systems, including Windows
2000 and Windows XP, wide-character APIs are available that
provide direct access to all file names, including those that are
not representable using the current locale. The purpose of this
proposal is to provide access to these wide-character APIs through
the standard Python file object and posix module and so provide
access to all files on Windows NT.
Specification
On Windows platforms which provide wide-character file APIs, when
Unicode arguments are provided to file APIs, wide-character calls
are made instead of the standard C library and posix calls.
The Python file object is extended to use a Unicode file name
argument directly rather than converting it. This affects the
file object constructor file(filename[, mode[, bufsize]]) and also
the open function which is an alias of this constructor. When a
Unicode filename argument is used here then the name attribute of
the file object will be Unicode. The representation of a file
object, repr(f) will display Unicode file names as an escaped
string in a similar manner to the representation of Unicode
strings.
The posix module contains functions that take file or directory
names: chdir, listdir, mkdir, open, remove, rename,
rmdir, stat, and _getfullpathname. These will use Unicode
arguments directly rather than converting them. For the rename function, this
behaviour is triggered when either of the arguments is Unicode and
the other argument converted to Unicode using the default
encoding.
The listdir function currently returns a list of strings. Under
this proposal, it will return a list of Unicode strings when its
path argument is Unicode.
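For illustration, a sketch of the intended usage on Windows NT (the
file name chosen here is arbitrary):
import os

name = u'example-\u20ac.txt'      # contains a character outside many locales
f = open(name, 'w')               # wide-character API used; name kept exact
f.write('data')
f.close()
print os.listdir(u'.')            # returns a list of Unicode strings
os.remove(name)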
Restrictions
On the consumer Windows operating systems, Windows 95, Windows 98,
and Windows ME, there are no wide-character file APIs so behaviour
is unchanged under this proposal. It may be possible in the
future to extend this proposal to cover these operating systems as
the VFAT-32 file system used by them does support Unicode file
names but access is difficult and so implementing this would
require much work. The “Microsoft Layer for Unicode” could be a
starting point for implementing this.
Python can be compiled with the size of Unicode characters set to
4 bytes rather than 2 by defining PY_UNICODE_TYPE to be a 4 byte
type and Py_UNICODE_SIZE to be 4. As the Windows API does not
accept 4 byte characters, the features described in this proposal
will not work in this mode so the implementation falls back to the
current ‘mbcs’ encoding technique. This restriction could be lifted
in the future by performing extra conversions using
PyUnicode_AsWideChar but for now that would add too much
complexity for a very rarely used feature.
Reference Implementation
The implementation is available at [2].
References
[1] Microsoft Windows APIs
https://msdn.microsoft.com/
[2]
https://github.com/python/cpython/issues/37017
Copyright
This document has been placed in the public domain.
| Final | PEP 277 – Unicode file name support for Windows NT | Standards Track | This PEP discusses supporting access to all files possible on
Windows NT by passing Unicode file names directly to the system’s
wide-character functions. |
PEP 278 – Universal Newline Support
Author:
Jack Jansen <jack at cwi.nl>
Status:
Final
Type:
Standards Track
Created:
14-Jan-2002
Python-Version:
2.3
Post-History:
Table of Contents
Abstract
Specification
Rationale
Reference Implementation
References
Copyright
Abstract
This PEP discusses a way in which Python can support I/O on files
which have a newline format that is not the native format on the
platform, so that Python on each platform can read and import
files with CR (Macintosh), LF (Unix) or CR LF (Windows) line
endings.
It is more and more common to come across files that have an end
of line that does not match the standard on the current platform:
files downloaded over the net, remotely mounted filesystems on a
different platform, Mac OS X with its double standard of Mac and
Unix line endings, etc.
Many tools such as editors and compilers already handle this
gracefully, it would be good if Python did so too.
Specification
Universal newline support is enabled by default,
but can be disabled during the configure of Python.
In a Python with universal newline support the feature is
automatically enabled for all import statements and execfile()
calls. There is no special support for eval() or exec.
In a Python with universal newline support open() the mode
parameter can also be “U”, meaning “open for input as a text file
with universal newline interpretation”. Mode “rU” is also allowed,
for symmetry with “rb”. Mode “U” cannot be
combined with other mode flags such as “+”. Any line ending in the
input file will be seen as a '\n' in Python, so little other code has
to change to handle universal newlines.
Conversion of newlines happens in all calls that read data: read(),
readline(), readlines(), etc.
There is no special support for output to file with a different
newline convention, and so mode “wU” is also illegal.
A file object that has been opened in universal newline mode gets
a new attribute “newlines” which reflects the newline convention
used in the file. The value for this attribute is one of None (no
newline read yet), "\r", "\n", "\r\n" or a tuple containing all the
newline types seen.
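For illustration, a sketch of typical usage (the file name is
arbitrary):
f = open('mixed_endings.txt', 'U')   # text with CR, LF or CRLF endings
data = f.read()                      # every line ending appears as '\n'
print f.newlines                     # None, '\r', '\n', '\r\n' or a tuple
f.close()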
Rationale
Universal newline support is implemented in C, not in Python.
This is done because we want files with a foreign newline
convention to be import-able, so a Python Lib directory can be
shared over a remote file system connection, or between MacPython
and Unix-Python on Mac OS X. For this to be feasible the
universal newline convention needs to have a reasonably small
impact on performance, which means a Python implementation is not
an option as it would bog down all imports. And because of files
with multiple newline conventions, which Visual C++ and other
Windows tools will happily produce, doing a quick check for the
newlines used in a file (handing off the import to C code if a
platform-local newline is seen) will not work. Finally, a C
implementation also allows tracebacks and such (which open the
Python source module) to be handled easily.
There is no output implementation of universal newlines, Python
programs are expected to handle this by themselves or write files
with platform-local convention otherwise. The reason for this is
that input is the difficult case, outputting different newlines to
a file is already easy enough in Python.
Also, an output implementation would be much more difficult than an
input implementation, surprisingly: a lot of output is done through
PyXXX_Print() methods, and at this point the file object is not
available anymore, only a FILE *. So, an output implementation would
need to somehow go from the FILE* to the file object, because that
is where the current newline delimiter is stored.
The input implementation has no such problem: there are no cases in
the Python source tree where files are partially read from C,
partially from Python, and such cases are expected to be rare in
extension modules. If such cases exist the only problem is that the
newlines attribute of the file object is not updated during the
fread() or fgets() calls that are done direct from C.
A partial output implementation, where strings passed to fp.write()
would be converted to use fp.newlines as their line terminator but
all other output would not is far too surprising, in my view.
Because there is no output support for universal newlines there is
also no support for a mode “rU+”: the surprise factor of the
previous paragraph would hold to an even stronger degree.
There is no support for universal newlines in strings passed to
eval() or exec. It is envisioned that such strings always have the
standard \n line feed, if the strings come from a file that file can
be read with universal newlines.
I think there are no special issues with unicode. utf-16 shouldn’t
pose any new problems, as such files need to be opened in binary
mode anyway. Interaction with utf-8 is fine too: values 0x0a and 0x0d
cannot occur as part of a multibyte sequence.
Universal newline files should work fine with iterators and
xreadlines() as these eventually call the normal file
readline/readlines methods.
While universal newlines are automatically enabled for import they
are not for opening, where you have to specifically say open(...,
"U"). This is open to debate, but here are a few reasons for this
design:
Compatibility. Programs which already do their own
interpretation of \r\n in text files would break. Examples of such
programs would be editors which warn you when you open a file with
a different newline convention. If universal newlines was made the
default such an editor would silently convert your line endings to
the local convention on save. Programs which open binary files as
text files on Unix would also break (but it could be argued they
deserve it :-).
Interface clarity. Universal newlines are only supported for
input files, not for input/output files, as the semantics would
become muddy. Would you write Mac newlines if all reads so far
had encountered Mac newlines? But what if you then later read a
Unix newline?
The newlines attribute is included so that programs that really
care about the newline convention, such as text editors, can
examine what was in a file. They can then save (a copy of) the
file with the same newline convention (or, in case of a file with
mixed newlines, ask the user what to do, or output in platform
convention).
Feedback is explicitly solicited on one item in the reference
implementation: whether or not the universal newlines routines
should grab the global interpreter lock. Currently they do not,
but this could be considered living dangerously, as they may
modify fields in a FileObject. But as these routines are
replacements for fgets() and fread() as well it may be difficult
to decide whether or not the lock is held when the routine is
called. Moreover, the only danger is that if two threads read the
same FileObject at the same time an extraneous newline may be seen
or the newlines attribute may inadvertently be set to mixed. I
would argue that if you read the same FileObject in two threads
simultaneously you are asking for trouble anyway.
Note that no globally accessible pointers are manipulated in the
fgets() or fread() replacement routines, just some integer-valued
flags, so the chances of core dumps are zero (he said:-).
Universal newline support can be disabled during configure because it does
have a small performance penalty, and moreover the implementation has
not been tested on all conceivable platforms yet. It might also be silly
on some platforms (WinCE or Palm devices, for instance). If universal
newline support is not enabled then file objects do not have the newlines
attribute, so testing whether the current Python has it can be done with a
simple:
if hasattr(open, 'newlines'):
print 'We have universal newline support'
Note that this test uses the open() function rather than the file
type so that it won’t fail for versions of Python where the file
type was not available (the file type was added to the built-in
namespace in the same release as the universal newline feature was
added).
Additionally, note that this test fails again on Python versions
>= 2.5, when open() was made a function again and is not synonymous
with the file type anymore.
Reference Implementation
A reference implementation is available in SourceForge patch
#476814: https://bugs.python.org/issue476814
References
None.
Copyright
This document has been placed in the public domain.
| Final | PEP 278 – Universal Newline Support | Standards Track | This PEP discusses a way in which Python can support I/O on files
which have a newline format that is not the native format on the
platform, so that Python on each platform can read and import
files with CR (Macintosh), LF (Unix) or CR LF (Windows) line
endings. |
PEP 279 – The enumerate() built-in function
Author:
Raymond Hettinger <python at rcn.com>
Status:
Final
Type:
Standards Track
Created:
30-Jan-2002
Python-Version:
2.3
Post-History:
Table of Contents
Abstract
Rationale
BDFL Pronouncements
Specification for a new built-in
Copyright
Abstract
This PEP introduces a new built-in function, enumerate() to
simplify a commonly used looping idiom. It provides all iterable
collections with the same advantage that iteritems() affords to
dictionaries – a compact, readable, reliable index notation.
Rationale
Python 2.2 introduced the concept of an iterable interface as
proposed in PEP 234. The iter() factory function was provided
as common calling convention and deep changes were made to use
iterators as a unifying theme throughout Python. The unification
came in the form of establishing a common iterable interface for
mappings, sequences, and file objects.
Generators, as proposed in PEP 255, were introduced as a means
for making it easier to create iterators, especially ones with
complex internal execution or variable states. The availability
of generators makes it possible to improve on the loop counter
ideas in PEP 212. Those ideas provided a clean syntax for
iteration with indices and values, but did not apply to all
iterable objects. Also, that approach did not have the memory
friendly benefit provided by generators which do not evaluate the
entire sequence all at once.
The new proposal is to add a built-in function, enumerate() which
was made possible once iterators and generators became available.
It provides all iterables with the same advantage that iteritems()
affords to dictionaries – a compact, readable, reliable index
notation. Like zip(), it is expected to become a commonly used
looping idiom.
This suggestion is designed to take advantage of the existing
implementation and require little additional effort to
incorporate. It is backwards compatible and requires no new
keywords. The proposal will go into Python 2.3 when generators
become final and are not imported from __future__.
BDFL Pronouncements
The new built-in function is ACCEPTED.
Specification for a new built-in
def enumerate(collection):
'Generates an indexed series: (0,coll[0]), (1,coll[1]) ...'
i = 0
it = iter(collection)
while 1:
yield (i, it.next())
i += 1
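For illustration, typical use of the proposed built-in in a for-loop
(a sketch):
>>> for i, season in enumerate(['Spring', 'Summer', 'Fall', 'Winter']):
...     print i, season
...
0 Spring
1 Summer
2 Fall
3 Winter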
Note A: PEP 212 Loop Counter Iteration discussed several
proposals for achieving indexing. Some of the proposals only work
for lists unlike the above function which works for any generator,
xrange, sequence, or iterable object. Also, those proposals were
presented and evaluated in the world prior to Python 2.2 which did
not include generators. As a result, the non-generator version in
PEP 212 had the disadvantage of consuming memory with a giant list
of tuples. The generator version presented here is fast and
light, works with all iterables, and allows users to abandon the
sequence in mid-stream with no loss of computation effort.
There are other PEPs which touch on related issues: integer
iterators, integer for-loops, and one for modifying the arguments
to range and xrange. The enumerate() proposal does not preclude
the other proposals and it still meets an important need even if
those are adopted – the need to count items in any iterable. The
other proposals give a means of producing an index but not the
corresponding value. This is especially problematic if a sequence
is given which doesn’t support random access such as a file
object, generator, or sequence defined with __getitem__.
Note B: Almost all of the PEP reviewers welcomed the function but
were divided as to whether there should be any built-ins. The
main argument for a separate module was to slow the rate of
language inflation. The main argument for a built-in was that the
function is destined to be part of a core programming style,
applicable to any object with an iterable interface. Just as
zip() solves the problem of looping over multiple sequences, the
enumerate() function solves the loop counter problem.
If only one built-in is allowed, then enumerate() is the most
important general purpose tool, solving the broadest class of
problems while improving program brevity, clarity and reliability.
Note C: Various alternative names were discussed:
iterindexed()
five syllables is a mouthful
index()
nice verb but could be confused with the .index() method
indexed()
widely liked however adjectives should be avoided
indexer()
noun did not read well in a for-loop
count()
direct and explicit but often used in other contexts
itercount()
direct, explicit and hated by more than one person
iteritems()
conflicts with key:value concept for dictionaries
itemize()
confusing because amap.items() != list(itemize(amap))
enum()
pithy; less clear than enumerate; too similar to enum
in other languages where it has a different meaning
All of the names involving ‘count’ had the further disadvantage of
implying that the count would begin from one instead of zero.
All of the names involving ‘index’ clashed with usage in database
languages where indexing implies a sorting operation rather than
linear sequencing.
Note D: This function was originally proposed with optional start
and stop arguments. GvR pointed out that the function call
enumerate(seqn,4,6) had an alternate, plausible interpretation as
a slice that would return the fourth and fifth elements of the
sequence. To avoid the ambiguity, the optional arguments were
dropped even though it meant losing flexibility as a loop counter.
That flexibility was most important for the common case of
counting from one, as in:
for linenum, line in enumerate(source,1): print linenum, line
Comments from GvR: filter and map should die and be subsumed into list
comprehensions, not grow more variants. I’d rather introduce
built-ins that do iterator algebra (e.g. the iterzip that I’ve
often used as an example).
I like the idea of having some way to iterate over a sequence
and its index set in parallel. It’s fine for this to be a
built-in.
I don’t like the name “indexed”; adjectives do not make good
function names. Maybe iterindexed()?
Comments from Ka-Ping Yee: I’m also quite happy with everything you
proposed … and the extra built-ins (really ‘indexed’ in
particular) are things I have wanted for a long time.
Comments from Neil Schemenauer: The new built-ins sound okay. Guido
may be concerned with increasing the number of built-ins too
much. You might be better off selling them as part of a
module. If you use a module then you can add lots of useful
functions (Haskell has lots of them that we could steal).
Comments from Magnus Lie Hetland: I think indexed would be a useful and
natural built-in function. I would certainly use it a lot. I
like indexed() a lot; +1. I’m quite happy to have it make PEP
281 obsolete. Adding a separate module for iterator utilities
seems like a good idea.
Comments from the Community: The response to the enumerate() proposal
has been close to 100% favorable. Almost everyone loves the
idea.
Author response: Prior to these comments, four built-ins were proposed.
After the comments, xmap, xfilter and xzip were withdrawn. The
one that remains is vital for the language and is proposed by
itself. Indexed() is trivially easy to implement and can be
documented in minutes. More importantly, it is useful in
everyday programming which does not otherwise involve explicit
use of generators.
This proposal originally included another function iterzip().
That was subsequently implemented as the izip() function in
the itertools module.
Copyright
This document has been placed in the public domain.
PEP 280 – Optimizing access to globals
Author:
Guido van Rossum <guido at python.org>
Status:
Deferred
Type:
Standards Track
Created:
10-Feb-2002
Python-Version:
2.3
Post-History:
Table of Contents
Deferral
Abstract
Description
Additional Ideas
FAQs
Graphics
Comparison
Copyright
Deferral
While this PEP is a nice idea, no-one has yet emerged to do the work of
hashing out the differences between this PEP, PEP 266 and PEP 267.
Hence, it is being deferred.
Abstract
This PEP describes yet another approach to optimizing access to
module globals, providing an alternative to PEP 266 (Optimizing
Global Variable/Attribute Access by Skip Montanaro) and PEP 267
(Optimized Access to Module Namespaces by Jeremy Hylton).
The expectation is that eventually one approach will be picked and
implemented; possibly multiple approaches will be prototyped
first.
Description
(Note: Jason Orendorff writes: “””I implemented this once, long
ago, for Python 1.5-ish, I believe. I got it to the point where
it was only 15% slower than ordinary Python, then abandoned it.
;) In my implementation, “cells” were real first-class objects,
and “celldict” was a copy-and-hack version of dictionary. I
forget how the rest worked.””” Reference:
https://mail.python.org/pipermail/python-dev/2002-February/019876.html)
Let a cell be a really simple Python object, containing a pointer
to a Python object and a pointer to a cell. Both pointers may be
NULL. A Python implementation could be:
class cell(object):
def __init__(self):
self.objptr = NULL
self.cellptr = NULL
The cellptr attribute is used for chaining cells together for
searching built-ins; this will be explained later.
Let a celldict be a mapping from strings (the names of a module’s
globals) to objects (the values of those globals), implemented
using a dict of cells. A Python implementation could be:
class celldict(object):
def __init__(self):
self.__dict = {} # dict of cells
def getcell(self, key):
c = self.__dict.get(key)
if c is None:
c = cell()
self.__dict[key] = c
return c
def cellkeys(self):
return self.__dict.keys()
def __getitem__(self, key):
c = self.__dict.get(key)
if c is None:
raise KeyError, key
value = c.objptr
if value is NULL:
raise KeyError, key
else:
return value
def __setitem__(self, key, value):
c = self.__dict.get(key)
if c is None:
c = cell()
self.__dict[key] = c
c.objptr = value
def __delitem__(self, key):
c = self.__dict.get(key)
if c is None or c.objptr is NULL:
raise KeyError, key
c.objptr = NULL
def keys(self):
return [k for k, c in self.__dict.iteritems()
if c.objptr is not NULL]
def items(self):
return [(k, c.objptr) for k, c in self.__dict.iteritems()
if c.objptr is not NULL]
def values(self):
return [c.objptr for c in self.__dict.itervalues()
if c.objptr is not NULL]
def clear(self):
for c in self.__dict.values():
c.objptr = NULL
# Etc.
It is possible that a cell exists corresponding to a given key,
but the cell’s objptr is NULL; let’s call such a cell empty. When
the celldict is used as a mapping, it is as if empty cells don’t
exist. However, once added, a cell is never deleted from a
celldict, and it is possible to get at empty cells using the
getcell() method.
The celldict implementation never uses the cellptr attribute of
cells.
We change the module implementation to use a celldict for its
__dict__. The module’s getattr, setattr and delattr operations
now map to getitem, setitem and delitem on the celldict. The type
of <module>.__dict__ and globals() is probably the only backwards
incompatibility.
When a module is initialized, its __builtins__ is initialized from
the __builtin__ module’s __dict__, which is itself a celldict.
For each cell in __builtins__, the new module’s __dict__ adds a
cell with a NULL objptr, whose cellptr points to the corresponding
cell of __builtins__. Python pseudo-code (ignoring rexec):
import __builtin__
class module(object):
def __init__(self):
self.__dict__ = d = celldict()
d['__builtins__'] = bd = __builtin__.__dict__
for k in bd.cellkeys():
c = self.__dict__.getcell(k)
c.cellptr = bd.getcell(k)
def __getattr__(self, k):
try:
return self.__dict__[k]
except KeyError:
raise AttributeError, k
def __setattr__(self, k, v):
self.__dict__[k] = v
def __delattr__(self, k):
del self.__dict__[k]
The compiler generates LOAD_GLOBAL_CELL <i> (and STORE_GLOBAL_CELL
<i> etc.) opcodes for references to globals, where <i> is a small
index with meaning only within one code object like the const
index in LOAD_CONST. The code object has a new tuple, co_globals,
giving the names of the globals referenced by the code indexed by
<i>. No new analysis is required to be able to do this.
When a function object is created from a code object and a celldict,
the function object creates an array of cell pointers by asking the
celldict for cells corresponding to the names in the code object’s
co_globals. If the celldict doesn’t already have a cell for a
particular name, it creates an empty one. This array of cell
pointers is stored on the function object as func_cells. When a
function object is created from a regular dict instead of a
celldict, func_cells is a NULL pointer.
When the VM executes a LOAD_GLOBAL_CELL <i> instruction, it gets
cell number <i> from func_cells. It then looks in the cell’s
PyObject pointer, and if not NULL, that’s the global value. If it
is NULL, it follows the cell’s cell pointer to the next cell, if it
is not NULL, and looks in the PyObject pointer in that cell. If
that’s also NULL, or if there is no second cell, NameError is
raised. (It could follow the chain of cell pointers until a NULL
cell pointer is found; but I have no use for this.) Similar for
STORE_GLOBAL_CELL <i>, except it doesn’t follow the cell pointer
chain – it always stores in the first cell.
There are fallbacks in the VM for the case where the function’s
globals aren’t a celldict, and hence func_cells is NULL. In that
case, the code object’s co_globals is indexed with <i> to find the
name of the corresponding global and this name is used to index the
function’s globals dict.
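A Python-level sketch of that fallback path (this is not part of the
PEP; f_code, f_globals and f_builtins are the standard frame
attributes, co_globals is the tuple proposed above, and the final
builtins step mirrors ordinary LOAD_GLOBAL behaviour):

def load_global_fallback(frame, i):
    # func_cells is NULL: recover the global's name from the proposed
    # co_globals tuple and fall back to ordinary dict lookups.
    name = frame.f_code.co_globals[i]
    try:
        return frame.f_globals[name]      # module global
    except KeyError:
        return frame.f_builtins[name]     # builtin; KeyError here means NameError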
Additional Ideas
Never make func_cell a NULL pointer; instead, make up an array
of empty cells, so that LOAD_GLOBAL_CELL can index func_cells
without a NULL check.
Make c.cellptr equal to c when a cell is created, so that
LOAD_GLOBAL_CELL can always dereference c.cellptr without a NULL
check.
With these two additional ideas added, here’s Python pseudo-code
for LOAD_GLOBAL_CELL:
def LOAD_GLOBAL_CELL(self, i):
# self is the frame
c = self.func_cells[i]
obj = c.objptr
if obj is not NULL:
return obj # Existing global
return c.cellptr.objptr # Built-in or NULL
Be more aggressive: put the actual values of builtins into module
dicts, not just pointers to cells containing the actual values.
There are two points to this: (1) Simplify and speed access, which
is the most common operation. (2) Support faithful emulation of
extreme existing corner cases.
WRT #2, the set of builtins in the scheme above is captured at the
time a module dict is first created. Mutations to the set of builtin
names following that don’t get reflected in the module dicts. Example:
consider files main.py and cheater.py:
[main.py]
import cheater
def f():
cheater.cheat()
return pachinko()
print f()
[cheater.py]
def cheat():
import __builtin__
__builtin__.pachinko = lambda: 666
If main.py is run under Python 2.2 (or before), 666 is printed. But
under the proposal, __builtin__.pachinko doesn’t exist at the time
main’s __dict__ is initialized. When the function object for
f is created, main.__dict__ grows a pachinko cell mapping to two
NULLs. When cheat() is called, __builtin__.__dict__ grows a pachinko
cell too, but main.__dict__ doesn’t know – and will never know – about
that. When f’s return stmt references pachinko, it will still find
the double-NULLs in main.__dict__’s pachinko cell, and so raise
NameError.
A similar (in cause) break in compatibility can occur if a module
global foo is del’ed, but a builtin foo was created prior to that
but after the module dict was first created. Then the builtin foo
becomes visible in the module under 2.2 and before, but remains
invisible under the proposal.
Mutating builtins is extremely rare (most programs never mutate the
builtins, and it’s hard to imagine a plausible use for frequent
mutation of the builtins – I’ve never seen or heard of one), so it
doesn’t matter how expensive mutating the builtins becomes. OTOH,
referencing globals and builtins is very common. Combining those
observations suggests a more aggressive caching of builtins in module
globals, speeding access at the expense of making mutations of the
builtins (potentially much) more expensive to keep the caches in
synch.
Much of the scheme above remains the same, and most of the rest is
just a little different. A cell changes to:
class cell(object):
def __init__(self, obj=NULL, builtin=0):
self.objptr = obj
self.builtinflag = builtin
and a celldict maps strings to this version of cells. builtinflag
is true when and only when objptr contains a value obtained from
the builtins; in other words, it’s true when and only when a cell
is acting as a cached value. When builtinflag is false, objptr is
the value of a module global (possibly NULL). celldict changes to:
class celldict(object):
def __init__(self, builtindict=()):
self.basedict = builtindict
self.__dict = d = {}
for k, v in builtindict.items():
d[k] = cell(v, 1)
def __getitem__(self, key):
c = self.__dict.get(key)
if c is None or c.objptr is NULL or c.builtinflag:
raise KeyError, key
return c.objptr
def __setitem__(self, key, value):
c = self.__dict.get(key)
if c is None:
c = cell()
self.__dict[key] = c
c.objptr = value
c.builtinflag = 0
def __delitem__(self, key):
c = self.__dict.get(key)
if c is None or c.objptr is NULL or c.builtinflag:
raise KeyError, key
c.objptr = NULL
# We may have unmasked a builtin. Note that because
# we're checking the builtin dict for that *now*, this
# still works if the builtin first came into existence
# after we were constructed. Note too that del on
# namespace dicts is rare, so the expense of this check
# shouldn't matter.
if key in self.basedict:
c.objptr = self.basedict[key]
assert c.objptr is not NULL # else "in" lied
c.builtinflag = 1
else:
# There is no builtin with the same name.
assert not c.builtinflag
def keys(self):
return [k for k, c in self.__dict.iteritems()
if c.objptr is not NULL and not c.builtinflag]
def items(self):
return [(k, c.objptr) for k, c in self.__dict.iteritems()
if c.objptr is not NULL and not c.builtinflag]
def values(self):
return [c.objptr for c in self.__dict.itervalues()
if c.objptr is not NULL and not c.builtinflag]
def clear(self):
for c in self.__dict.values():
if not c.builtinflag:
c.objptr = NULL
# Etc.
The speed benefit comes from simplifying LOAD_GLOBAL_CELL, which
I expect is executed more frequently than all other namespace
operations combined:
def LOAD_GLOBAL_CELL(self, i):
# self is the frame
c = self.func_cells[i]
return c.objptr # may be NULL (also true before)
That is, accessing builtins and accessing module globals are equally
fast. For module globals, a NULL-pointer test+branch is saved. For
builtins, an additional pointer chase is also saved.
The other part needed to make this fly is expensive, propagating
mutations of builtins into the module dicts that were initialized
from the builtins. This is much like, in 2.2, propagating changes
in new-style base classes to their descendants: the builtins need to
maintain a list of weakrefs to the modules (or module dicts)
initialized from the builtin’s dict. Given a mutation to the builtin
dict (adding a new key, changing the value associated with an
existing key, or deleting a key), traverse the list of module dicts
and make corresponding mutations to them. This is straightforward;
for example, if a key is deleted from builtins, execute
reflect_bltin_del in each module:
def reflect_bltin_del(self, key):
c = self.__dict.get(key)
assert c is not None # else we were already out of synch
if c.builtinflag:
# Put us back in synch.
c.objptr = NULL
c.builtinflag = 0
# Else we're shadowing the builtin, so don't care that
# the builtin went away.
Note that c.builtinflag protects from us erroneously deleting a
module global of the same name. Adding a new (key, value) builtin
pair is similar:
def reflect_bltin_new(self, key, value):
c = self.__dict.get(key)
if c is None:
# Never heard of it before: cache the builtin value.
self.__dict[key] = cell(value, 1)
elif c.objptr is NULL:
# This used to exist in the module or the builtins,
# but doesn't anymore; rehabilitate it.
assert not c.builtinflag
c.objptr = value
c.builtinflag = 1
else:
# We're shadowing it already.
assert not c.builtinflag
Changing the value of an existing builtin:
def reflect_bltin_change(self, key, newvalue):
c = self.__dict.get(key)
assert c is not None # else we were already out of synch
if c.builtinflag:
# Put us back in synch.
c.objptr = newvalue
# Else we're shadowing the builtin, so don't care that
# the builtin changed.
FAQs
Q: Will it still be possible to:
a) install new builtins in the __builtin__ namespace and have
them available in all already loaded modules right away ?
b) override builtins (e.g. open()) with my own copies
(e.g. to increase security) in a way that makes these new
copies override the previous ones in all modules ?
A: Yes, this is the whole point of this design. In the original
approach, when LOAD_GLOBAL_CELL finds a NULL in the second
cell, it should go back to see if the __builtins__ dict has
been modified (the pseudo code doesn’t have this yet). Tim’s
“more aggressive” alternative also takes care of this.
Q: How does the new scheme get along with the restricted execution
model?
A: It is intended to support that fully.
Q: What happens when a global is deleted?
A: The module’s celldict would have a cell with a NULL objptr for
that key. This is true in both variations, but the “aggressive”
variation goes on to see whether this unmasks a builtin of the
same name, and if so copies its value (just a pointer-copy of the
ultimate PyObject*) into the cell’s objptr and sets the cell’s
builtinflag to true.
Q: What would the C code for LOAD_GLOBAL_CELL look like?
A: The first version, with the first two bullets under “Additional
ideas” incorporated, could look like this:
case LOAD_GLOBAL_CELL:
cell = func_cells[oparg];
x = cell->objptr;
if (x == NULL) {
x = cell->cellptr->objptr;
if (x == NULL) {
... error recovery ...
break;
}
}
Py_INCREF(x);
PUSH(x);
continue;
We could even write it like this (idea courtesy of Ka-Ping Yee):
case LOAD_GLOBAL_CELL:
cell = func_cells[oparg];
x = cell->cellptr->objptr;
if (x != NULL) {
Py_INCREF(x);
PUSH(x);
continue;
}
... error recovery ...
break;
In modern CPU architectures, this reduces the number of
branches taken for built-ins, which might be a really good
thing, while any decent memory cache should realize that
cell->cellptr is the same as cell for regular globals and hence
this should be very fast in that case too.
For the aggressive variant:
case LOAD_GLOBAL_CELL:
cell = func_cells[oparg];
x = cell->objptr;
if (x != NULL) {
Py_INCREF(x);
PUSH(x);
continue;
}
... error recovery ...
break;
Q: What happens in the module’s top-level code where there is
presumably no func_cells array?
A: We could do some code analysis and create a func_cells array,
or we could use LOAD_NAME which should use PyMapping_GetItem on
the globals dict.
Graphics
Ka-Ping Yee supplied a drawing of the state of things after
“import spam”, where spam.py contains:
import eggs
i = -2
max = 3
def foo(n):
y = abs(i) + max
return eggs.ham(y + n)
The drawing is at http://web.lfw.org/repo/cells.gif; a larger
version is at http://lfw.org/repo/cells-big.gif; the source is at
http://lfw.org/repo/cells.ai.
Comparison
XXX Here, a comparison of the three approaches could be added.
Copyright
This document has been placed in the public domain.
PEP 281 – Loop Counter Iteration with range and xrange
Author:
Magnus Lie Hetland <magnus at hetland.org>
Status:
Rejected
Type:
Standards Track
Created:
11-Feb-2002
Python-Version:
2.3
Post-History:
Table of Contents
Abstract
Pronouncement
Motivation
Specification
Alternatives
Backwards Compatibility
Copyright
Abstract
This PEP describes yet another way of exposing the loop counter in
for-loops. It basically proposes that the functionality of the
function indices() from PEP 212 be included in the existing
functions range() and xrange().
Pronouncement
In commenting on PEP 279’s enumerate() function, this PEP’s author
offered, “I’m quite happy to have it make PEP 281 obsolete.”
Subsequently, PEP 279 was accepted into Python 2.3.
On 17 June 2005, the BDFL concurred with it being obsolete and
hereby rejected the PEP. For the record, he found some of the
examples to be somewhat jarring in appearance:
>>> range(range(5), range(10), range(2))
[5, 7, 9]
Motivation
It is often desirable to loop over the indices of a sequence. PEP
212 describes several ways of doing this, including adding a
built-in function called indices, conceptually defined as:
def indices(sequence):
return range(len(sequence))
On the assumption that adding functionality to an existing built-in
function may be less intrusive than adding a new built-in function,
this PEP proposes adding this functionality to the existing
functions range() and xrange().
Specification
It is proposed that all three arguments to the built-in functions
range() and xrange() are allowed to be objects with a length
(i.e. objects implementing the __len__ method). If an argument
cannot be interpreted as an integer (i.e. it has no __int__
method), its length will be used instead.
Examples:
>>> range(range(10))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> range(range(5), range(10))
[5, 6, 7, 8, 9]
>>> range(range(5), range(10), range(2))
[5, 7, 9]
>>> list(xrange(range(10)))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> list(xrange(xrange(10)))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
# Number the lines of a file:
lines = file.readlines()
for num in range(lines):
print num, lines[num]
Alternatives
A natural alternative to the above specification is allowing
xrange() to access its arguments in a lazy manner. Thus, instead
of using their length explicitly, xrange can return one index for
each element of the stop argument until the end is reached. A
similar lazy treatment makes little sense for the start and step
arguments since their length must be calculated before iteration
can begin. (Actually, the length of the step argument isn’t needed
until the second element is returned.)
A pseudo-implementation (using only the stop argument, and assuming
that it is iterable) is:
def xrange(stop):
i = 0
for x in stop:
yield i
i += 1
Testing whether to use int() or lazy iteration could be done by
checking for an __iter__ attribute. (This example assumes the
presence of generators, but could easily have been implemented as a
plain iterator object.)
It may be questionable whether this feature is truly useful, since
one would not be able to access the elements of the iterable object
inside the for loop through indexing.
Example:
# Printing the numbers of the lines of a file:
for num in range(file):
print num # The line itself is not accessible
A more controversial alternative (to deal with this) would be to
let range() behave like the function irange() of PEP 212 when
supplied with a sequence.
Example:
>>> range(5)
[0, 1, 2, 3, 4]
>>> range('abcde')
[(0, 'a'), (1, 'b'), (2, 'c'), (3, 'd'), (4, 'e')]
Backwards Compatibility
The proposal could cause backwards incompatibilities if arguments
are used which implement both __int__ and __len__ (or __iter__ in
the case of lazy iteration with xrange). The author does not
believe that this is a significant problem.
Copyright
This document has been placed in the public domain.
PEP 282 – A Logging System
Author:
Vinay Sajip <vinay_sajip at red-dove.com>,
Trent Mick <trentm at activestate.com>
Status:
Final
Type:
Standards Track
Created:
04-Feb-2002
Python-Version:
2.3
Post-History:
Table of Contents
Abstract
Motivation
Influences
Simple Example
Control Flow
Levels
Loggers
Handlers
LogRecords
Formatters
Filters
Configuration
Thread Safety
Module-Level Functions
Implementation
Packaging
References
Copyright
Abstract
This PEP describes a proposed logging package for Python’s
standard library.
Basically the system involves the user creating one or more logger
objects on which methods are called to log debugging notes,
general information, warnings, errors etc. Different logging
‘levels’ can be used to distinguish important messages from less
important ones.
A registry of named singleton logger objects is maintained so that
different logical logging streams (or ‘channels’) exist
(say, one for ‘zope.zodb’ stuff and another for
‘mywebsite’-specific stuff), and so that
one does not have to pass logger object references around.
The system is configurable at runtime. This configuration
mechanism allows one to tune the level and type of logging done
while not touching the application itself.
Motivation
If a single logging mechanism is enshrined in the standard
library, 1) logging is more likely to be done ‘well’, and 2)
multiple libraries will be able to be integrated into larger
applications which can be logged reasonably coherently.
Influences
This proposal was put together after having studied the
following logging packages:
java.util.logging in JDK 1.4 (a.k.a. JSR047) [1]
log4j [2]
the Syslog package from the Protomatter project [3]
MAL’s mx.Log package [4]
Simple Example
This shows a very simple example of how the logging package can be
used to generate simple logging output on stderr.
--------- mymodule.py -------------------------------
import logging
log = logging.getLogger("MyModule")
def doIt():
log.debug("Doin' stuff...")
#do stuff...
raise TypeError, "Bogus type error for testing"
-----------------------------------------------------
--------- myapp.py ----------------------------------
import mymodule, logging
logging.basicConfig()
log = logging.getLogger("MyApp")
log.info("Starting my app")
try:
mymodule.doIt()
except Exception, e:
log.exception("There was a problem.")
log.info("Ending my app")
-----------------------------------------------------
$ python myapp.py
INFO:MyApp: Starting my app
DEBUG:MyModule: Doin' stuff...
ERROR:MyApp: There was a problem.
Traceback (most recent call last):
File "myapp.py", line 9, in ?
mymodule.doIt()
File "mymodule.py", line 7, in doIt
raise TypeError, "Bogus type error for testing"
TypeError: Bogus type error for testing
INFO:MyApp: Ending my app
The above example shows the default output format. All
aspects of the output format should be configurable, so that
you could have output formatted like this:
2002-04-19 07:56:58,174 MyModule DEBUG - Doin' stuff...
or just
Doin' stuff...
Control Flow
Applications make logging calls on Logger objects. Loggers are
organized in a hierarchical namespace and child Loggers inherit
some logging properties from their parents in the namespace.
Logger names fit into a “dotted name” namespace, with dots
(periods) indicating sub-namespaces. The namespace of logger
objects therefore corresponds to a single tree data structure.
"" is the root of the namespace
"Zope" would be a child node of the root
"Zope.ZODB" would be a child node of "Zope"
These Logger objects create LogRecord objects which are passed
to Handler objects for output. Both Loggers and Handlers may
use logging levels and (optionally) Filters to decide if they
are interested in a particular LogRecord. When it is necessary to
output a LogRecord externally, a Handler can (optionally) use a
Formatter to localize and format the message before sending it
to an I/O stream.
Each Logger keeps track of a set of output Handlers. By default
all Loggers also send their output to all Handlers of their
ancestor Loggers. Loggers may, however, also be configured to
ignore Handlers higher up the tree.
The APIs are structured so that calls on the Logger APIs can be
cheap when logging is disabled. If logging is disabled for a
given log level, then the Logger can make a cheap comparison test
and return. If logging is enabled for a given log level, the
Logger is still careful to minimize costs before passing the
LogRecord into the Handlers. In particular, localization and
formatting (which are relatively expensive) are deferred until the
Handler requests them.
The overall Logger hierarchy can also have a level associated with
it, which takes precedence over the levels of individual Loggers.
This is done through a module-level function:
def disable(lvl):
"""
Do not generate any LogRecords for requests with a severity less
than 'lvl'.
"""
...
Levels
The logging levels, in increasing order of importance, are:
DEBUG
INFO
WARN
ERROR
CRITICAL
The term CRITICAL is used in preference to FATAL, which is used by
log4j. The levels are conceptually the same - that of a serious,
or very serious, error. However, FATAL implies death, which in
Python implies a raised and uncaught exception, traceback, and
exit. Since the logging module does not enforce such an outcome
from a FATAL-level log entry, it makes sense to use CRITICAL in
preference to FATAL.
These are just integer constants, to allow simple comparison of
importance. Experience has shown that too many levels can be
confusing, as they lead to subjective interpretation of which
level should be applied to any particular log request.
Although the above levels are strongly recommended, the logging
system should not be prescriptive. Users may define their own
levels, as well as the textual representation of any levels. User
defined levels must, however, obey the constraints that they are
all positive integers and that they increase in order of
increasing severity.
User-defined logging levels are supported through two module-level
functions:
def getLevelName(lvl):
"""Return the text for level 'lvl'."""
...
def addLevelName(lvl, lvlName):
"""
Add the level 'lvl' with associated text 'levelName', or
set the textual representation of existing level 'lvl' to be
'lvlName'."""
...
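A brief sketch of these functions in use, as they appear in the
logging package that eventually shipped (the NOTICE level number and
name are purely illustrative):

import logging

NOTICE = 25                              # sits between INFO (20) and WARN (30)
logging.addLevelName(NOTICE, "NOTICE")

logging.basicConfig()
log = logging.getLogger("demo")
log.setLevel(NOTICE)
log.log(NOTICE, "something noteworthy happened")
print(logging.getLevelName(NOTICE))      # -> 'NOTICE'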
Loggers
Each Logger object keeps track of a log level (or threshold) that
it is interested in, and discards log requests below that level.
A Manager class instance maintains the hierarchical namespace of
named Logger objects. Generations are denoted with dot-separated
names: Logger “foo” is the parent of Loggers “foo.bar” and
“foo.baz”.
The Manager class instance is a singleton and is not directly
exposed to users, who interact with it using various module-level
functions.
The general logging method is:
class Logger:
def log(self, lvl, msg, *args, **kwargs):
"""Log 'str(msg) % args' at logging level 'lvl'."""
...
However, convenience functions are defined for each logging level:
class Logger:
def debug(self, msg, *args, **kwargs): ...
def info(self, msg, *args, **kwargs): ...
def warn(self, msg, *args, **kwargs): ...
def error(self, msg, *args, **kwargs): ...
def critical(self, msg, *args, **kwargs): ...
Only one keyword argument is recognized at present - “exc_info”.
If true, the caller wants exception information to be provided in
the logging output. This mechanism is only needed if exception
information needs to be provided at any logging level. In the
more common case, where exception information needs to be added to
the log only when errors occur, i.e. at the ERROR level, then
another convenience method is provided:
class Logger:
def exception(self, msg, *args): ...
This should only be called in the context of an exception handler,
and is the preferred way of indicating a desire for exception
information in the log. The other convenience methods are
intended to be called with exc_info only in the unusual situation
where you might want to provide exception information in the
context of an INFO message, for example.
The “msg” argument shown above will normally be a format string;
however, it can be any object x for which str(x) returns the
format string. This facilitates, for example, the use of an
object which fetches a locale- specific message for an
internationalized/localized application, perhaps using the
standard gettext module. An outline example:
class Message:
"""Represents a message"""
def __init__(self, id):
"""Initialize with the message ID"""
def __str__(self):
"""Return an appropriate localized message text"""
...
logger.info(Message("abc"), ...)
Gathering and formatting data for a log message may be expensive,
and a waste if the logger was going to discard the message anyway.
To see if a request will be honoured by the logger, the
isEnabledFor() method can be used:
class Logger:
def isEnabledFor(self, lvl):
"""
Return true if requests at level 'lvl' will NOT be
discarded.
"""
...
so instead of this expensive and possibly wasteful DOM to XML
conversion:
...
hamletStr = hamletDom.toxml()
log.info(hamletStr)
...
one can do this:
if log.isEnabledFor(logging.INFO):
hamletStr = hamletDom.toxml()
log.info(hamletStr)
When new loggers are created, they are initialized with a level
which signifies “no level”. A level can be set explicitly using
the setLevel() method:
class Logger:
def setLevel(self, lvl): ...
If a logger’s level is not set, the system consults all its
ancestors, walking up the hierarchy until an explicitly set level
is found. That is regarded as the “effective level” of the
logger, and can be queried via the getEffectiveLevel() method:
def getEffectiveLevel(self): ...
Loggers are never instantiated directly. Instead, a module-level
function is used:
def getLogger(name=None): ...
If no name is specified, the root logger is returned. Otherwise,
if a logger with that name exists, it is returned. If not, a new
logger is initialized and returned. Here, “name” is synonymous
with “channel name”.
Users can specify a custom subclass of Logger to be used by the
system when instantiating new loggers:
def setLoggerClass(klass): ...
The passed class should be a subclass of Logger, and its __init__
method should call Logger.__init__.
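A minimal sketch of such a subclass, using the logging module as it
eventually shipped (the class name and the extra attribute are
illustrative):

import logging

class AuditLogger(logging.Logger):
    # A Logger subclass whose extra bookkeeping is purely illustrative.
    def __init__(self, name, level=logging.NOTSET):
        logging.Logger.__init__(self, name, level)
        self.audit_trail = []

logging.setLoggerClass(AuditLogger)
log = logging.getLogger("billing")       # new loggers are now AuditLogger instances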
Handlers
Handlers are responsible for doing something useful with a given
LogRecord. The following core Handlers will be implemented:
StreamHandler: A handler for writing to a file-like object.
FileHandler: A handler for writing to a single file or set
of rotating files.
SocketHandler: A handler for writing to remote TCP ports.
DatagramHandler: A handler for writing to UDP sockets, for
low-cost logging. Jeff Bauer already had such a system [5].
MemoryHandler: A handler that buffers log records in memory
until the buffer is full or a particular condition occurs
[1].
SMTPHandler: A handler for sending to email addresses via SMTP.
SysLogHandler: A handler for writing to Unix syslog via UDP.
NTEventLogHandler: A handler for writing to event logs on
Windows NT, 2000 and XP.
HTTPHandler: A handler for writing to a Web server with
either GET or POST semantics.
Handlers can also have levels set for them using the
setLevel() method:
def setLevel(self, lvl): ...
The FileHandler can be set up to create a rotating set of log
files. In this case, the file name passed to the constructor is
taken as a “base” file name. Additional file names for the
rotation are created by appending .1, .2, etc. to the base file
name, up to a maximum as specified when rollover is requested.
The setRollover method is used to specify a maximum size for a log
file and a maximum number of backup files in the rotation.
def setRollover(self, maxBytes, backupCount): ...
If maxBytes is specified as zero, no rollover ever occurs and the
log file grows indefinitely. If a non-zero size is specified,
when that size is about to be exceeded, rollover occurs. The
rollover method ensures that the base file name is always the most
recent, .1 is the next most recent, .2 the next most recent after
that, and so on.
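In the logging package that eventually shipped, this rollover
behaviour lives in a dedicated handler class rather than a
setRollover() method; a minimal sketch of that API, with an
illustrative file name and limits:

import logging
import logging.handlers

handler = logging.handlers.RotatingFileHandler(
    "app.log", maxBytes=1000000, backupCount=5)
log = logging.getLogger("demo")
log.addHandler(handler)
log.warning("goes to app.log; app.log.1, app.log.2, ... hold older records")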
There are many additional handlers implemented in the test/example
scripts provided with [6] - for example, XMLHandler and
SOAPHandler.
LogRecords
A LogRecord acts as a receptacle for information about a
logging event. It is little more than a dictionary, though it
does define a getMessage method which merges a message with
optional runtime arguments.
Formatters
A Formatter is responsible for converting a LogRecord to a string
representation. A Handler may call its Formatter before writing a
record. The following core Formatters will be implemented:
Formatter: Provide printf-like formatting, using the % operator.
BufferingFormatter: Provide formatting for multiple
messages, with header and trailer formatting support.
Formatters are associated with Handlers by calling setFormatter()
on a handler:
def setFormatter(self, form): ...
Formatters use the % operator to format the logging message. The
format string should contain %(name)x and the attribute dictionary
of the LogRecord is used to obtain message-specific data. The
following attributes are provided:
%(name)s
Name of the logger (logging channel)
%(levelno)s
Numeric logging level for the message (DEBUG,
INFO, WARN, ERROR, CRITICAL)
%(levelname)s
Text logging level for the message (“DEBUG”, “INFO”,
“WARN”, “ERROR”, “CRITICAL”)
%(pathname)s
Full pathname of the source file where the logging
call was issued (if available)
%(filename)s
Filename portion of pathname
%(module)s
Module from which logging call was made
%(lineno)d
Source line number where the logging call was issued
(if available)
%(created)f
Time when the LogRecord was created (time.time()
return value)
%(asctime)s
Textual time when the LogRecord was created
%(msecs)d
Millisecond portion of the creation time
%(relativeCreated)d
Time in milliseconds when the LogRecord was created,
relative to the time the logging module was loaded
(typically at application startup time)
%(thread)d
Thread ID (if available)
%(message)s
The result of record.getMessage(), computed just as
the record is emitted
If a formatter sees that the format string includes “(asctime)s”,
the creation time is formatted into the LogRecord’s asctime
attribute. To allow flexibility in formatting dates, Formatters
are initialized with a format string for the message as a whole,
and a separate format string for date/time. The date/time format
string should be in time.strftime format. The default value for
the message format is “%(message)s”. The default date/time format
is ISO8601.
The formatter uses a class attribute, “converter”, to indicate how
to convert a time from seconds to a tuple. By default, the value
of “converter” is “time.localtime”. If needed, a different
converter (e.g. “time.gmtime”) can be set on an individual
formatter instance, or the class attribute changed to affect all
formatter instances.
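As a concrete illustration of the formatting knobs described above,
using the Formatter API as it shipped (the format strings are
illustrative):

import logging
import time

formatter = logging.Formatter(
    fmt="%(asctime)s %(name)s %(levelname)s - %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S")
formatter.converter = time.gmtime        # report times in UTC rather than local time

handler = logging.StreamHandler()
handler.setFormatter(formatter)
log = logging.getLogger("demo")
log.addHandler(handler)
log.warning("hello")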
Filters
When level-based filtering is insufficient, a Filter can be called
by a Logger or Handler to decide if a LogRecord should be output.
Loggers and Handlers can have multiple filters installed, and any
one of them can veto a LogRecord being output.
class Filter:
def filter(self, record):
"""
Return a value indicating true if the record is to be
processed. Possibly modify the record, if deemed
appropriate by the filter.
"""
The default behaviour allows a Filter to be initialized with a
Logger name. This will only allow through events which are
generated using the named logger or any of its children. For
example, a filter initialized with “A.B” will allow events logged
by loggers “A.B”, “A.B.C”, “A.B.C.D”, “A.B.D” etc. but not “A.BB”,
“B.A.B” etc. If initialized with the empty string, all events are
passed by the Filter. This filter behaviour is useful when it is
desired to focus attention on one particular area of an
application; the focus can be changed simply by changing a filter
attached to the root logger.
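A short sketch of that default behaviour, using the logger names from
the example above (the handler setup is illustrative):

import logging

handler = logging.StreamHandler()
handler.addFilter(logging.Filter("A.B"))     # only "A.B" and its children pass
logging.getLogger().addHandler(handler)      # attach to the root logger

logging.getLogger("A.B.C").warning("shown")       # passes the filter
logging.getLogger("A.BB").warning("suppressed")   # rejected by the filter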
There are many examples of Filters provided in [6].
Configuration
The main benefit of a logging system like this is that one can
control how much and what logging output one gets from an
application without changing that application’s source code.
Therefore, although configuration can be performed through the
logging API, it must also be possible to change the logging
configuration without changing an application at all. For
long-running programs like Zope, it should be possible to change
the logging configuration while the program is running.
Configuration includes the following:
What logging level a logger or handler should be interested in.
What handlers should be attached to which loggers.
What filters should be attached to which handlers and loggers.
Specifying attributes specific to certain handlers and filters.
In general each application will have its own requirements for how
a user may configure logging output. However, each application
will specify the required configuration to the logging system
through a standard mechanism.
The most simple configuration is that of a single handler, writing
to stderr, attached to the root logger. This configuration is set
up by calling the basicConfig() function once the logging module
has been imported.
def basicConfig(): ...
For more sophisticated configurations, this PEP makes no specific
proposals, for the following reasons:
A specific proposal may be seen as prescriptive.
Without the benefit of wide practical experience in the
Python community, there is no way to know whether any given
configuration approach is a good one. That practice can’t
really come until the logging module is used, and that means
until after Python 2.3 has shipped.
There is a likelihood that different types of applications
may require different configuration approaches, so that no
“one size fits all”.
The reference implementation [6] has a working configuration file
format, implemented for the purpose of proving the concept and
suggesting one possible alternative. It may be that separate
extension modules, not part of the core Python distribution, are
created for logging configuration and log viewing, supplemental
handlers and other features which are not of interest to the bulk
of the community.
Thread Safety
The logging system should support thread-safe operation without
any special action needing to be taken by its users.
Module-Level Functions
To support use of the logging mechanism in short scripts and small
applications, module-level functions debug(), info(), warn(),
error(), critical() and exception() are provided. These work in
the same way as the correspondingly named methods of Logger - in
fact they delegate to the corresponding methods on the root
logger. A further convenience provided by these functions is that
if no configuration has been done, basicConfig() is automatically
called.
At application exit, all handlers can be flushed by calling the function:
def shutdown(): ...
This will flush and close all handlers.
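A minimal sketch of the convenience functions described above, using
the module as it shipped (the messages are illustrative):

import logging

# No configuration has been done yet: the first call below triggers
# basicConfig() automatically, attaching a stderr handler to the root logger.
logging.warning("disk space low")
logging.error("could not open configuration file")

# At application exit, flush and close all handlers.
logging.shutdown()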
Implementation
The reference implementation is Vinay Sajip’s logging module [6].
Packaging
The reference implementation is implemented as a single module.
This offers the simplest interface - all users have to do is
“import logging” and they are in a position to use all the
functionality available.
References
[1] (1, 2)
java.util.logging
http://java.sun.com/j2se/1.4/docs/guide/util/logging/
[2]
log4j: a Java logging package
https://logging.apache.org/log4j/
[3]
Protomatter’s Syslog
http://protomatter.sourceforge.net/1.1.6/index.html
http://protomatter.sourceforge.net/1.1.6/javadoc/com/protomatter/syslog/syslog-whitepaper.html
[4]
MAL mentions his mx.Log logging module:
https://mail.python.org/pipermail/python-dev/2002-February/019767.html
[5]
Jeff Bauer’s Mr. Creosote
http://starship.python.net/crew/jbauer/creosote/
[6] (1, 2, 3, 4)
Vinay Sajip’s logging module.
https://old.red-dove.com/python_logging.html
Copyright
This document has been placed in the public domain.
PEP 283 – Python 2.3 Release Schedule
Author:
Guido van Rossum
Status:
Final
Type:
Informational
Topic:
Release
Created:
27-Feb-2002
Python-Version:
2.3
Post-History:
27-Feb-2002
Table of Contents
Abstract
Release Manager
Completed features for 2.3
Planned features for 2.3
Ongoing tasks
Open issues
Features that did not make it into Python 2.3
Copyright
Abstract
This document describes the development and release schedule for
Python 2.3. The schedule primarily concerns itself with PEP-sized
items. Small features may be added up to and including the first
beta release. Bugs may be fixed until the final release.
There will be at least two alpha releases, two beta releases, and
one release candidate. Alpha and beta releases will be spaced at
least 4 weeks apart (except if an emergency release must be made
to correct a blunder in the previous release; then the blunder
release does not count). Release candidates will be spaced at
least one week apart (excepting again blunder corrections).
alpha 1
31 Dec 2002
alpha 2
19 Feb 2003
beta 1
25 Apr 2003
beta 2
29 Jun 2003
candidate 1
18 Jul 2003
candidate 2
24 Jul 2003
final
29 Jul 2003
Release Manager
Barry Warsaw, Jeremy Hylton, Tim Peters
Completed features for 2.3
This list is not complete. See Doc/whatsnew/whatsnew23.tex in CVS
for more, and of course Misc/NEWS for the full list.
Tk 8.4 update.
The bool type and its constants, True and False (PEP 285).
PyMalloc was greatly enhanced and is enabled by default.
Universal newline support (PEP 278).
PEP 263 Defining Python Source Code Encodings, Lemburg
Implemented (at least phase 1, which is all that’s planned for
2.3).
Extended slice notation for all built-in sequences. The patch
by Michael Hudson is now all checked in.
Speed up list iterations by filling tp_iter and other tweaks.
See https://bugs.python.org/issue560736; also done for xrange and
tuples.
Timeout sockets. https://bugs.python.org/issue555085
Stage B0 of the int/long integration (PEP 237). This means
issuing a FutureWarning about situations where hex or oct
conversions or left shifts returns a different value for an int
than for a long with the same value. The semantics do not
change in Python 2.3; that will happen in Python 2.4.
Nuke SET_LINENO from all code objects (providing a different way
to set debugger breakpoints). This can boost pystone by >5%.
https://bugs.python.org/issue587993, now checked in. (Unfortunately
the pystone boost didn’t happen. What happened?)
Write a pymemcompat.h that people can bundle with their
extensions and then use the 2.3 memory interface with all
Pythons in the range 1.5.2 to 2.3. (Michael Hudson checked in
Misc/pymemcompat.h.)
Add a new concept, “pending deprecation”, with associated
warning PendingDeprecationWarning. This warning is normally
suppressed, but can be enabled by a suitable -W option. Only a
few things use this at this time.
Warn when an extension type’s tp_compare returns anything except
-1, 0 or 1. https://bugs.python.org/issue472523
Warn for assignment to None (in various forms).
PEP 218 Adding a Built-In Set Object Type, Wilson
Alex Martelli contributed a new version of Greg Wilson’s
prototype, and I’ve reworked that quite a bit. It’s in the
standard library now as the module sets, although some details
may still change until the first beta release. (There are no
plans to make this a built-in type, for now.)
PEP 293 Codec error handling callbacks, Dörwald
Fully implemented. Error handling in unicode.encode or
str.decode can now be customized.
PEP 282 A Logging System, Mick
Vinay Sajip’s implementation has been packagized and imported.
(Documentation and unit tests still pending.)
https://bugs.python.org/issue578494
A modified MRO (Method Resolution Order) algorithm. Consensus
is that we should adopt C3. Samuele Pedroni has contributed a
draft implementation in C, see https://bugs.python.org/issue619475
This has now been checked in.
A new command line option parser. Greg Ward’s Optik package
(http://optik.sf.net) has been adopted, converted to a single
module named optparse. See also
http://www.python.org/sigs/getopt-sig/
A standard datetime type. This started as a wiki:
http://www.zope.org/Members/fdrake/DateTimeWiki/FrontPage. A
prototype was coded in nondist/sandbox/datetime/. Tim Peters
has finished the C implementation and checked it in.
PEP 273 Import Modules from Zip Archives, Ahlstrom
Implemented as a part of the PEP 302 implementation work.
PEP 302 New Import Hooks, JvR
Implemented (though the 2.3a1 release contained some bugs that
have been fixed post-release).
A new pickling protocol. See PEP 307.
PEP 305 (CSV File API, by Skip Montanaro et al.) is in; this is
the csv module.
Raymond Hettinger’s itertools module is in.
PEP 311 (Simplified GIL Acquisition for Extensions, by Mark
Hammond) has been included in beta 1.
Two new PyArg_Parse*() format codes, ‘k’ returns an unsigned C
long int that receives the lower LONG_BIT bits of the Python
argument, truncating without range checking. ‘K’ returns an
unsigned C long long int that receives the lower LONG_LONG_BIT
bits, truncating without range checking. (SF 595026; Thomas
Heller did this work.)
A new version of IDLE was imported from the IDLEfork project
(http://idlefork.sf.net). The code now lives in the idlelib
package in the standard library and the idle script is installed
by setup.py.
Planned features for 2.3
Too late for anything more to get done here.
Ongoing tasks
The following are ongoing TO-DO items which we should attempt to
work on without hoping for completion by any particular date.
Documentation: complete the distribution and installation
manuals.
Documentation: complete the documentation for new-style
classes.
Look over the Demos/ directory and update where required (Andrew
Kuchling has done a lot of this)
New tests.
Fix doc bugs on SF.
Remove use of deprecated features in the core.
Document deprecated features appropriately.
Mark deprecated C APIs with Py_DEPRECATED.
Deprecate modules which are unmaintained, or perhaps make a new
category for modules ‘Unmaintained’
In general, lots of cleanup so it is easier to move forward.
Open issues
There are some issues that may need more work and/or thought
before the final release (and preferably before the first beta
release): No issues remaining.
Features that did not make it into Python 2.3
The import lock could use some redesign. (SF 683658.)
Set API issues; is the sets module perfect?
I expect it’s good enough to stop polishing it until we’ve had
more widespread user experience.
A nicer API to open text files, replacing the ugly (in some
people’s eyes) “U” mode flag. There’s a proposal out there to
have a new built-in type textfile(filename, mode, encoding).
(Shouldn’t it have a bufsize argument too?)
Ditto.
New widgets for Tkinter???
Has anyone gotten the time for this? Are there any new
widgets in Tk 8.4? Note that we’ve got better Tix support
already (though not on Windows yet).
Fredrik Lundh’s basetime proposal:
http://effbot.org/ideas/time-type.htm
I believe this is dead now.
PEP 304 (Controlling Generation of Bytecode Files by Montanaro)
seems to have lost steam.
For a class defined inside another class, the __name__ should be
"outer.inner", and pickling should work. (SF 633930. I’m no
longer certain this is easy or even right.)
reST is going to be used a lot in Zope3. Maybe it could become
a standard library module? (Since reST’s author thinks it’s too
unstable, I’m inclined not to do this.)
Decide on a clearer deprecation policy (especially for modules)
and act on it. For a start, see this message from Neal Norwitz:
https://mail.python.org/pipermail/python-dev/2002-April/023165.html
There seems insufficient interest in moving this further in an
organized fashion, and it’s not particularly important.
Provide alternatives for common uses of the types module.
Skip Montanaro has posted a proto-PEP for this idea:
https://mail.python.org/pipermail/python-dev/2002-May/024346.html
There hasn’t been any progress on this, AFAICT.
Use pending deprecation for the types and string modules. This
requires providing alternatives for the parts that aren’t
covered yet (e.g. string.whitespace and types.TracebackType).
It seems we can’t get consensus on this.
Deprecate the buffer object.
https://mail.python.org/pipermail/python-dev/2002-July/026388.html
https://mail.python.org/pipermail/python-dev/2002-July/026408.html
It seems that this is never going to be resolved.
PEP 269 Pgen Module for Python, Riehl
(Some necessary changes are in; the pgen module itself needs to
mature more.)
Add support for the long-awaited Python catalog. Kapil
Thangavelu has a Zope-based implementation that he demoed at
OSCON 2002. Now all we need is a place to host it and a person
to champion it. (Some changes to distutils to support this are
in, at least.)
PEP 266 Optimizing Global Variable/Attribute Access, Montanaro
PEP 267 Optimized Access to Module Namespaces, Hylton
PEP 280 Optimizing access to globals, van Rossum
These are basically three friendly competing proposals. Jeremy
has made a little progress with a new compiler, but it’s going
slow and the compiler is only the first step. Maybe we’ll be
able to refactor the compiler in this release. I’m tempted to
say we won’t hold our breath. In the meantime, Oren Tirosh has
a much simpler idea that may give a serious boost to the
performance of accessing globals and built-ins, by optimizing
and inlining the dict access: http://tothink.com/python/fastnames/
Lazily tracking tuples?
https://mail.python.org/pipermail/python-dev/2002-May/023926.html
https://bugs.python.org/issue558745
Not much enthusiasm I believe.
PEP 286 Enhanced Argument Tuples, von Loewis
I haven’t had the time to review this thoroughly. It seems a
deep optimization hack (also makes better correctness guarantees
though).
Make ‘as’ a keyword. It has been a pseudo-keyword long enough.
Too much effort to bother.
Copyright
This document has been placed in the public domain.
PEP 284 – Integer for-loops
Author:
David Eppstein <eppstein at ics.uci.edu>,
Gregory Ewing <greg.ewing at canterbury.ac.nz>
Status:
Rejected
Type:
Standards Track
Created:
01-Mar-2002
Python-Version:
2.3
Post-History:
Table of Contents
Abstract
Pronouncement
Rationale
Specification
Issues
Implementation
References
Copyright
Abstract
This PEP proposes to simplify iteration over intervals of
integers, by extending the range of expressions allowed after a
“for” keyword to allow three-way comparisons such as
for lower <= var < upper:
in place of the current
for item in list:
syntax. The resulting loop or list iteration will loop over all
values of var that make the comparison true, starting from the
left endpoint of the given interval.
Pronouncement
This PEP is rejected. There were a number of fixable issues with
the proposal (see the fixups listed in Raymond Hettinger’s
python-dev post on 18 June 2005 [1]). However, even with the fixups the
proposal did not garner support. Specifically, Guido did not buy
the premise that the range() format needed fixing, “The whole point
(15 years ago) of range() was to avoid needing syntax to specify a
loop over numbers. I think it’s worked out well and there’s nothing
that needs to be fixed (except range() needs to become an iterator,
which it will in Python 3.0).”
Rationale
One of the most common uses of for-loops in Python is to iterate
over an interval of integers. Python provides functions range()
and xrange() to generate lists and iterators for such intervals,
which work best for the most frequent case: half-open intervals
increasing from zero. However, the range() syntax is more awkward
for open or closed intervals, and lacks symmetry when reversing
the order of iteration. In addition, the call to an unfamiliar
function makes it difficult for newcomers to Python to understand
code that uses range() or xrange().
The perceived lack of a natural, intuitive integer iteration
syntax has led to heated debate on python-list, and spawned at
least four PEPs before this one. PEP 204 (rejected) proposed
to re-use Python’s slice syntax for integer ranges, leading to a
terser syntax but not solving the readability problem of
multi-argument range(). PEP 212 (deferred) proposed several
syntaxes for directly converting a list to a sequence of integer
indices, in place of the current idiom
range(len(list))
for such conversion, and PEP 281 proposes to simplify the same
idiom by allowing it to be written as
range(list).
PEP 276 proposes to allow automatic conversion of integers to
iterators, simplifying the most common half-open case but not
addressing the complexities of other types of interval.
Additional alternatives have been discussed on python-list.
The solution described here is to allow a three-way comparison
after a “for” keyword, both in the context of a for-loop and of a
list comprehension:
for lower <= var < upper:
This would cause iteration over an interval of consecutive
integers, beginning at the left bound in the comparison and ending
at the right bound. The exact comparison operations used would
determine whether the interval is open or closed at either end and
whether the integers are considered in ascending or descending
order.
This syntax closely matches standard mathematical notation, so is
likely to be more familiar to Python novices than the current
range() syntax. Open and closed interval endpoints are equally
easy to express, and the reversal of an integer interval can be
formed simply by swapping the two endpoints and reversing the
comparisons. In addition, the semantics of such a loop would
closely resemble one way of interpreting the existing Python
for-loops:
for item in list
iterates over exactly those values of item that cause the
expression
item in list
to be true. Similarly, the new format
for lower <= var < upper:
would iterate over exactly those integer values of var that cause
the expression
lower <= var < upper
to be true.
Specification
We propose to extend the syntax of a for statement, currently
for_stmt: "for" target_list "in" expression_list ":" suite
["else" ":" suite]
as described below:
for_stmt: "for" for_test ":" suite ["else" ":" suite]
for_test: target_list "in" expression_list |
or_expr less_comp or_expr less_comp or_expr |
or_expr greater_comp or_expr greater_comp or_expr
less_comp: "<" | "<="
greater_comp: ">" | ">="
Similarly, we propose to extend the syntax of list comprehensions,
currently
list_for: "for" expression_list "in" testlist [list_iter]
by replacing it with:
list_for: "for" for_test [list_iter]
In all cases the expression formed by for_test would be subject to
the same precedence rules as comparisons in expressions. The two
comparison operators in a for_test must both be of similar types
(both from less_comp or both from greater_comp), unlike chained
comparisons in expressions, which have no such restriction.
We refer to the two or_expr’s occurring on the left and right
sides of the for-loop syntax as the bounds of the loop, and the
middle or_expr as the variable of the loop. When a for-loop using
the new syntax is executed, the expressions for both bounds will
be evaluated, and an iterator object created that iterates through
all integers between the two bounds according to the comparison
operations used. The iterator will begin with an integer equal or
near to the left bound, and then step through the remaining
integers with a step size of +1 or -1 if the comparison operation
is in the set described by less_comp or greater_comp respectively.
The execution will then proceed as if the expression had been
for variable in iterator
where “variable” refers to the variable of the loop and “iterator”
refers to the iterator created for the given integer interval.
The values taken by the loop variable in an integer for-loop may
be either plain integers or long integers, according to the
magnitude of the bounds. Both bounds of an integer for-loop must
evaluate to a real numeric type (integer, long, or float). Any
other value will cause the for-loop statement to raise a TypeError
exception.
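To make the intended semantics concrete, here is a rough sketch
(purely illustrative, not part of the proposal; the helper name
_interval_iter is invented) of the kind of iterator the ascending
forms could be compiled into, assuming integer bounds:

def _interval_iter(left, right, include_left, include_right):
    # Step through the integers between the bounds in ascending
    # order, honoring open/closed endpoints.
    if not include_left:
        left = left + 1
    if include_right:
        right = right + 1
    while left < right:
        yield left
        left = left + 1

# "for 1 <= x < 5:" would then loop over 1, 2, 3, 4, as if written:
for x in _interval_iter(1, 5, True, False):
    print x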
Issues
The following issues were raised in discussion of this and related
proposals on the Python list.
Should the right bound be evaluated once, or every time through
the loop? Clearly, it only makes sense to evaluate the left
bound once. For reasons of consistency and efficiency, we have
chosen the same convention for the right bound.
Although the new syntax considerably simplifies integer
for-loops, list comprehensions using the new syntax are not as
simple. We feel that this is appropriate since for-loops are
more frequent than comprehensions.
The proposal does not allow access to integer iterator objects
such as would be created by xrange. True, but we see this as a
shortcoming in the general list-comprehension syntax, beyond the
scope of this proposal. In addition, xrange() will still be
available.
The proposal does not allow increments other than 1 and -1.
More general arithmetic progressions would need to be created by
range() or xrange(), or by a list comprehension syntax such as
[2*x for 0 <= x <= 100]
The position of the loop variable in the middle of a three-way
comparison is not as apparent as the variable in the present
for item in list
syntax, leading to a possible loss of readability. We feel that
this loss is outweighed by the increase in readability from a
natural integer iteration syntax.
To some extent, this PEP addresses the same issues as PEP 276.
We feel that the two PEPs are not in conflict since PEP 276
is primarily concerned with half-open ranges starting in 0
(the easy case of range()) while this PEP is primarily concerned
with simplifying all other cases. However, if this PEP is
approved, its new simpler syntax for integer loops could to some
extent reduce the motivation for PEP 276.
It is not clear whether it makes sense to allow floating point
bounds for an integer loop: if a float represents an inexact
value, how can it be used to determine an exact sequence of
integers? On the other hand, disallowing float bounds would
make it difficult to use floor() and ceiling() in integer
for-loops, as it is difficult to use them now with range(). We
have erred on the side of flexibility, but this may lead to some
implementation difficulties in determining the smallest and
largest integer values that would cause a given comparison to be
true.
Should types other than int, long, and float be allowed as
bounds? Another choice would be to convert all bounds to
integers by int(), and allow as bounds anything that can be so
converted instead of just floats. However, this would change
the semantics: 0.3 <= x is not the same as int(0.3) <= x, and it
would be confusing for a loop with 0.3 as lower bound to start
at zero. Also, in general int(f) can be very far from f.
Implementation
An implementation is not available at this time. Implementation
is not expected to pose any great difficulties: the new syntax
could, if necessary, be recognized by parsing a general expression
after each “for” keyword and testing whether the top level
operation of the expression is “in” or a three-way comparison.
The Python compiler would convert any instance of the new syntax
into a loop over the items in a special iterator object.
References
[1]
Raymond Hettinger, Propose updating PEP 284 – Integer for-loops
https://mail.python.org/pipermail/python-dev/2005-June/054316.html
Copyright
This document has been placed in the public domain.
| Rejected | PEP 284 – Integer for-loops | Standards Track | This PEP proposes to simplify iteration over intervals of
integers, by extending the range of expressions allowed after a
“for” keyword to allow three-way comparisons such as |
PEP 285 – Adding a bool type
Author:
Guido van Rossum <guido at python.org>
Status:
Final
Type:
Standards Track
Created:
08-Mar-2002
Python-Version:
2.3
Post-History:
08-Mar-2002, 30-Mar-2002, 03-Apr-2002
Table of Contents
Abstract
Review
Rationale
Specification
C API
Clarification
Compatibility
Resolved Issues
Implementation
Copyright
Abstract
This PEP proposes the introduction of a new built-in type, bool,
with two constants, False and True. The bool type would be a
straightforward subtype (in C) of the int type, and the values
False and True would behave like 0 and 1 in most respects (for
example, False==0 and True==1 would be true) except repr() and
str(). All built-in operations that conceptually return a Boolean
result will be changed to return False or True instead of 0 or 1;
for example, comparisons, the “not” operator, and predicates like
isinstance().
Review
I’ve collected enough feedback to last me a lifetime, so I declare
the review period officially OVER. I had Chinese food today; my
fortune cookie said “Strong and bitter words indicate a weak
cause.” It reminded me of some of the posts against this
PEP… :-)
Anyway, here are my BDFL pronouncements. (Executive summary: I’m
not changing a thing; all variants are rejected.)
Should this PEP be accepted?
=> Yes.
There have been many arguments against the PEP. Many of them
were based on misunderstandings. I’ve tried to clarify some of
the most common misunderstandings below in the main text of the
PEP. The only issue that weighs at all for me is the tendency
of newbies to write “if x == True” where “if x” would suffice.
More about that below too. I think this is not a sufficient
reason to reject the PEP.
Should str(True) return “True” or “1”? “1” might reduce
backwards compatibility problems, but looks strange.
(repr(True) would always return “True”.)
=> “True”.
Almost all reviewers agree with this.
Should the constants be called ‘True’ and ‘False’ (similar to
None) or ‘true’ and ‘false’ (as in C++, Java and C99)?
=> True and False.
Most reviewers agree that consistency within Python is more
important than consistency with other languages.
Should we strive to eliminate non-Boolean operations on bools
in the future, through suitable warnings, so that for example
True+1 would eventually (in Python 3000) be illegal?
=> No.
There’s a small but vocal minority that would prefer to see
“textbook” bools that don’t support arithmetic operations at
all, but most reviewers agree with me that bools should always
allow arithmetic operations.
Should operator.truth(x) return an int or a bool?
=> bool.
Tim Peters believes it should return an int, but almost all
other reviewers agree that it should return a bool. My
rationale: operator.truth() exists to force a Boolean context
on its argument (it calls the C API PyObject_IsTrue()).
Whether the outcome is reported as int or bool is secondary; if
bool exists there’s no reason not to use it. (Under the PEP,
operator.truth() now becomes an alias for bool(); that’s fine.)
Should bool inherit from int?
=> Yes.
In an ideal world, bool might be better implemented as a
separate integer type that knows how to perform mixed-mode
arithmetic. However, inheriting bool from int eases the
implementation enormously (in part since all C code that calls
PyInt_Check() will continue to work – this returns true for
subclasses of int). Also, I believe this is right in terms of
substitutability: code that requires an int can be fed a bool
and it will behave the same as 0 or 1. Code that requires a
bool may not work when it is given an int; for example, 3 & 4
is 0, but both 3 and 4 are true when considered as truth
values.
Should the name ‘bool’ be changed?
=> No.
Some reviewers have argued for boolean instead of bool, because
this would be easier to understand (novices may have heard of
Boolean algebra but may not make the connection with bool) or
because they hate abbreviations. My take: Python uses
abbreviations judiciously (like ‘def’, ‘int’, ‘dict’) and I
don’t think these are a burden to understanding. To a newbie,
it doesn’t matter whether it’s called a waffle or a bool; it’s
a new word, and they learn quickly what it means.
One reviewer has argued to make the name ‘truth’. I find this
an unattractive name, and would actually prefer to reserve this
term (in documentation) for the more abstract concept of truth
values that already exists in Python. For example: “when a
container is interpreted as a truth value, an empty container
is considered false and a non-empty one is considered true.”
Should we strive to require that Boolean operations (like “if”,
“and”, “not”) have a bool as an argument in the future, so that
for example “if []:” would become illegal and would have to be
written as “if bool([]):” ???
=> No!!!
Some people believe that this is how a language with a textbook
Boolean type should behave. Because it was brought up, others
have worried that I might agree with this position. Let me
make my position on this quite clear. This is not part of the
PEP’s motivation and I don’t intend to make this change. (See
also the section “Clarification” below.)
Rationale
Most languages eventually grow a Boolean type; even C99 (the new
and improved C standard, not yet widely adopted) has one.
Many programmers apparently feel the need for a Boolean type; most
Python documentation contains a bit of an apology for the absence
of a Boolean type. I’ve seen lots of modules that defined
constants “False=0” and “True=1” (or similar) at the top and used
those. The problem with this is that everybody does it
differently. For example, should you use “FALSE”, “false”,
“False”, “F” or even “f”? And should false be the value zero or
None, or perhaps a truth value of a different type that will print
as “true” or “false”? Adding a standard bool type to the language
resolves those issues.
Some external libraries (like databases and RPC packages) need to
be able to distinguish between Boolean and integral values, and
while it’s usually possible to craft a solution, it would be
easier if the language offered a standard Boolean type. This also
applies to Jython: some Java classes have separately overloaded
methods or constructors for int and boolean arguments. The bool
type can be used to select the boolean variant. (The same is
apparently the case for some COM interfaces.)
The standard bool type can also serve as a way to force a value to
be interpreted as a Boolean, which can be used to normalize
Boolean values. When a Boolean value needs to be normalized to
one of two values, bool(x) is much clearer than “not not x” and
much more concise than
if x:
return 1
else:
return 0
Here are some arguments derived from teaching Python. When
showing people comparison operators etc. in the interactive shell,
I think this is a bit ugly:
>>> a = 13
>>> b = 12
>>> a > b
1
>>>
If this was:
>>> a > b
True
>>>
it would require a millisecond less thinking each time a 0 or 1
was printed.
There’s also the issue (which I’ve seen baffling even experienced
Pythonistas who had been away from the language for a while) that
if you see:
>>> cmp(a, b)
1
>>> cmp(a, a)
0
>>>
you might be tempted to believe that cmp() also returned a truth
value, whereas in reality it can return three different values
(-1, 0, 1). If ints were not (normally) used to represent
Booleans results, this would stand out much more clearly as
something completely different.
Specification
The following Python code specifies most of the properties of the
new type:
class bool(int):
def __new__(cls, val=0):
# This constructor always returns an existing instance
if val:
return True
else:
return False
def __repr__(self):
if self:
return "True"
else:
return "False"
__str__ = __repr__
def __and__(self, other):
if isinstance(other, bool):
return bool(int(self) & int(other))
else:
return int.__and__(self, other)
__rand__ = __and__
def __or__(self, other):
if isinstance(other, bool):
return bool(int(self) | int(other))
else:
return int.__or__(self, other)
__ror__ = __or__
def __xor__(self, other):
if isinstance(other, bool):
return bool(int(self) ^ int(other))
else:
return int.__xor__(self, other)
__rxor__ = __xor__
# Bootstrap truth values through sheer willpower
False = int.__new__(bool, 0)
True = int.__new__(bool, 1)
The values False and True will be singletons, like None. Because
the type has two values, perhaps these should be called
“doubletons”? The real implementation will not allow other
instances of bool to be created.
True and False will properly round-trip through pickling and
marshalling; for example pickle.loads(pickle.dumps(True)) will
return True, and so will marshal.loads(marshal.dumps(True)).
All built-in operations that are defined to return a Boolean
result will be changed to return False or True instead of 0 or 1.
In particular, this affects comparisons (<, <=, ==, !=,
>, >=, is, is not, in, not in), the unary operator ‘not’, the built-in
functions callable(), hasattr(), isinstance() and issubclass(),
the dict method has_key(), the string and unicode methods
endswith(), isalnum(), isalpha(), isdigit(), islower(), isspace(),
istitle(), isupper(), and startswith(), the unicode methods
isdecimal() and isnumeric(), and the ‘closed’ attribute of file
objects. The predicates in the operator module are also changed
to return a bool, including operator.truth().
Because bool inherits from int, True+1 is valid and equals 2, and
so on. This is important for backwards compatibility: because
comparisons and so on currently return integer values, there’s no
way of telling what uses existing applications make of these
values.
It is expected that over time, the standard library will be
updated to use False and True when appropriate (but not to require
a bool argument type where previously an int was allowed). This
change should not pose additional problems and is not specified in
detail by this PEP.
C API
The header file “boolobject.h” defines the C API for the bool
type. It is included by “Python.h” so there is no need to include
it directly.
The existing names Py_False and Py_True reference the unique bool
objects False and True (previously these referenced static int
objects with values 0 and 1, which were not unique amongst int
values).
A new API, PyObject *PyBool_FromLong(long), takes a C long int
argument and returns a new reference to either Py_False (when the
argument is zero) or Py_True (when it is nonzero).
To check whether an object is a bool, the macro PyBool_Check() can
be used.
The type of bool instances is PyBoolObject *.
The bool type object is available as PyBool_Type.
Clarification
This PEP does not change the fact that almost all object types
can be used as truth values. For example, when used in an if
statement, an empty list is false and a non-empty one is true;
this does not change and there is no plan to ever change this.
The only thing that changes is the preferred values to represent
truth values when returned or assigned explicitly. Previously,
these preferred truth values were 0 and 1; the PEP changes the
preferred values to False and True, and changes built-in
operations to return these preferred values.
Compatibility
Because of backwards compatibility, the bool type lacks many
properties that some would like to see. For example, arithmetic
operations with one or two bool arguments is allowed, treating
False as 0 and True as 1. Also, a bool may be used as a sequence
index.
I don’t see this as a problem, and I don’t want to evolve the
language in this direction either. I don’t believe that a
stricter interpretation of “Booleanness” makes the language any
clearer.
Another consequence of the compatibility requirement is that the
expression “True and 6” has the value 6, and similarly the
expression “False or None” has the value None. The “and” and “or”
operators are usefully defined to return the first argument that
determines the outcome, and this won’t change; in particular, they
don’t force the outcome to be a bool. Of course, if both
arguments are bools, the outcome is always a bool. It can also
easily be coerced into being a bool by writing for example “bool(x
and y)”.
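For concreteness, an interactive session illustrating this
short-circuit behavior:

>>> True and 6
6
>>> False or None          # evaluates to None, which the shell does not echo
>>> bool(True and 6)
True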
Resolved Issues
(See also the Review section above.)
Because the repr() or str() of a bool value is different from an
int value, some code (for example doctest-based unit tests, and
possibly database code that relies on things like “%s” % truth)
may fail. It is easy to work around this (without explicitly
referencing the bool type), and it is expected that this only
affects a very small amount of code that can easily be fixed.
Other languages (C99, C++, Java) name the constants “false” and
“true”, in all lowercase. For Python, I prefer to stick with
the example set by the existing built-in constants, which all
use CapitalizedWords: None, Ellipsis, NotImplemented (as well as
all built-in exceptions). Python’s built-in namespace uses all
lowercase for functions and types only.
It has been suggested that, in order to satisfy user
expectations, for every x that is considered true in a Boolean
context, the expression x == True should be true, and likewise
if x is considered false, x == False should be true. In
particular newbies who have only just learned about Boolean
variables are likely to write
if x == True: ...
instead of the correct form,
if x: ...
There seem to be strong psychological and linguistic reasons why
many people are at first uncomfortable with the latter form, but
I believe that the solution should be in education rather than
in crippling the language. After all, == is generally seen as a
transitive operator, meaning that from a==b and b==c we can
deduce a==c. But if any comparison to True were to report
equality when the other operand was a true value of any type,
atrocities like 6==True==7 would hold true, from which one could
infer the falsehood 6==7. That’s unacceptable. (In addition,
it would break backwards compatibility. But even if it didn’t,
I’d still be against this, for the stated reasons.)
Newbies should also be reminded that there’s never a reason to
write
if bool(x): ...
since the bool is implicit in the “if”. Explicit is not
better than implicit here, since the added verbiage impairs
readability and there’s no other interpretation possible. There
is, however, sometimes a reason to write
b = bool(x)
This is useful when it is unattractive to keep a reference to an
arbitrary object x, or when normalization is required for some
other reason. It is also sometimes appropriate to write
i = int(bool(x))
which converts the bool to an int with the value 0 or 1. This
conveys the intention to henceforth use the value as an int.
Implementation
A complete implementation in C has been uploaded to the
SourceForge patch manager: https://bugs.python.org/issue528022
This will soon be checked into CVS for python 2.3a0.
Copyright
This document has been placed in the public domain.
| Final | PEP 285 – Adding a bool type | Standards Track | This PEP proposes the introduction of a new built-in type, bool,
with two constants, False and True. The bool type would be a
straightforward subtype (in C) of the int type, and the values
False and True would behave like 0 and 1 in most respects (for
example, False==0 and True==1 would be true) except repr() and
str(). All built-in operations that conceptually return a Boolean
result will be changed to return False or True instead of 0 or 1;
for example, comparisons, the “not” operator, and predicates like
isinstance(). |
PEP 286 – Enhanced Argument Tuples
Author:
Martin von Löwis <martin at v.loewis.de>
Status:
Deferred
Type:
Standards Track
Created:
03-Mar-2002
Python-Version:
2.3
Post-History:
Table of Contents
Abstract
PEP Deferral
Problem description
Proposed solution
Affected converters
New converters
References
Copyright
Abstract
PyArg_ParseTuple is confronted with difficult memory management if
an argument converter creates new memory. To deal with these
cases, a specialized argument type is proposed.
PEP Deferral
Further exploration of the concepts covered in this PEP has been deferred
for lack of a current champion interested in promoting the goals of the
PEP and collecting and incorporating feedback, and with sufficient
available time to do so effectively.
The resolution of this PEP may also be affected by the resolution of
PEP 426, which proposes the use of a preprocessing step to generate
some aspects of C API interface code.
Problem description
Today, argument tuples keep references to the function arguments,
which are guaranteed to live as long as the argument tuple exists
which is at least as long as the function call is being executed.
In some cases, parsing an argument will allocate new memory, which
is then to be released by the caller. This has two problems:
In case of failure, the application cannot know what memory to
release; most callers don’t even know that they have the
responsibility to release that memory. Example for this are
the N converter (bug #416288 [1]) and the es# converter (bug
#501716 [2]).
Even for successful argument parsing, it is still inconvenient
for the caller to be responsible for releasing the memory. In
some cases, this is unnecessarily inefficient. For example,
the es converter copies the conversion result into memory, even
though there already is a string object that has the right
contents.
Proposed solution
A new type ‘argument tuple’ is introduced. This type derives from
tuple, adding an __dict__ member (at tp_dictoffset -4). Instances
of this type might get the following attributes:
‘failobjects’, a list of objects which need to be deallocated
in case of success
‘okobjects’, a list of objects which will be released when the
argument tuple is released
To manage this type, the following functions will be added, and
used appropriately in ceval.c and getargs.c:
PyArgTuple_New(int);
PyArgTuple_AddFailObject(PyObject*, PyObject*);
PyArgTuple_AddFailMemory(PyObject*, void*);
PyArgTuple_AddOkObject(PyObject*, PyObject*);
PyArgTuple_AddOkMemory(PyObject*, void*);
PyArgTuple_ClearFailed(PyObject*);
When argument parsing fails, all fail objects will be released
through Py_DECREF, and all fail memory will be released through
PyMem_Free. If parsing succeeds, the references to the fail
objects and fail memory are dropped, without releasing anything.
When the argument tuple is released, all ok objects and memory
will be released.
If those functions are called with an object of a different type,
a warning is issued and no further action is taken; usage of the
affected converters without using argument tuples is deprecated.
Affected converters
The following converters will add fail memory and fail objects: N,
es, et, es#, et# (unless memory is passed into the converter)
New converters
To simplify Unicode conversion, the e* converters are duplicated
as E* converters (Es, Et, Es#, Et#). The usage of the E*
converters is identical to that of the e* converters, except that
the application will not need to manage the resulting memory.
This will be implemented through registration of Ok objects with
the argument tuple. The e* converters are deprecated.
References
[1]
infrequent memory leak in pyexpat
(http://bugs.python.org/issue416288)
[2]
“es#” parser marker leaks memory
(http://bugs.python.org/issue501716)
Copyright
This document has been placed in the public domain.
| Deferred | PEP 286 – Enhanced Argument Tuples | Standards Track | PyArg_ParseTuple is confronted with difficult memory management if
an argument converter creates new memory. To deal with these
cases, a specialized argument type is proposed. |
PEP 288 – Generators Attributes and Exceptions
Author:
Raymond Hettinger <python at rcn.com>
Status:
Withdrawn
Type:
Standards Track
Created:
21-Mar-2002
Python-Version:
2.5
Post-History:
Table of Contents
Abstract
Status
Rationale
Specification for Generator Attributes
Specification for Generator Exception Passing
References
Copyright
Abstract
This PEP proposes to enhance generators by providing mechanisms for
raising exceptions and sharing data with running generators.
Status
This PEP is withdrawn. The exception raising mechanism was extended
and subsumed into PEP 343. The attribute passing capability
never built a following, did not have a clear implementation,
and did not have a clean way for the running generator to access
its own namespace.
Rationale
Currently, only class based iterators can provide attributes and
exception handling. However, class based iterators are harder to
write, less compact, less readable, and slower. A better solution
is to enable these capabilities for generators.
Enabling attribute assignments allows data to be passed to and from
running generators. The approach of sharing data using attributes
pervades Python. Other approaches exist but are somewhat hackish
in comparison.
Another evolutionary step is to add a generator method to allow
exceptions to be passed to a generator. Currently, there is no
clean method for triggering exceptions from outside the generator.
Also, generator exception passing helps mitigate the try/finally
prohibition for generators. The need is especially acute for
generators needing to flush buffers or close resources upon termination.
The two proposals are backwards compatible and require no new
keywords. They are being recommended for Python version 2.5.
Specification for Generator Attributes
Essentially, the proposal is to emulate attribute writing for classes.
The only wrinkle is that generators lack a way to refer to instances of
themselves. So, the proposal is to provide a function for discovering
the reference. For example:
def mygen(filename):
self = sys.get_generator()
myfile = open(filename)
for line in myfile:
if len(line) < 10:
continue
self.pos = myfile.tell()
yield line.upper()
g = mygen('sample.txt')
line1 = g.next()
print 'Position', g.pos
Uses for generator attributes include:
Providing generator clients with extra information (as shown
above).
Externally setting control flags governing generator operation
(possibly telling a generator when to step in or step over
data groups).
Writing lazy consumers with complex execution states
(an arithmetic encoder output stream for example).
Writing co-routines (as demonstrated in Dr. Mertz’s articles [1]).
The control flow of ‘yield’ and ‘next’ is unchanged by this
proposal. The only change is that data can be passed to and from the
generator. Most of the underlying machinery is already in place,
only the access function needs to be added.
Specification for Generator Exception Passing
Add a .throw(exception) method to the generator interface:
def logger():
start = time.time()
log = []
try:
while True:
log.append(time.time() - start)
yield log[-1]
except WriteLog:
writelog(log)
g = logger()
for i in [10,20,40,80,160]:
testsuite(i)
g.next()
g.throw(WriteLog)
There is no existing work-around for triggering an exception
inside a generator. It is the only case in Python where active
code cannot be excepted to or through.
Generator exception passing also helps address an intrinsic
limitation on generators, the prohibition against their using
try/finally to trigger clean-up code (PEP 255).
Note A: The name of the throw method was selected for several
reasons. Raise is a keyword and so cannot be used as a method
name. Unlike raise which immediately raises an exception from the
current execution point, throw will first return to the generator
and then raise the exception. The word throw is suggestive of
putting the exception in another location. The word throw is
already associated with exceptions in other languages.
Alternative method names were considered: resolve(), signal(),
genraise(), raiseinto(), and flush(). None of these fit as well
as throw().
Note B: To keep the throw() syntax simple only the instance
version of the raise syntax would be supported (no variants for
“raise string” or “raise class, instance”).
Calling g.throw(instance) would correspond to writing
raise instance immediately after the most recent yield.
References
[1]
Dr. David Mertz’s draft columns for Charming Python
http://gnosis.cx/publish/programming/charming_python_b5.txt
http://gnosis.cx/publish/programming/charming_python_b7.txt
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 288 – Generators Attributes and Exceptions | Standards Track | This PEP proposes to enhance generators by providing mechanisms for
raising exceptions and sharing data with running generators. |
PEP 289 – Generator Expressions
Author:
Raymond Hettinger <python at rcn.com>
Status:
Final
Type:
Standards Track
Created:
30-Jan-2002
Python-Version:
2.4
Post-History:
22-Oct-2003
Table of Contents
Abstract
Rationale
BDFL Pronouncements
The Details
Early Binding versus Late Binding
Reduction Functions
Acknowledgements
References
Copyright
Abstract
This PEP introduces generator expressions as a high performance,
memory efficient generalization of list comprehensions PEP 202 and
generators PEP 255.
Rationale
Experience with list comprehensions has shown their widespread
utility throughout Python. However, many of the use cases do
not need to have a full list created in memory. Instead, they
only need to iterate over the elements one at a time.
For instance, the following summation code will build a full list of
squares in memory, iterate over those values, and, when the reference
is no longer needed, delete the list:
sum([x*x for x in range(10)])
Memory is conserved by using a generator expression instead:
sum(x*x for x in range(10))
Similar benefits are conferred on constructors for container objects:
s = set(word for line in page for word in line.split())
d = dict( (k, func(k)) for k in keylist)
Generator expressions are especially useful with functions like sum(),
min(), and max() that reduce an iterable input to a single value:
max(len(line) for line in file if line.strip())
Generator expressions also address some examples of functionals coded
with lambda:
reduce(lambda s, a: s + a.myattr, data, 0)
reduce(lambda s, a: s + a[3], data, 0)
These simplify to:
sum(a.myattr for a in data)
sum(a[3] for a in data)
List comprehensions greatly reduced the need for filter() and map().
Likewise, generator expressions are expected to minimize the need
for itertools.ifilter() and itertools.imap(). In contrast, the
utility of other itertools will be enhanced by generator expressions:
dotproduct = sum(x*y for x,y in itertools.izip(x_vector, y_vector))
Having a syntax similar to list comprehensions also makes it easy to
convert existing code into a generator expression when scaling up
an application.
Early timings showed that generators had a significant performance
advantage over list comprehensions. However, the latter were highly
optimized for Py2.4 and now the performance is roughly comparable
for small to mid-sized data sets. As the data volumes grow larger,
generator expressions tend to perform better because they do not
exhaust cache memory and they allow Python to re-use objects between
iterations.
BDFL Pronouncements
This PEP is ACCEPTED for Py2.4.
The Details
(None of this is exact enough in the eye of a reader from Mars, but I
hope the examples convey the intention well enough for a discussion in
c.l.py. The Python Reference Manual should contain a 100% exact
semantic and syntactic specification.)
The semantics of a generator expression are equivalent to creating
an anonymous generator function and calling it. For example:
g = (x**2 for x in range(10))
print g.next()
is equivalent to:
def __gen(exp):
for x in exp:
yield x**2
g = __gen(iter(range(10)))
print g.next()
Only the outermost for-expression is evaluated immediately, the other
expressions are deferred until the generator is run:
g = (tgtexp for var1 in exp1 if exp2 for var2 in exp3 if exp4)
is equivalent to:
def __gen(bound_exp):
for var1 in bound_exp:
if exp2:
for var2 in exp3:
if exp4:
yield tgtexp
g = __gen(iter(exp1))
del __gen
The syntax requires that a generator expression always needs to be
directly inside a set of parentheses and cannot have a comma on
either side. With reference to the file Grammar/Grammar in CVS,
two rules change:
The rule:
atom: '(' [testlist] ')'
changes to:
atom: '(' [testlist_gexp] ')'
where testlist_gexp is almost the same as listmaker, but only
allows a single test after ‘for’ … ‘in’:
testlist_gexp: test ( gen_for | (',' test)* [','] )
The rule for arglist needs similar changes.
This means that you can write:
sum(x**2 for x in range(10))
but you would have to write:
reduce(operator.add, (x**2 for x in range(10)))
and also:
g = (x**2 for x in range(10))
i.e. if a function call has a single positional argument, it can be
a generator expression without extra parentheses, but in all other
cases you have to parenthesize it.
The exact details were checked in to Grammar/Grammar version 1.49.
The loop variable (if it is a simple variable or a tuple of simple
variables) is not exposed to the surrounding function. This
facilitates the implementation and makes typical use cases more
reliable. In some future version of Python, list comprehensions
will also hide the induction variable from the surrounding code
(and, in Py2.4, warnings will be issued for code accessing the
induction variable).
For example:
x = "hello"
y = list(x for x in "abc")
print x # prints "hello", not "c"
List comprehensions will remain unchanged. For example:
[x for x in S] # This is a list comprehension.
[(x for x in S)] # This is a list containing one generator
# expression.
Unfortunately, there is currently a slight syntactic difference.
The expression:
[x for x in 1, 2, 3]
is legal, meaning:
[x for x in (1, 2, 3)]
But generator expressions will not allow the former version:
(x for x in 1, 2, 3)
is illegal.
The former list comprehension syntax will become illegal in Python
3.0, and should be deprecated in Python 2.4 and beyond.
List comprehensions also “leak” their loop variable into the
surrounding scope. This will also change in Python 3.0, so that
the semantic definition of a list comprehension in Python 3.0 will
be equivalent to list(<generator expression>). Python 2.4 and
beyond should issue a deprecation warning if a list comprehension’s
loop variable has the same name as a variable used in the
immediately surrounding scope.
Early Binding versus Late Binding
After much discussion, it was decided that the first (outermost)
for-expression should be evaluated immediately and that the remaining
expressions be evaluated when the generator is executed.
Asked to summarize the reasoning for binding the first expression,
Guido offered [1]:
Consider sum(x for x in foo()). Now suppose there's a bug in foo()
that raises an exception, and a bug in sum() that raises an
exception before it starts iterating over its argument. Which
exception would you expect to see? I'd be surprised if the one in
sum() was raised rather the one in foo(), since the call to foo()
is part of the argument to sum(), and I expect arguments to be
processed before the function is called.
OTOH, in sum(bar(x) for x in foo()), where sum() and foo()
are bugfree, but bar() raises an exception, we have no choice but
to delay the call to bar() until sum() starts iterating -- that's
part of the contract of generators. (They do nothing until their
next() method is first called.)
Various use cases were proposed for binding all free variables when
the generator is defined. And some proponents felt that the resulting
expressions would be easier to understand and debug if bound immediately.
However, Python takes a late binding approach to lambda expressions and
has no precedent for automatic, early binding. It was felt that
introducing a new paradigm would unnecessarily introduce complexity.
After exploring many possibilities, a consensus emerged that binding
issues were hard to understand and that users should be strongly
encouraged to use generator expressions inside functions that consume
their arguments immediately. For more complex applications, full
generator definitions are always superior in terms of being obvious
about scope, lifetime, and binding [2].
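A small illustrative example of the chosen semantics (values
invented for the example): the outermost iterable is bound when the
expression is created, while other free variables are resolved only
when the generator actually runs:

factor = 2
g = (x * factor for x in [1, 2, 3])   # the list [1, 2, 3] is bound here
factor = 10                           # ...but factor is looked up later
print list(g)                         # prints [10, 20, 30]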
Reduction Functions
The utility of generator expressions is greatly enhanced when combined
with reduction functions like sum(), min(), and max(). The heapq
module in Python 2.4 includes two new reduction functions: nlargest()
and nsmallest(). Both work well with generator expressions and keep
no more than n items in memory at one time.
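For example (sample data invented for illustration), both reduction
functions accept a generator expression directly:

import heapq

scores = [87, 42, 99, 15, 63]                        # hypothetical sample data
print heapq.nlargest(2, (s + 1 for s in scores))     # prints [100, 88]
print heapq.nsmallest(2, (s + 1 for s in scores))    # prints [16, 43]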
Acknowledgements
Raymond Hettinger first proposed the idea of “generator
comprehensions” in January 2002.
Peter Norvig resurrected the discussion in his proposal for
Accumulation Displays.
Alex Martelli provided critical measurements that proved the
performance benefits of generator expressions. He also provided
strong arguments that they were a desirable thing to have.
Phillip Eby suggested “iterator expressions” as the name.
Subsequently, Tim Peters suggested the name “generator expressions”.
Armin Rigo, Tim Peters, Guido van Rossum, Samuele Pedroni,
Hye-Shik Chang and Raymond Hettinger teased out the issues surrounding
early versus late binding [1].
Jiwon Seo single-handedly implemented various versions of the proposal
including the final version loaded into CVS. Along the way, there
were periodic code reviews by Hye-Shik Chang and Raymond Hettinger.
Guido van Rossum made the key design decisions after comments from
Armin Rigo and newsgroup discussions. Raymond Hettinger provided
the test suite, documentation, tutorial, and examples [2].
References
[1] (1, 2)
Discussion over the relative merits of early versus late binding
https://mail.python.org/pipermail/python-dev/2004-April/044555.html
[2] (1, 2)
Patch discussion and alternative patches on Source Forge
https://bugs.python.org/issue872326
Copyright
This document has been placed in the public domain.
| Final | PEP 289 – Generator Expressions | Standards Track | This PEP introduces generator expressions as a high performance,
memory efficient generalization of list comprehensions PEP 202 and
generators PEP 255. |
PEP 290 – Code Migration and Modernization
Author:
Raymond Hettinger <python at rcn.com>
Status:
Active
Type:
Informational
Created:
06-Jun-2002
Post-History:
Table of Contents
Abstract
Rationale
Guidelines for New Entries
Migration Issues
Comparison Operators Not a Shortcut for Producing 0 or 1
Modernization Procedures
Python 2.4 or Later
Inserting and Popping at the Beginning of Lists
Simplifying Custom Sorts
Replacing Common Uses of Lambda
Simplified Reverse Iteration
Python 2.3 or Later
Testing String Membership
Replace apply() with a Direct Function Call
Python 2.2 or Later
Testing Dictionary Membership
Looping Over Dictionaries
stat Methods
Reduce Dependency on types Module
Avoid Variable Names that Clash with the __builtins__ Module
Python 2.1 or Later
whrandom Module Deprecated
Python 2.0 or Later
String Methods
startswith and endswith String Methods
The atexit Module
Python 1.5 or Later
Class-Based Exceptions
All Python Versions
Testing for None
Copyright
Abstract
This PEP is a collection of procedures and ideas for updating Python
applications when newer versions of Python are installed.
The migration tips highlight possible areas of incompatibility and
make suggestions on how to find and resolve those differences. The
modernization procedures show how older code can be updated to take
advantage of new language features.
Rationale
This repository of procedures serves as a catalog or checklist of
known migration issues and procedures for addressing those issues.
Migration issues can arise for several reasons. Some obsolete
features are slowly deprecated according to the guidelines in PEP 4.
Also, some code relies on undocumented behaviors which are
subject to change between versions. Some code may rely on behavior
which was subsequently shown to be a bug and that behavior changes
when the bug is fixed.
Modernization options arise when new versions of Python add features
that allow improved clarity or higher performance than previously
available.
Guidelines for New Entries
Developers with commit access may update this PEP directly. Others
can send their ideas to a developer for possible inclusion.
While a consistent format makes the repository easier to use, feel
free to add or subtract sections to improve clarity.
Grep patterns may be supplied as tool to help maintainers locate code
for possible updates. However, fully automated search/replace style
regular expressions are not recommended. Instead, each code fragment
should be evaluated individually.
The contra-indications section is the most important part of a new
entry. It lists known situations where the update SHOULD NOT be
applied.
Migration Issues
Comparison Operators Not a Shortcut for Producing 0 or 1
Prior to Python 2.3, comparison operations returned 0 or 1 rather
than True or False. Some code may have used this as a shortcut for
producing zero or one in places where their boolean counterparts are
not appropriate. For example:
def identity(m=1):
"""Create and m-by-m identity matrix"""
return [[i==j for i in range(m)] for j in range(m)]
In Python 2.2, a call to identity(2) would produce:
[[1, 0], [0, 1]]
In Python 2.3, the same call would produce:
[[True, False], [False, True]]
Since booleans are a subclass of integers, the matrix would continue
to calculate normally, but it will not print as expected. The list
comprehension should be changed to read:
return [[int(i==j) for i in range(m)] for j in range(m)]
There are similar concerns when storing data to be used by other
applications which may expect a number instead of True or False.
Modernization Procedures
Procedures are grouped by the Python version required to be able to
take advantage of the modernization.
Python 2.4 or Later
Inserting and Popping at the Beginning of Lists
Python’s lists are implemented to perform best with appends and pops on
the right. Use of pop(0) or insert(0, x) triggers O(n) data
movement for the entire list. To help address this need, Python 2.4
introduces a new container, collections.deque() which has efficient
append and pop operations on both the left and right (the trade-off
is much slower getitem/setitem access). The new container is especially
helpful for implementing data queues:
Pattern:
c = list(data) --> c = collections.deque(data)
c.pop(0) --> c.popleft()
c.insert(0, x) --> c.appendleft(x)
Locating:
grep pop(0 or
grep insert(0
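For example, a short sketch of the deque-based queue (task names
invented for illustration):

from collections import deque

queue = deque(["task1", "task2"])
queue.appendleft("task0")     # O(1), unlike list.insert(0, ...)
print queue.popleft()         # prints 'task0', also O(1)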
Simplifying Custom Sorts
In Python 2.4, the sort method for lists and the new sorted
built-in function both accept a key function for computing sort
keys. Unlike the cmp function which gets applied to every
comparison, the key function gets applied only once to each record.
It is much faster than cmp and typically more readable while using
less code. The key function also maintains the stability of the
sort (records with the same key are left in their original order).
Original code using a comparison function:
names.sort(lambda x,y: cmp(x.lower(), y.lower()))
Alternative original code with explicit decoration:
tempnames = [(n.lower(), n) for n in names]
tempnames.sort()
names = [original for decorated, original in tempnames]
Revised code using a key function:
names.sort(key=str.lower) # case-insensitive sort
Locating: grep sort *.py
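For example, the new sorted() built-in accepts the same key argument
and leaves the original list untouched (sample names invented for
illustration):

names = ["Charlie", "alice", "Bob"]
print sorted(names, key=str.lower)   # prints ['alice', 'Bob', 'Charlie']
print names                          # the original order is preserved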
Replacing Common Uses of Lambda
In Python 2.4, the operator module gained two new functions,
itemgetter() and attrgetter() that can replace common uses of
the lambda keyword. The new functions run faster and
are considered by some to improve readability.
Pattern:
lambda r: r[2] --> itemgetter(2)
lambda r: r.myattr --> attrgetter('myattr')
Typical contexts:
sort(studentrecords, key=attrgetter('gpa')) # set a sort field
map(attrgetter('lastname'), studentrecords) # extract a field
Locating: grep lambda *.py
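For example (sample records invented for illustration):

from operator import itemgetter

rows = [("pear", 3), ("apple", 1), ("fig", 2)]
rows.sort(key=itemgetter(1))         # sort by the count field
print rows                           # [('apple', 1), ('fig', 2), ('pear', 3)]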
Simplified Reverse Iteration
Python 2.4 introduced the reversed builtin function for reverse
iteration. The existing approaches to reverse iteration suffered
from wordiness, performance issues (speed and memory consumption),
and/or lack of clarity. A preferred style is to express the
sequence in a forwards direction, apply reversed to the result,
and then loop over the resulting fast, memory friendly iterator.
Original code expressed with half-open intervals:
for i in range(n-1, -1, -1):
print seqn[i]
Alternative original code reversed in multiple steps:
rseqn = list(seqn)
rseqn.reverse()
for value in rseqn:
print value
Alternative original code expressed with extending slicing:
for value in seqn[::-1]:
print value
Revised code using the reversed function:
for value in reversed(seqn):
print value
Python 2.3 or Later
Testing String Membership
In Python 2.3, for string2 in string1, the length restriction on
string2 is lifted; it can now be a string of any length. When
searching for a substring, where you don’t care about the position of
the substring in the original string, using the in operator makes
the meaning clear.
Pattern:
string1.find(string2) >= 0 --> string2 in string1
string1.find(string2) != -1 --> string2 in string1
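For example (strings invented for illustration), the two forms are
equivalent when only presence matters:

line = "error: disk full"
if line.find("disk") >= 0:           # pre-2.3 spelling
    print "matched"
if "disk" in line:                   # Python 2.3+ spelling
    print "matched"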
Replace apply() with a Direct Function Call
In Python 2.3, apply() was marked for Pending Deprecation because it
was made obsolete by Python 1.6’s introduction of * and ** in
function calls. Using a direct function call was always a little
faster than apply() because it saved the lookup for the builtin.
Now, apply() is even slower due to its use of the warnings module.
Pattern:
apply(f, args, kwds) --> f(*args, **kwds)
Note: The Pending Deprecation was removed from apply() in Python 2.3.3
since it creates pain for people who need to maintain code that works
with Python versions as far back as 1.5.2, where there was no
alternative to apply(). The function remains deprecated, however.
Python 2.2 or Later
Testing Dictionary Membership
For testing dictionary membership, use the ‘in’ keyword instead of the
‘has_key()’ method. The result is shorter and more readable. The
style becomes consistent with tests for membership in lists. The
result is slightly faster because has_key requires an attribute
search and uses a relatively expensive function call.
Pattern:
if d.has_key(k): --> if k in d:
Contra-indications:
Some dictionary-like objects may not define a
__contains__() method:
if dictlike.has_key(k)
Locating: grep has_key
Looping Over Dictionaries
Use the new iter methods for looping over dictionaries. The
iter methods are faster because they do not have to create a new
list object with a complete copy of all of the keys, values, or items.
Selecting only keys, values, or items (key/value pairs) as needed
saves the time for creating throwaway object references and, in the
case of items, saves a second hash look-up of the key.
Pattern:
for key in d.keys(): --> for key in d:
for value in d.values(): --> for value in d.itervalues():
for key, value in d.items():
--> for key, value in d.iteritems():
Contra-indications:
If you need a list, do not change the return type:
def getids(): return d.keys()
Some dictionary-like objects may not define
iter methods:
for k in dictlike.keys():
Iterators do not support slicing, sorting or other operations:
k = d.keys(); j = k[:]
Dictionary iterators prohibit modifying the dictionary:
for k in d.keys(): del d[k]
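A minimal sketch of the preferred iterator forms from the pattern
above (Python 2.2+ spelling; the sample dictionary is invented for
the example):

d = {"a": 1, "b": 2}
for key in d:                        # iterate over keys, no intermediate list
    print key
for key, value in d.iteritems():     # key/value pairs without copying
    print key, value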
stat Methods
Replace stat constants or indices with new os.stat attributes
and methods. The os.stat attributes and methods are not
order-dependent and do not require an import of the stat module.
Pattern:
os.stat("foo")[stat.ST_MTIME] --> os.stat("foo").st_mtime
os.stat("foo")[stat.ST_MTIME] --> os.path.getmtime("foo")
Locating: grep os.stat or grep stat.S
Reduce Dependency on types Module
The types module is likely to be deprecated in the future. Use
built-in constructor functions instead. They may be slightly faster.
Pattern:
isinstance(v, types.IntType) --> isinstance(v, int)
isinstance(s, types.StringTypes) --> isinstance(s, basestring)
Full use of this technique requires Python 2.3 or later
(basestring was introduced in Python 2.3), but Python 2.2 is
sufficient for most uses.
Locating: grep types *.py | grep import
Avoid Variable Names that Clash with the __builtins__ Module
In Python 2.2, new built-in types were added for dict and file.
Scripts should avoid assigning variable names that mask those types.
The same advice also applies to existing builtins like list.
Pattern:
file = open('myfile.txt') --> f = open('myfile.txt')
dict = obj.__dict__ --> d = obj.__dict__
Locating: grep 'file ' *.py
Python 2.1 or Later
whrandom Module Deprecated
All random-related methods have been collected in one place, the
random module.
Pattern:
import whrandom --> import random
Locating: grep whrandom
Python 2.0 or Later
String Methods
The string module is likely to be deprecated in the future. Use
string methods instead. They’re faster too.
Pattern:
import string ; string.method(s, ...) --> s.method(...)
c in string.whitespace --> c.isspace()
Locating: grep string *.py | grep import
startswith and endswith String Methods
Use these string methods instead of slicing. No slice has to be
created and there’s no risk of miscounting.
Pattern:
"foobar"[:3] == "foo" --> "foobar".startswith("foo")
"foobar"[-3:] == "bar" --> "foobar".endswith("bar")
The atexit Module
The atexit module supports multiple functions to be executed upon
program termination. Also, it supports parameterized functions.
Unfortunately, its implementation conflicts with the sys.exitfunc
attribute which only supports a single exit function. Code relying
on sys.exitfunc may interfere with other modules (including library
modules) that elect to use the newer and more versatile atexit module.
Pattern:
sys.exitfunc = myfunc --> atexit.register(myfunc)
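For example, a brief sketch of the atexit form (the function name is
invented for illustration):

import atexit

def flush_logs():
    print "flushing logs on exit"

atexit.register(flush_logs)          # flush_logs runs at interpreter exit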
Python 1.5 or Later
Class-Based Exceptions
String exceptions are deprecated, so derive from the Exception
base class. Unlike the obsolete string exceptions, class exceptions
all derive from another exception or the Exception base class.
This allows meaningful groupings of exceptions. It also allows an
“except Exception” clause to catch all exceptions.
Pattern:
NewError = 'NewError' --> class NewError(Exception): pass
Locating: Use PyChecker.
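For example, a minimal class-based replacement sketch (names invented
for illustration):

class NewError(Exception):
    """Raised in place of the old 'NewError' string exception."""
    pass

try:
    raise NewError("something went wrong")
except Exception, e:                 # an "except Exception" clause catches it
    print e                          # prints: something went wrong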
All Python Versions
Testing for None
Since there is only one None object, equality can be tested with
identity. Identity tests are slightly faster than equality tests.
Also, some object types may overload comparison, so equality testing
may be much slower.
Pattern:
if v == None --> if v is None:
if v != None --> if v is not None:
Locating: grep '== None' or grep '!= None'
Copyright
This document has been placed in the public domain.
| Active | PEP 290 – Code Migration and Modernization | Informational | This PEP is a collection of procedures and ideas for updating Python
applications when newer versions of Python are installed. |
PEP 291 – Backward Compatibility for the Python 2 Standard Library
Author:
Neal Norwitz <nnorwitz at gmail.com>
Status:
Final
Type:
Informational
Created:
06-Jun-2002
Python-Version:
2.3
Post-History:
Table of Contents
Abstract
Rationale
Features to Avoid
Backward Compatible Packages, Modules, and Tools
Notes
Copyright
Abstract
This PEP describes the packages and modules in the Python 2
standard library which should remain backward compatible with
previous versions of Python. If a package is not listed here,
then it need only remain compatible with the version of Python it
is distributed with.
This PEP has no bearing on the Python 3 standard library.
Rationale
Authors have various reasons why packages and modules should
continue to work with previous versions of Python. In order to
maintain backward compatibility for these modules while moving the
rest of the standard library forward, it is necessary to know
which modules can be modified and which should use old and
possibly deprecated features.
Generally, authors should attempt to keep changes backward
compatible with the previous released version of Python in order
to make bug fixes easier to backport.
In addition to a package or module being listed in this PEP,
authors must add a comment at the top of each file documenting
the compatibility requirement.
When a major version of Python is released, a Subversion branch is
created for continued maintenance and bug fix releases. A package
version on a branch may have a different compatibility requirement
than the same package on the trunk (i.e. current bleeding-edge
development). Where appropriate, these branch compatibilities are
listed below.
Features to Avoid
The following list contains common features to avoid in order
to maintain backward compatibility with each version of Python.
This list is not complete! It is only meant as a general guide.
Note that the features below were implemented in the version
following the one listed. For example, features listed next to
1.5.2 were implemented in 2.0.
Version    Features to Avoid
-------    -----------------
1.5.2      string methods, Unicode, list comprehensions,
           augmented assignment (eg, +=), zip(), import x as y,
           dict.setdefault(), print >> f,
           calling f(*args, **kw), plus all features below

2.0        nested scopes, rich comparisons,
           function attributes, plus all features below

2.1        use of object or new-style classes, iterators,
           using generators, nested scopes, or //
           without from __future__ import … statement,
           isinstance(X, TYP) where TYP is a tuple of types,
           plus all features below

2.2        bool, True, False, basestring, enumerate(),
           {}.pop(), PendingDeprecationWarning,
           Universal Newlines, plus all features below

2.3        generator expressions, multi-line imports,
           decorators, int/long unification, set/frozenset,
           reversed(), sorted(), “”.rsplit(),
           plus all features below

2.4        with statement, conditional expressions,
           combined try/except/finally, relative imports,
           yield expressions or generator.throw/send/close(),
           plus all features below

2.5        with statement without from __future__ import,
           io module, str.format(), except as,
           bytes, b’’ literals, property.setter/deleter
Backward Compatible Packages, Modules, and Tools
Package/Module    Maintainer(s)           Python Version    Notes
--------------    -------------------     --------------    -----
2to3              Benjamin Peterson       2.5
bsddb             Greg Smith,             2.1
                  Barry Warsaw
compiler          Jeremy Hylton           2.1
decimal           Raymond Hettinger       2.3               [2]
distutils         Tarek Ziade             2.3
email             Barry Warsaw            2.1 / 2.3         [1]
pkgutil           Phillip Eby             2.3
platform          Marc-Andre Lemburg      1.5.2
pybench           Marc-Andre Lemburg      1.5.2             [3]
sre               Fredrik Lundh           2.1
subprocess        Peter Astrand           2.2
wsgiref           Phillip J. Eby          2.1
xml (PyXML)       Martin v. Loewis        2.0
xmlrpclib         Fredrik Lundh           2.1

Tool              Maintainer(s)           Python Version
----              -------------------     --------------
None
Notes
[1] The email package version 2 was distributed with Python up to
    Python 2.3, and this must remain Python 2.1 compatible. email
    package version 3 will be distributed with Python 2.4 and will
    need to remain compatible only with Python 2.3.
[2] Specification updates will be treated as bugfixes and backported.
    Python 2.3 compatibility will be kept for at least Python 2.4.
    The decision will be revisited for Python 2.5 and not changed
    unless compelling advantages arise.
[3] pybench lives under the Tools/ directory. Compatibility with
    older Python versions is needed in order to be able to compare
    performance between Python versions. New features may still
    be used in new tests, which may then be configured to fail
    gracefully on import by the tool in older Python versions.
Copyright
This document has been placed in the public domain.
| Final | PEP 291 – Backward Compatibility for the Python 2 Standard Library | Informational | This PEP describes the packages and modules in the Python 2
standard library which should remain backward compatible with
previous versions of Python. If a package is not listed here,
then it need only remain compatible with the version of Python it
is distributed with. |
PEP 292 – Simpler String Substitutions
Author:
Barry Warsaw <barry at python.org>
Status:
Final
Type:
Standards Track
Created:
18-Jun-2002
Python-Version:
2.4
Post-History:
18-Jun-2002, 23-Mar-2004, 22-Aug-2004
Replaces:
215
Table of Contents
Abstract
Rationale
A Simpler Proposal
Why $ and Braces?
Comparison to PEP 215
Internationalization
Reference Implementation
References
Copyright
Abstract
This PEP describes a simpler string substitution feature, also
known as string interpolation. This PEP is “simpler” in two
respects:
Python’s current string substitution feature
(i.e. %-substitution) is complicated and error prone. This PEP
is simpler at the cost of some expressiveness.
PEP 215 proposed an alternative string interpolation feature,
introducing a new $ string prefix. PEP 292 is simpler than
this because it involves no syntax changes and has much simpler
rules for what substitutions can occur in the string.
Rationale
Python currently supports a string substitution syntax based on
C’s printf() ‘%’ formatting character [1]. While quite rich,
%-formatting codes are also error prone, even for
experienced Python programmers. A common mistake is to leave off
the trailing format character, e.g. the ‘s’ in "%(name)s".
In addition, the rules for what can follow a % sign are fairly
complex, while the usual application rarely needs such complexity.
Most scripts need to do some string interpolation, but most of
those use simple ‘stringification’ formats, i.e. %s or %(name)s.
This form should be made simpler and less error prone.
A Simpler Proposal
We propose the addition of a new class, called Template, which
will live in the string module. The Template class supports new
rules for string substitution; its value contains placeholders,
introduced with the $ character. The following rules for
$-placeholders apply:
$$ is an escape; it is replaced with a single $
$identifier names a substitution placeholder matching a mapping
key of “identifier”. By default, “identifier” must spell a
Python identifier as defined in [2]. The first non-identifier
character after the $ character terminates this placeholder
specification.
${identifier} is equivalent to $identifier. It is required
when valid identifier characters follow the placeholder but are
not part of the placeholder, e.g. "${noun}ification".
If the $ character appears at the end of the line, or is followed
by any other character than those described above, a ValueError
will be raised at interpolation time. Values in mapping are
converted automatically to strings.
No other characters have special meaning, however it is possible
to derive from the Template class to define different substitution
rules. For example, a derived class could allow for periods in
the placeholder (e.g. to support a kind of dynamic namespace and
attribute path lookup), or could define a delimiter character
other than $.
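As a rough sketch of such a derived class (assuming the delimiter and
idpattern class attributes of the eventual standard-library implementation;
the class name and rules here are illustrative):

from string import Template

class DottedTemplate(Template):
    delimiter = '%'                     # use % rather than $
    idpattern = r'[_a-z][_a-z0-9.]*'    # allow periods in placeholder names

t = DottedTemplate('%{user.name} was born in %{user.country}')
print t.substitute({'user.name': 'Guido', 'user.country': 'the Netherlands'})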
Once the Template has been created, substitutions can be performed
by calling one of two methods:
substitute(). This method returns a new string which results
when the values of a mapping are substituted for the
placeholders in the Template. If there are placeholders which
are not present in the mapping, a KeyError will be raised.
safe_substitute(). This is similar to the substitute() method,
except that KeyErrors are never raised (due to placeholders
missing from the mapping). When a placeholder is missing, the
original placeholder will appear in the resulting string.
Here are some examples:
>>> from string import Template
>>> s = Template('${name} was born in ${country}')
>>> print s.substitute(name='Guido', country='the Netherlands')
Guido was born in the Netherlands
>>> print s.substitute(name='Guido')
Traceback (most recent call last):
[...]
KeyError: 'country'
>>> print s.safe_substitute(name='Guido')
Guido was born in ${country}
The signature of substitute() and safe_substitute() allows for
passing the mapping of placeholders to values, either as a single
dictionary-like object in the first positional argument, or as
keyword arguments as shown above. The exact details and
signatures of these two methods are reserved for the standard
library documentation.
Why $ and Braces?
The BDFL said it best [3]: “The $ means “substitution” in so many
languages besides Perl that I wonder where you’ve been. […]
We’re copying this from the shell.”
Thus the substitution rules are chosen because of the similarity
with so many other languages. This makes the substitution rules
easier to teach, learn, and remember.
Comparison to PEP 215
PEP 215 describes an alternate proposal for string interpolation.
Unlike that PEP, this one does not propose any new syntax for
Python. All the proposed new features are embodied in a new
library module. PEP 215 proposes a new string prefix
representation such as $"" which signal to Python that a new type
of string is present. $-strings would have to interact with the
existing r-prefixes and u-prefixes, essentially doubling the
number of string prefix combinations.
PEP 215 also allows for arbitrary Python expressions inside the
$-strings, so that you could do things like:
import sys
print $"sys = $sys, sys = $sys.modules['sys']"
which would return:
sys = <module 'sys' (built-in)>, sys = <module 'sys' (built-in)>
It’s generally accepted that the rules in PEP 215 are safe in the
sense that they introduce no new security issues (see PEP 215,
“Security Issues” for details). However, the rules are still
quite complex, and make it more difficult to see the substitution
placeholder in the original $-string.
The interesting thing is that the Template class defined in this
PEP is designed for inheritance and, with a little extra work,
it’s possible to support PEP 215’s functionality using existing
Python syntax.
For example, one could define subclasses of Template and dict that
allowed for a more complex placeholder syntax and a mapping that
evaluated those placeholders.
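A minimal sketch of that idea follows (again assuming the idpattern hook of
the eventual implementation); the names are illustrative and eval() is used
purely for demonstration, with the usual security caveats:

import sys
from string import Template

class ExprTemplate(Template):
    # permit dots, brackets and quotes so a placeholder can be an expression
    idpattern = r'[_a-z][_a-z0-9._\[\]\'"]*'

class EvalMapping(dict):
    def __getitem__(self, key):
        return eval(key, globals(), self)   # evaluate the placeholder text

t = ExprTemplate('major version is ${sys.version_info[0]}')
print t.substitute(EvalMapping(sys=sys))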
Internationalization
The implementation supports internationalization by recording the
original template string in the Template instance’s template
attribute. This attribute would serve as the lookup key in a
gettext-based catalog. It is up to the application to turn the
resulting string back into a Template for substitution.
However, the Template class was designed to work more intuitively
in an internationalized application, by supporting the mixing-in
of Template and unicode subclasses. Thus an internationalized
application could create an application-specific subclass,
multiply inheriting from Template and unicode, and using instances
of that subclass as the gettext catalog key. Further, the
subclass could alias the special __mod__() method to either
.substitute() or .safe_substitute() to provide a more traditional
string/unicode like %-operator substitution syntax.
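A rough sketch of the kind of subclass described above (Python 2 era; the
class name, the toy catalog, and the _() helper are illustrative and not part
of the PEP):

from string import Template

class i18nTemplate(Template, unicode):
    __mod__ = Template.safe_substitute      # keep the familiar %-style spelling

catalog = {u'$name was born in $country':
           u'$name, born in $country'}      # stand-in for a gettext catalog

def _(msg):
    return i18nTemplate(catalog.get(msg.template, msg.template))

key = i18nTemplate(u'$name was born in $country')
print _(key) % {'name': u'Guido', 'country': u'the Netherlands'}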
Reference Implementation
The implementation [4] has been committed to the Python 2.4 source tree.
References
[1]
String Formatting Operations
https://docs.python.org/release/2.6/library/stdtypes.html#string-formatting-operations
[2]
Identifiers and Keywords
https://docs.python.org/release/2.6/reference/lexical_analysis.html#identifiers-and-keywords
[3]
https://mail.python.org/pipermail/python-dev/2002-June/025652.html
[4]
Reference Implementation
http://sourceforge.net/tracker/index.php?func=detail&aid=1014055&group_id=5470&atid=305470
Copyright
This document has been placed in the public domain.
| Final | PEP 292 – Simpler String Substitutions | Standards Track | This PEP describes a simpler string substitution feature, also
known as string interpolation. This PEP is “simpler” in two
respects: |
PEP 293 – Codec Error Handling Callbacks
Author:
Walter Dörwald <walter at livinglogic.de>
Status:
Final
Type:
Standards Track
Created:
18-Jun-2002
Python-Version:
2.3
Post-History:
19-Jun-2002
Table of Contents
Abstract
Specification
Rationale
Implementation Notes
Backwards Compatibility
References
Copyright
Abstract
This PEP aims at extending Python’s fixed codec error handling
schemes with a more flexible callback based approach.
Python currently uses a fixed error handling for codec error
handlers. This PEP describes a mechanism which allows Python to
use function callbacks as error handlers. With these more
flexible error handlers it is possible to add new functionality to
existing codecs by e.g. providing fallback solutions or different
encodings for cases where the standard codec mapping does not
apply.
Specification
Currently the set of codec error handling algorithms is fixed to
either “strict”, “replace” or “ignore” and the semantics of these
algorithms is implemented separately for each codec.
The proposed patch will make the set of error handling algorithms
extensible through a codec error handler registry which maps
handler names to handler functions. This registry consists of the
following two C functions:
int PyCodec_RegisterError(const char *name, PyObject *error)
PyObject *PyCodec_LookupError(const char *name)
and their Python counterparts:
codecs.register_error(name, error)
codecs.lookup_error(name)
PyCodec_LookupError raises a LookupError if no callback function
has been registered under this name.
Similar to the encoding name registry there is no way of
unregistering callback functions or iterating through the
available functions.
The callback functions will be used in the following way by the
codecs: when the codec encounters an encoding/decoding error, the
callback function is looked up by name, the information about the
error is stored in an exception object and the callback is called
with this object. The callback returns information about how to
proceed (or raises an exception).
For encoding, the exception object will look like this:
class UnicodeEncodeError(UnicodeError):
    def __init__(self, encoding, object, start, end, reason):
        UnicodeError.__init__(self,
            ("encoding '%s' can't encode characters "
             "in positions %d-%d: %s") % (encoding,
             start, end-1, reason))
        self.encoding = encoding
        self.object = object
        self.start = start
        self.end = end
        self.reason = reason
This type will be implemented in C with the appropriate setter and
getter methods for the attributes, which have the following
meaning:
encoding: The name of the encoding;
object: The original unicode object for which encode() has
been called;
start: The position of the first unencodable character;
end: (The position of the last unencodable character)+1 (or
the length of object, if all characters from start to the end
of object are unencodable);
reason: The reason why object[start:end] couldn’t be encoded.
If object has consecutive unencodable characters, the encoder
should collect those characters for one call to the callback if
those characters can’t be encoded for the same reason. The
encoder is not required to implement this behaviour but may call
the callback for every single character, but it is strongly
suggested that the collecting method is implemented.
The callback must not modify the exception object. If the
callback does not raise an exception (either the one passed in, or
a different one), it must return a tuple:
(replacement, newpos)
replacement is a unicode object that the encoder will encode and
emit instead of the unencodable object[start:end] part, newpos
specifies a new position within object, where (after encoding the
replacement) the encoder will continue encoding.
Negative values for newpos are treated as being relative to the
end of object. If newpos is out of bounds the encoder will raise
an IndexError.
If the replacement string itself contains an unencodable character
the encoder raises the exception object (but may set a different
reason string before raising).
Should further encoding errors occur, the encoder is allowed to
reuse the exception object for the next call to the callback.
Furthermore, the encoder is allowed to cache the result of
codecs.lookup_error.
If the callback does not know how to handle the exception, it must
raise a TypeError.
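As a hedged sketch of the registry described above in use (the handler name
and its behaviour are illustrative, not part of the PEP):

import codecs, unicodedata

def namereplace(exc):
    if isinstance(exc, UnicodeEncodeError):
        chunk = exc.object[exc.start:exc.end]
        names = [u"\\N{%s}" % unicodedata.name(c, "UNKNOWN") for c in chunk]
        return (u"".join(names), exc.end)
    raise TypeError("can't handle %r" % exc)

codecs.register_error("namereplace", namereplace)
print u"sp\u00e4m".encode("ascii", "namereplace")
# -> 'sp\N{LATIN SMALL LETTER A WITH DIAERESIS}m'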
Decoding works similar to encoding with the following differences:
The exception class is named UnicodeDecodeError and the attribute
object is the original 8bit string that the decoder is currently
decoding.
The decoder will call the callback with those bytes that
constitute one undecodable sequence, even if there is more than
one undecodable sequence that is undecodable for the same reason
directly after the first one. E.g. for the “unicode-escape”
encoding, when decoding the illegal string \\u00\\u01x, the
callback will be called twice (once for \\u00 and once for
\\u01). This is done to be able to generate the correct number
of replacement characters.
The replacement returned from the callback is a unicode object
that will be emitted by the decoder as-is without further
processing instead of the undecodable object[start:end] part.
There is a third API that uses the old strict/ignore/replace error
handling scheme:
PyUnicode_TranslateCharmap/unicode.translate
The proposed patch will enhance PyUnicode_TranslateCharmap, so
that it also supports the callback registry. This has the
additional side effect that PyUnicode_TranslateCharmap will
support multi-character replacement strings (see SF feature
request #403100 [1]).
For PyUnicode_TranslateCharmap the exception class will be named
UnicodeTranslateError. PyUnicode_TranslateCharmap will collect
all consecutive untranslatable characters (i.e. those that map to
None) and call the callback with them. The replacement returned
from the callback is a unicode object that will be put in the
translated result as-is, without further processing.
All encoders and decoders are allowed to implement the callback
functionality themselves, if they recognize the callback name
(i.e. if it is a system callback like “strict”, “replace” and
“ignore”). The proposed patch will add two additional system
callback names: “backslashreplace” and “xmlcharrefreplace”, which
can be used for encoding and translating and which will also be
implemented in-place for all encoders and
PyUnicode_TranslateCharmap.
The Python equivalent of these five callbacks will look like this:
def strict(exc):
    raise exc

def ignore(exc):
    if isinstance(exc, UnicodeError):
        return (u"", exc.end)
    else:
        raise TypeError("can't handle %s" % exc.__class__.__name__)

def replace(exc):
    if isinstance(exc, UnicodeEncodeError):
        return ((exc.end-exc.start)*u"?", exc.end)
    elif isinstance(exc, UnicodeDecodeError):
        return (u"\ufffd", exc.end)
    elif isinstance(exc, UnicodeTranslateError):
        return ((exc.end-exc.start)*u"\ufffd", exc.end)
    else:
        raise TypeError("can't handle %s" % exc.__class__.__name__)

def backslashreplace(exc):
    if isinstance(exc,
                  (UnicodeEncodeError, UnicodeTranslateError)):
        s = u""
        for c in exc.object[exc.start:exc.end]:
            if ord(c) <= 0xff:
                s += u"\\x%02x" % ord(c)
            elif ord(c) <= 0xffff:
                s += u"\\u%04x" % ord(c)
            else:
                s += u"\\U%08x" % ord(c)
        return (s, exc.end)
    else:
        raise TypeError("can't handle %s" % exc.__class__.__name__)

def xmlcharrefreplace(exc):
    if isinstance(exc,
                  (UnicodeEncodeError, UnicodeTranslateError)):
        s = u""
        for c in exc.object[exc.start:exc.end]:
            s += u"&#%d;" % ord(c)
        return (s, exc.end)
    else:
        raise TypeError("can't handle %s" % exc.__class__.__name__)
These five callback handlers will also be accessible to Python as
codecs.strict_error, codecs.ignore_error, codecs.replace_error,
codecs.backslashreplace_error and codecs.xmlcharrefreplace_error.
Rationale
Most legacy encodings do not support the full range of Unicode
characters. For these cases many high level protocols support a
way of escaping a Unicode character (e.g. Python itself supports
the \x, \u and \U convention, XML supports character references
via &#xxx; etc.).
When implementing such an encoding algorithm, a problem with the
current implementation of the encode method of Unicode objects
becomes apparent: For determining which characters are unencodable
by a certain encoding, every single character has to be tried,
because encode does not provide any information about the location
of the error(s), so
# (1)
us = u"xxx"
s = us.encode(encoding)
has to be replaced by
# (2)
us = u"xxx"
v = []
for c in us:
    try:
        v.append(c.encode(encoding))
    except UnicodeError:
        v.append("&#%d;" % ord(c))
s = "".join(v)
This slows down encoding dramatically as now the loop through the
string is done in Python code and no longer in C code.
Furthermore, this solution poses problems with stateful encodings.
For example, UTF-16 uses a Byte Order Mark at the start of the
encoded byte string to specify the byte order. Using (2) with
UTF-16, results in an 8 bit string with a BOM between every
character.
To work around this problem, a stream writer - which keeps state
between calls to the encoding function - has to be used:
# (3)
us = u"xxx"
import codecs, cStringIO as StringIO
writer = codecs.getwriter(encoding)
v = StringIO.StringIO()
uv = writer(v)
for c in us:
    try:
        uv.write(c)
    except UnicodeError:
        uv.write(u"&#%d;" % ord(c))
s = v.getvalue()
To compare the speed of (1) and (3) the following test script has
been used:
# (4)
import time
us = u"äa"*1000000
encoding = "ascii"
import codecs, cStringIO as StringIO
t1 = time.time()
s1 = us.encode(encoding, "replace")
t2 = time.time()
writer = codecs.getwriter(encoding)
v = StringIO.StringIO()
uv = writer(v)
for c in us:
    try:
        uv.write(c)
    except UnicodeError:
        uv.write(u"?")
s2 = v.getvalue()
t3 = time.time()
assert(s1==s2)
print "1:", t2-t1
print "2:", t3-t2
print "factor:", (t3-t2)/(t2-t1)
On Linux this gives the following output (with Python 2.3a0):
1: 0.274321913719
2: 51.1284689903
factor: 186.381278466
i.e. (3) is 180 times slower than (1).
Callbacks must be stateless, because as soon as a callback is
registered it is available globally and can be called by multiple
encode() calls. To be able to use stateful callbacks, the errors
parameter for encode/decode/translate would have to be changed
from char * to PyObject *, so that the callback could be used
directly, without the need to register the callback globally. As
this requires changes to lots of C prototypes, this approach was
rejected.
Currently all encoding/decoding functions have arguments
const Py_UNICODE *p, int size
or
const char *p, int size
to specify the unicode characters/8bit characters to be
encoded/decoded. So in case of an error the codec has to create a
new unicode or str object from these parameters and store it in
the exception object. The callers of these encoding/decoding
functions extract these parameters from str/unicode objects
themselves most of the time, so it could speed up error handling
if these object were passed directly. As this again requires
changes to many C functions, this approach has been rejected.
For stream readers/writers the errors attribute must be changeable
to be able to switch between different error handling methods
during the lifetime of the stream reader/writer. This is currently
the case for codecs.StreamReader and codecs.StreamWriter and
all their subclasses. All core codecs and probably most of the
third party codecs (e.g. JapaneseCodecs) derive their stream
readers/writers from these classes so this already works,
but the attribute errors should be documented as a requirement.
Implementation Notes
A sample implementation is available as SourceForge patch #432401
[2] including a script for testing the speed of various
string/encoding/error combinations and a test script.
Currently the new exception classes are old style Python
classes. This means that accessing attributes results
in a dict lookup. The C API is implemented in a way
that makes it possible to switch to new style classes
behind the scenes, if Exception (and UnicodeError) are
changed to new style classes implemented in C for
improved performance.
The class codecs.StreamReaderWriter uses the errors parameter for
both reading and writing. To be more flexible this should
probably be changed to two separate parameters for reading and
writing.
The errors parameter of PyUnicode_TranslateCharmap is not
available to Python, which makes testing of the new functionality
of PyUnicode_TranslateCharmap impossible with Python scripts. The
patch should add an optional argument errors to unicode.translate
to expose the functionality and make testing possible.
Codecs that do something different than encoding/decoding from/to
unicode and want to use the new machinery can define their own
exception classes and the strict handlers will automatically work
with it. The other predefined error handlers are unicode specific
and expect to get a Unicode(Encode|Decode|Translate)Error
exception object so they won’t work.
Backwards Compatibility
The semantics of unicode.encode with errors=”replace” has changed:
The old version always stored a ? character in the output string
even if no character was mapped to ? in the mapping. With the
proposed patch, the replacement string from the callback will
again be looked up in the mapping dictionary. But as all
supported encodings are ASCII based, and thus map ? to ?, this
should not be a problem in practice.
Illegal values for the errors argument raised ValueError before,
now they will raise LookupError.
References
[1]
SF feature request #403100
“Multicharacter replacements in PyUnicode_TranslateCharmap”
https://bugs.python.org/issue403100
[2]
SF patch #432401 “unicode encoding error callbacks”
https://bugs.python.org/issue432401
Copyright
This document has been placed in the public domain.
| Final | PEP 293 – Codec Error Handling Callbacks | Standards Track | This PEP aims at extending Python’s fixed codec error handling
schemes with a more flexible callback based approach. |
PEP 294 – Type Names in the types Module
Author:
Oren Tirosh <oren at hishome.net>
Status:
Rejected
Type:
Standards Track
Created:
19-Jun-2002
Python-Version:
2.5
Post-History:
Table of Contents
Abstract
Pronouncement
Rationale
Specification
Backward compatibility
Reference Implementation
Copyright
Abstract
This PEP proposes that symbols matching the type name should be added
to the types module for all basic Python types in the types module:
types.IntegerType -> types.int
types.FunctionType -> types.function
types.TracebackType -> types.traceback
...
The long capitalized names currently in the types module will be
deprecated.
With this change the types module can serve as a replacement for the
new module. The new module shall be deprecated and listed in PEP 4.
Pronouncement
A centralized repository of type names was a mistake. Neither the
“types” nor “new” modules should be carried forward to Python 3.0.
In the meantime, it does not make sense to make the proposed updates
to the modules. This would cause disruption without any compensating
benefit.
Instead, the problem that some internal types (frames, functions,
etc.) don’t live anywhere outside those modules may be addressed by
either adding them to __builtin__ or sys. This will provide a
smoother transition to Python 3.0.
Rationale
Using two sets of names for the same objects is redundant and
confusing.
In Python versions prior to 2.2 the symbols matching many type names
were taken by the factory functions for those types. Now all basic
types have been unified with their factory functions and therefore the
type names are available to be consistently used to refer to the type
object.
Most types are accessible as either builtins or in the new module but
some types such as traceback and generator are only accessible through
the types module under names which do not match the type name. This
PEP provides a uniform way to access all basic types under a single
set of names.
Specification
The types module shall pass the following test:
import types
for t in vars(types).values():
    if type(t) is type:
        assert getattr(types, t.__name__) is t
The types ‘class’, ‘instance method’ and ‘dict-proxy’ have already
been renamed to the valid Python identifiers ‘classobj’,
‘instancemethod’ and ‘dictproxy’, making this possible.
Backward compatibility
Because of their widespread use it is not planned to actually remove
the long names from the types module in some future version. However,
the long names should be changed in documentation and library sources
to discourage their use in new code.
Reference Implementation
A reference implementation is available in
issue #569328.
Copyright
This document has been placed in the public domain.
| Rejected | PEP 294 – Type Names in the types Module | Standards Track | This PEP proposes that symbols matching the type name should be added
to the types module for all basic Python types in the types module: |
PEP 295 – Interpretation of multiline string constants
Author:
Stepan Koltsov <yozh at mx1.ru>
Status:
Rejected
Type:
Standards Track
Created:
22-Jul-2002
Python-Version:
3.0
Post-History:
Table of Contents
Abstract
Rationale
Implementation
Alternatives
Copyright
Abstract
This PEP describes an interpretation of multiline string constants
for Python. It suggests stripping spaces after newlines and
stripping a newline if it is first character after an opening
quotation.
Rationale
This PEP proposes an interpretation of multiline string constants
in Python. Currently, the value of string constant is all the
text between quotations, maybe with escape sequences substituted,
e.g.:
def f():
	"""
	la-la-la
	limona, banana
	"""

def g():
	return "This is \
	string"
print repr(f.__doc__)
print repr(g())
prints:
'\n\tla-la-la\n\tlimona, banana\n\t'
'This is \tstring'
This PEP suggests two things:
ignore the first character after opening quotation, if it is
newline
ignore in string constants all spaces and tabs up to
first non-whitespace character, but no more than current
indentation.
After applying this, previous program will print:
'la-la-la\nlimona, banana\n'
'This is string'
To get this result, the previous programs could be rewritten for
current Python as follows (note that this gives the same result
under the new string semantics):
def f():
	"""\
	la-la-la
	limona, banana
	"""

def g():
	"This is \
string"
Or stripping can be done with library routines at runtime (as
pydoc does), but this decreases program readability.
Implementation
I’ll say nothing about CPython, Jython or Python.NET.
In original Python, there is no info about the current indentation
(in spaces) at compile time, so space and tab stripping should be
done at parse time. Currently no flags can be passed to the
parser in program text (like from __future__ import xxx). I
suggest enabling or disabling this feature at Python compile
time depending on the CPP flag Py_PARSE_MULTILINE_STRINGS.
Alternatives
New interpretation of string constants can be implemented with flags
‘i’ and ‘o’ to string constants, like:
i"""
SELECT * FROM car
WHERE model = 'i525'
""" is in new style,
o"""SELECT * FROM employee
WHERE birth < 1982
""" is in old style, and
"""
SELECT employee.name, car.name, car.price FROM employee, car
WHERE employee.salary * 36 > car.price
""" is in new style after Python-x.y.z and in old style otherwise.
Also this feature can be disabled if string is raw, i.e. if flag ‘r’
specified.
Copyright
This document has been placed in the Public Domain.
| Rejected | PEP 295 – Interpretation of multiline string constants | Standards Track | This PEP describes an interpretation of multiline string constants
for Python. It suggests stripping spaces after newlines and
stripping a newline if it is first character after an opening
quotation. |
PEP 296 – Adding a bytes Object Type
Author:
Scott Gilbert <xscottg at yahoo.com>
Status:
Withdrawn
Type:
Standards Track
Created:
12-Jul-2002
Python-Version:
2.3
Post-History:
Table of Contents
Notice
Abstract
Rationale
Specification
Contrast to existing types
Backward Compatibility
Reference Implementation
Additional Notes/Comments
References
Copyright
Notice
This PEP is withdrawn by the author (in favor of PEP 358).
Abstract
This PEP proposes the creation of a new standard type and builtin
constructor called ‘bytes’. The bytes object is an efficiently
stored array of bytes with some additional characteristics that
set it apart from several implementations that are similar.
Rationale
Python currently has many objects that implement something akin to
the bytes object of this proposal. For instance the standard
string, buffer, array, and mmap objects are all very similar in
some regards to the bytes object. Additionally, several
significant third party extensions have created similar objects to
try and fill similar needs. Frustratingly, each of these objects
is too narrow in scope and is missing critical features to make it
applicable to a wider category of problems.
Specification
The bytes object has the following important characteristics:
Efficient underlying array storage via the standard C type “unsigned
char”. This allows fine grain control over how much memory is
allocated. With the alignment restrictions designated in the next
item, it is trivial for low level extensions to cast the pointer
to a different type as needed.
Also, since the object is implemented as an array of bytes, it is
possible to pass the bytes object to the extensive library of
routines already in the standard library that presently work with
strings. For instance, the bytes object in conjunction with the
struct module could be used to provide a complete replacement for
the array module using only Python script.
If an unusual platform comes to light, one where there isn’t a
native unsigned 8 bit type, the object will do its best to
represent itself at the Python script level as though it were an
array of 8 bit unsigned values. It is doubtful whether many
extensions would handle this correctly, but Python script could be
portable in these cases.
Alignment of the allocated byte array is whatever is promised by the
platform implementation of malloc. A bytes object created from an
extension can be supplied that provides any arbitrary alignment as
the extension author sees fit.
This alignment restriction should allow the bytes object to be
used as storage for all standard C types - including PyComplex
objects or other structs of standard C type types. Further
alignment restrictions can be provided by extensions as necessary.
The bytes object implements a subset of the sequence operations
provided by string/array objects, but with slightly different
semantics in some cases. In particular, a slice always returns a
new bytes object, but the underlying memory is shared between the
two objects. This type of slice behavior has been called creating
a “view”. Additionally, repetition and concatenation are
undefined for bytes objects and will raise an exception.
As these objects are likely to find use in high performance
applications, one motivation for the decision to use view slicing
is that copying between bytes objects should be very efficient and
not require the creation of temporary objects. The following code
illustrates this:
# create two 10 Meg bytes objects
b1 = bytes(10000000)
b2 = bytes(10000000)
# copy from part of one to another without creating a 1 Meg temporary
b1[2000000:3000000] = b2[4000000:5000000]
Slice assignment where the rvalue is not the same length as the
lvalue will raise an exception. However, slice assignment will
work correctly with overlapping slices (typically implemented with
memmove).
The bytes object will be recognized as a native type by the pickle and
cPickle modules for efficient serialization. (In truth, this is
the only requirement that can’t be implemented via a third party
extension.)
Partial solutions to address the need to serialize the data stored
in a bytes-like object without creating a temporary copy of the
data into a string have been implemented in the past. The tofile
and fromfile methods of the array object are good examples of
this. The bytes object will support these methods too. However,
pickling is useful in other situations - such as in the shelve
module, or implementing RPC of Python objects, and requiring the
end user to use two different serialization mechanisms to get an
efficient transfer of data is undesirable.
XXX: Will try to implement pickling of the new bytes object in
such a way that previous versions of Python will unpickle it as a
string object.
When unpickling, the bytes object will be created from memory
allocated from Python (via malloc). As such, it will lose any
additional properties that an extension supplied pointer might
have provided (special alignment, or special types of memory).
XXX: Will try to make it so that C subclasses of bytes type can
supply the memory that will be unpickled into. For instance, a
derived class called PageAlignedBytes would unpickle to memory
that is also page aligned.
On any platform where an int is 32 bits (most of them), it is
currently impossible to create a string with a length larger than
can be represented in 31 bits. As such, pickling to a string will
raise an exception when the operation is not possible.
At least on platforms supporting large files (many of them),
pickling large bytes objects to files should be possible via
repeated calls to the file.write() method.
The bytes type supports the PyBufferProcs interface, but a bytes object
provides the additional guarantee that the pointer will not be
deallocated or reallocated as long as a reference to the bytes
object is held. This implies that a bytes object is not resizable
once it is created, but allows the global interpreter lock (GIL)
to be released while a separate thread manipulates the memory
pointed to if the PyBytes_Check(...) test passes.
This characteristic of the bytes object allows it to be used in
situations such as asynchronous file I/O or on multiprocessor
machines where the pointer obtained by PyBufferProcs will be used
independently of the global interpreter lock.
Knowing that the pointer can not be reallocated or freed after the
GIL is released gives extension authors the capability to get true
concurrency and make use of additional processors for long running
computations on the pointer.
In C/C++ extensions, the bytes object can be created from a supplied
pointer and destructor function to free the memory when the
reference count goes to zero.
The special implementation of slicing for the bytes object allows
multiple bytes objects to refer to the same pointer/destructor.
As such, a refcount will be kept on the actual
pointer/destructor. This refcount is separate from the refcount
typically associated with Python objects.
XXX: It may be desirable to expose the inner refcounted object as an
actual Python object. If a good use case arises, it should be possible
for this to be implemented later with no loss to backwards compatibility.
It is also possible to signify the bytes object as readonly; in this
case it isn’t actually mutable, but it does provide the other features of a
bytes object.
The bytes object keeps track of the length of its data with a Python
LONG_LONG type. Even though the current definition for PyBufferProcs
restricts the length to be the size of an int, this PEP does not propose
to make any changes there. Instead, extensions can work around this limit
by making an explicit PyBytes_Check(...) call, and if that succeeds they
can make a PyBytes_GetReadBuffer(...) or PyBytes_GetWriteBuffer
call to get the pointer and full length of the object as a LONG_LONG.
The bytes object will raise an exception if the standard PyBufferProcs
mechanism is used and the size of the bytes object is greater than can be
represented by an integer.
From Python scripting, the bytes object will be subscriptable with longs
so the 32 bit int limit can be avoided.
There is still a problem with the len() function as it is
PyObject_Size() and this returns an int as well. As a workaround,
the bytes object will provide a .length() method that will return a long.
The bytes object can be constructed at the Python scripting level by
passing an int/long to the bytes constructor with the number of bytes to
allocate. For example:

b = bytes(100000)      # alloc 100K bytes
The constructor can also take another bytes object. This will be useful
for the implementation of unpickling, and in converting a read-write bytes
object into a read-only one. An optional second argument will be used to
designate creation of a readonly bytes object.
From the C API, the bytes object can be allocated using any of the
following signatures:

PyObject* PyBytes_FromLength(LONG_LONG len, int readonly);
PyObject* PyBytes_FromPointer(void* ptr, LONG_LONG len, int readonly,
                              void (*dest)(void *ptr, void *user), void* user);
In the PyBytes_FromPointer(...) function, if the dest function pointer
is passed in as NULL, it will not be called. This should only be used
for creating bytes objects from statically allocated space.
The user pointer has been called a closure in other places. It is a
pointer that the user can use for whatever purposes. It will be passed to
the destructor function on cleanup and can be useful for a number of
things. If the user pointer is not needed, NULL should be passed
instead.
The bytes type will be a new style class as that seems to be where all
standard Python types are headed.
Contrast to existing types
The most common way to work around the lack of a bytes object has been to
simply use a string object in its place. Binary files, the struct/array
modules, and several other examples exist of this. Putting aside the
style issue that these uses typically have nothing to do with text
strings, there is the real problem that strings are not mutable, so direct
manipulation of the data returned in these cases is not possible. Also,
numerous optimizations in the string module (such as caching the hash
value or interning the pointers) mean that extension authors are on very
thin ice if they try to break the rules with the string object.
The buffer object seems like it was intended to address the purpose that
the bytes object is trying to fulfill, but several shortcomings in its
implementation [1] have made it less useful in many common cases. The
buffer object made a different choice for its slicing behavior (it returns
new strings instead of buffers for slicing and other operations), and it
doesn’t make many of the promises on alignment or being able to release
the GIL that the bytes object does.
Also in regards to the buffer object, it is not possible to simply replace
the buffer object with the bytes object and maintain backwards
compatibility. The buffer object provides a mechanism to take the
PyBufferProcs supplied pointer of another object and present it as its
own. Since the behavior of the other object can not be guaranteed to
follow the same set of strict rules that a bytes object does, it can’t be
used in places that a bytes object could.
The array module supports the creation of an array of bytes, but it does
not provide a C API for supplying pointers and destructors to extension
supplied memory. This makes it unusable for constructing objects out of
shared memory, or memory that has special alignment or locking for things
like DMA transfers. Also, the array object does not currently pickle.
Finally since the array object allows its contents to grow, via the extend
method, the pointer can be changed if the GIL is not held while using it.
Creating a buffer object from an array object has the same problem of
leaving an invalid pointer when the array object is resized.
The mmap object caters to its particular niche, but does not attempt to
solve a wider class of problems.
Finally, any third party extension can not implement pickling without
creating a temporary object of a standard Python type. For example, in the
Numeric community, it is unpleasant that a large array can’t pickle
without creating a large binary string to duplicate the array data.
Backward Compatibility
The only possibility for backwards compatibility problems that the author
is aware of are in previous versions of Python that try to unpickle data
containing the new bytes type.
Reference Implementation
XXX: Actual implementation is in progress, but changes are still possible
as this PEP gets further review.
The following new files will be added to the Python baseline:
Include/bytesobject.h # C interface
Objects/bytesobject.c # C implementation
Lib/test/test_bytes.py # unit testing
Doc/lib/libbytes.tex # documentation
The following files will also be modified:
Include/Python.h # adding bytesmodule.h include file
Python/bltinmodule.c # adding the bytes type object
Modules/cPickle.c # adding bytes to the standard types
Lib/pickle.py # adding bytes to the standard types
It is possible that several other modules could be cleaned up and
implemented in terms of the bytes object. The mmap module comes to mind
first, but as noted above it would be possible to reimplement the array
module as a pure Python module. While it is attractive that this PEP
could actually reduce the amount of source code by some amount, the author
feels that this could cause unnecessary risk for breaking existing
applications and should be avoided at this time.
Additional Notes/Comments
Guido van Rossum wondered whether it would make sense to be able
to create a bytes object from a mmap object. The mmap object
appears to support the requirements necessary to provide memory
for a bytes object. (It doesn’t resize, and the pointer is valid
for the lifetime of the object.) As such, a method could be added
to the mmap module such that a bytes object could be created
directly from a mmap object. An initial stab at how this would be
implemented would be to use the PyBytes_FromPointer() function
described above and pass the mmap_object as the user pointer. The
destructor function would decref the mmap_object for cleanup.
Todd Miller notes that it may be useful to have two new functions:
PyObject_AsLargeReadBuffer() and PyObject_AsLargeWriteBuffer that are
similar to PyObject_AsReadBuffer() and PyObject_AsWriteBuffer(), but
support getting a LONG_LONG length in addition to the void* pointer.
These functions would allow extension authors to work transparently with
bytes object (that support LONG_LONG lengths) and most other buffer like
objects (which only support int lengths). These functions could be in
lieu of, or in addition to, creating a specific PyByte_GetReadBuffer() and
PyBytes_GetWriteBuffer() functions.
XXX: The author thinks this is a very good idea as it paves the way for
other objects to eventually support large (64 bit) pointers, and it should
only affect abstract.c and abstract.h. Should this be added above?
It was generally agreed that abusing the segment count of the
PyBufferProcs interface is not a good hack to work around the 31 bit
limitation of the length. If you don’t know what this means, then you’re
in good company. Most code in the Python baseline, and presumably in many
third party extensions, punt when the segment count is not 1.
References
[1]
The buffer interface
https://mail.python.org/pipermail/python-dev/2000-October/009974.html
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 296 – Adding a bytes Object Type | Standards Track | This PEP proposes the creation of a new standard type and builtin
constructor called ‘bytes’. The bytes object is an efficiently
stored array of bytes with some additional characteristics that
set it apart from several implementations that are similar. |
PEP 297 – Support for System Upgrades
Author:
Marc-André Lemburg <mal at lemburg.com>
Status:
Rejected
Type:
Standards Track
Created:
19-Jul-2001
Python-Version:
2.6
Post-History:
Table of Contents
Rejection Notice
Abstract
Problem
Proposed Solutions
Scope
Credits
References
Copyright
Rejection Notice
This PEP is rejected for failure to generate significant interest.
Abstract
This PEP proposes strategies to allow the Python standard library
to be upgraded in parts without having to reinstall the complete
distribution or having to wait for a new patch level release.
Problem
Python currently does not allow overriding modules or packages in
the standard library by default. Even though this is possible by
defining a PYTHONPATH environment variable (the paths defined in
this variable are prepended to the Python standard library path),
there is no standard way of achieving this without changing the
configuration.
Since Python’s standard library is starting to host packages which
are also available separately, e.g. the distutils, email and PyXML
packages, which can also be installed independently of the Python
distribution, it is desirable to have an option to upgrade these
packages without having to wait for a new patch level release of
the Python interpreter to bring along the changes.
On some occasions, it may also be desirable to update modules of
the standard library without going through the whole Python release
cycle, e.g. in order to provide hot-fixes for security problems.
Proposed Solutions
This PEP proposes two different but not necessarily conflicting
solutions:
Adding a new standard search path to sys.path:
$stdlibpath/system-packages just before the $stdlibpath
entry. This complements the already existing entry for site
add-ons $stdlibpath/site-packages which is appended to the
sys.path at interpreter startup time.To make use of this new standard location, distutils will need
to grow support for installing certain packages in
$stdlibpath/system-packages rather than the standard location
for third-party packages $stdlibpath/site-packages.
Tweaking distutils to install directly into $stdlibpath for the
system upgrades rather than into $stdlibpath/site-packages.
The first solution has a few advantages over the second:
upgrades can be easily identified (just look in
$stdlibpath/system-packages)
upgrades can be de-installed without affecting the rest
of the interpreter installation
modules can be virtually removed from packages; this is
due to the way Python imports packages: once it finds the
top-level package directory it stays in this directory for
all subsequent package submodule imports
the approach has an overall much cleaner design than the
hackish install on top of an existing installation approach
The only advantages of the second approach are that the Python
interpreter does not have to be changed and that it works with
older Python versions.
Both solutions require changes to distutils. These changes can
also be implemented by package authors, but it would be better to
define a standard way of switching on the proposed behaviour.
Scope
Solution 1: Python 2.6 and up
Solution 2: all Python versions supported by distutils
Credits
None
References
None
Copyright
This document has been placed in the public domain.
| Rejected | PEP 297 – Support for System Upgrades | Standards Track | This PEP proposes strategies to allow the Python standard library
to be upgraded in parts without having to reinstall the complete
distribution or having to wait for a new patch level release. |
PEP 298 – The Locked Buffer Interface
Author:
Thomas Heller <theller at python.net>
Status:
Withdrawn
Type:
Standards Track
Created:
26-Jul-2002
Python-Version:
2.3
Post-History:
30-Jul-2002, 01-Aug-2002
Table of Contents
Abstract
Specification
Implementation
Backward Compatibility
Reference Implementation
Additional Notes/Comments
Community Feedback
References
Copyright
Abstract
This PEP proposes an extension to the buffer interface called the
‘locked buffer interface’.
The locked buffer interface avoids the flaws of the ‘old’ buffer
interface [1] as defined in Python versions up to and including
2.2, and has the following semantics:
The lifetime of the retrieved pointer is clearly defined and
controlled by the client.
The buffer size is returned as a ‘size_t’ data type, which
allows access to large buffers on platforms where sizeof(int)
!= sizeof(void *).
(Guido comments: This second sounds like a change we could also
make to the “old” buffer interface, if we introduce another flag
bit that’s not part of the default flags.)
Specification
The locked buffer interface exposes new functions which return the
size and the pointer to the internal memory block of any python
object which chooses to implement this interface.
Retrieving a buffer from an object puts this object in a locked
state during which the buffer may not be freed, resized, or
reallocated.
The object must be unlocked again by releasing the buffer if it’s
no longer used by calling another function in the locked buffer
interface. If the object never resizes or reallocates the buffer
during its lifetime, this function may be NULL. Failure to call
this function (if it is != NULL) is a programming error and may
have unexpected results.
The locked buffer interface omits the memory segment model which
is present in the old buffer interface - only a single memory
block can be exposed.
The memory blocks can be accessed without holding the global
interpreter lock.
Implementation
Define a new flag in Include/object.h:
/* PyBufferProcs contains bf_acquirelockedreadbuffer,
bf_acquirelockedwritebuffer, and bf_releaselockedbuffer */
#define Py_TPFLAGS_HAVE_LOCKEDBUFFER (1L<<15)
This flag would be included in Py_TPFLAGS_DEFAULT:
#define Py_TPFLAGS_DEFAULT ( \
....
Py_TPFLAGS_HAVE_LOCKEDBUFFER | \
....
0)
Extend the PyBufferProcs structure by new fields in
Include/object.h:
typedef size_t (*acquirelockedreadbufferproc)(PyObject *,
                                              const void **);
typedef size_t (*acquirelockedwritebufferproc)(PyObject *,
                                               void **);
typedef void (*releaselockedbufferproc)(PyObject *);

typedef struct {
    getreadbufferproc bf_getreadbuffer;
    getwritebufferproc bf_getwritebuffer;
    getsegcountproc bf_getsegcount;
    getcharbufferproc bf_getcharbuffer;
    /* locked buffer interface functions */
    acquirelockedreadbufferproc bf_acquirelockedreadbuffer;
    acquirelockedwritebufferproc bf_acquirelockedwritebuffer;
    releaselockedbufferproc bf_releaselockedbuffer;
} PyBufferProcs;
The new fields are present if the Py_TPFLAGS_HAVE_LOCKEDBUFFER
flag is set in the object’s type.
The Py_TPFLAGS_HAVE_LOCKEDBUFFER flag implies the
Py_TPFLAGS_HAVE_GETCHARBUFFER flag.
The acquirelockedreadbufferproc and acquirelockedwritebufferproc
functions return the size in bytes of the memory block on success,
and fill in the passed void * pointer on success. If these
functions fail - either because an error occurs or no memory block
is exposed - they must set the void * pointer to NULL and raise an
exception. The return value is undefined in these cases and
should not be used.
If calls to these functions succeed, eventually the buffer must be
released by a call to the releaselockedbufferproc, supplying the
original object as argument. The releaselockedbufferproc cannot
fail. For objects that actually maintain an internal lock count
it would be a fatal error if the releaselockedbufferproc function
were called too often, leading to a negative lock count.
Similar to the ‘old’ buffer interface, any of these functions may
be set to NULL, but it is strongly recommended to implement the
releaselockedbufferproc function (even if it does nothing) if any
of the acquireread/writelockedbufferproc functions are
implemented, to discourage extension writers from checking for a
NULL value and not calling it.
These functions aren’t supposed to be called directly, they are
called through convenience functions declared in
Include/abstract.h:
int PyObject_AcquireLockedReadBuffer(PyObject *obj,
                                     const void **buffer,
                                     size_t *buffer_len);

int PyObject_AcquireLockedWriteBuffer(PyObject *obj,
                                      void **buffer,
                                      size_t *buffer_len);

void PyObject_ReleaseLockedBuffer(PyObject *obj);
The former two functions return 0 on success, set buffer to the
memory location and buffer_len to the length of the memory block
in bytes. On failure, or if the locked buffer interface is not
implemented by obj, they return -1 and set an exception.
The latter function doesn’t return anything, and cannot fail.
Backward Compatibility
The size of the PyBufferProcs structure changes if this proposal
is implemented, but the type’s tp_flags slot can be used to
determine if the additional fields are present.
Reference Implementation
An implementation has been uploaded to the SourceForge patch
manager as https://bugs.python.org/issue652857.
Additional Notes/Comments
Python strings, unicode strings, mmap objects, and array objects
would expose the locked buffer interface.
mmap and array objects would actually enter a locked state while
the buffer is active, this is not needed for strings and unicode
objects. Resizing locked array objects is not allowed and will
raise an exception. Whether closing a locked mmap object is an
error or will only be deferred until the lock count reaches zero
is an implementation detail.
Guido recommends
But I’m still very concerned that if most built-in types
(e.g. strings, bytes) don’t implement the release
functionality, it’s too easy for an extension to seem to work
while forgetting to release the buffer.
I recommend that at least some built-in types implement the
acquire/release functionality with a counter, and assert that
the counter is zero when the object is deleted – if the
assert fails, someone DECREF’ed their reference to the object
without releasing it. (The rule should be that you must own a
reference to the object while you’ve acquired the object.)
For strings that might be impractical because the string
object would have to grow 4 bytes to hold the counter; but the
new bytes object (PEP 296) could easily implement the counter,
and the array object too – that way there will be plenty of
opportunity to test proper use of the protocol.
Community Feedback
Greg Ewing doubts the locked buffer interface is needed at all; he
thinks the normal buffer interface could be used if the pointer is
(re)fetched each time it’s used. This seems to be dangerous,
because even innocent looking calls to the Python API like
Py_DECREF() may trigger execution of arbitrary Python code.
The first version of this proposal didn’t have the release
function, but it turned out that this would have been too
restrictive: mmap and array objects wouldn’t have been able to
implement it, because mmap objects can be closed anytime if not
locked, and array objects could resize or reallocate the buffer.
This PEP will probably be rejected because nobody except the
author needs it.
References
[1]
The buffer interface
https://mail.python.org/pipermail/python-dev/2000-October/009974.html
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 298 – The Locked Buffer Interface | Standards Track | This PEP proposes an extension to the buffer interface called the
‘locked buffer interface’. |
PEP 299 – Special __main__() function in modules
Author:
Jeff Epler <jepler at unpythonic.net>
Status:
Rejected
Type:
Standards Track
Created:
12-Aug-2002
Python-Version:
2.3
Post-History:
29-Mar-2006
Table of Contents
Abstract
Motivation
Proposal
Implementation
Open Issues
Rejection
References
Copyright
Abstract
Many Python modules are also intended to be callable as standalone
scripts. This PEP proposes that a special function called __main__()
should serve this purpose.
Motivation
There should be one simple and universal idiom for invoking a module
as a standalone script.
The semi-standard idiom:
if __name__ == '__main__':
    perform "standalone" functionality
is unclear to programmers of languages like C and C++. It also does
not permit invocation of the standalone function when the module is
imported. The variant:
if __name__ == '__main__':
    main_function()
is sometimes seen, but there exists no standard name for the function,
and because arguments are taken from sys.argv it is not possible to
pass specific arguments without changing the argument list seen by all
other modules. (Imagine a threaded Python program, with two threads
wishing to invoke the standalone functionality of different modules
with different argument lists)
Proposal
The standard name of the ‘main function’ should be __main__. When a
module is invoked on the command line, such as:
python mymodule.py
then the module behaves as though the following lines existed at the
end of the module (except that the attribute __sys may not be used or
assumed to exist elsewhere in the script):
if globals().has_key("__main__"):
    import sys as __sys
    __sys.exit(__main__(__sys.argv))
Other modules may execute:
import mymodule
mymodule.__main__(['mymodule', ...])
It is up to mymodule to document thread-safety issues or other
issues which might restrict use of __main__. (Other issues might
include use of mutually exclusive GUI modules, non-sharable resources
like hardware devices, reassignment of sys.stdin/stdout, etc)
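As a purely illustrative sketch (the module name "grep" and its body are
hypothetical, not part of this proposal), a module adopting the idiom might
look like:
import sys

def __main__(argv):
    # argv includes argv[0], mirroring sys.argv.
    if len(argv) < 2:
        sys.stderr.write("usage: %s PATTERN [FILE...]\n" % argv[0])
        return 2        # the return value becomes the exit status
    # ... perform the standalone functionality here ...
    return 0            # returning None would also mean success
Another module could then invoke the standalone behaviour without touching
the global sys.argv by calling grep.__main__(['grep', 'pattern', 'file.txt']).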
Implementation
In modules/main.c, the block near line 385 (after the
PyRun_AnyFileExFlags call) will be changed so that the above code
(or its C equivalent) is executed.
Open Issues
Should the return value from __main__ be treated as the exit value?
Yes. Many __main__ functions will naturally return None, which
sys.exit translates into a “success” return code. In those that
return a numeric result, it behaves just like the argument to
sys.exit() or the return value from C’s main().
Should the argument list to __main__ include argv[0], or just the
“real” arguments argv[1:]?
argv[0] is included for symmetry with sys.argv and easy
transition to the new standard idiom.
Rejection
In a short discussion on python-dev [1], two major backwards
compatibility problems were brought up and Guido pronounced that he
doesn’t like the idea anyway as it’s “not worth the change (in docs,
user habits, etc.) and there’s nothing particularly broken.”
References
[1]
Georg Brandl, “What about PEP 299”,
https://mail.python.org/pipermail/python-dev/2006-March/062951.html
Copyright
This document has been placed in the public domain.
| Rejected | PEP 299 – Special __main__() function in modules | Standards Track | Many Python modules are also intended to be callable as standalone
scripts. This PEP proposes that a special function called __main__()
should serve this purpose. |
PEP 301 – Package Index and Metadata for Distutils
Author:
Richard Jones <richard at python.org>
Status:
Final
Type:
Standards Track
Topic:
Packaging
Created:
24-Oct-2002
Python-Version:
2.3
Post-History:
08-Nov-2002
Table of Contents
Abstract
Rationale
Specification
Web Interface
User Roles
Index Storage (Schema)
Distutils register Command
Distutils Trove Classification
Implementation
Rejected Proposals
References
Copyright
Acknowledgements
Abstract
This PEP proposes several extensions to the Distutils packaging system
[1]. These enhancements include a central package index server,
tools for submitting package information to the index and extensions
to the package metadata to include Trove [2] information.
This PEP does not address issues of package dependency. It also does
not address storage and download of packages as described in PEP 243.
Nor is it proposing a local database of packages as described
in PEP 262.
Existing package repositories such as the Vaults of Parnassus [3],
CPAN [4] and PAUSE [5] will be investigated as prior art in this
field.
Rationale
Python programmers have long needed a simple method of discovering
existing modules and systems available for their use. It is arguable
that the existence of these systems for other languages has been a
significant contribution to their popularity. The existence of the
Catalog-SIG, and the many discussions there indicate that there is a
large population of users who recognise this need.
The introduction of the Distutils packaging system to Python
simplified the process of distributing shareable code, and included
mechanisms for the capture of package metadata, but did little with
the metadata save ship it with the package.
An interface to the index should be hosted in the python.org domain,
giving it an air of legitimacy that existing catalog efforts do not
have.
The interface for submitting information to the catalog should be as
simple as possible - hopefully just a one-line command for most users.
Issues of package dependency are not addressed due to the complexity
of such a system. PEP 262 proposes such a system, but as of this
writing the PEP is still unfinished.
Issues of package dissemination (storage on a central server) are
not addressed because they require assumptions about availability of
storage and bandwidth that I am not in a position to make. PEP 243,
which is still being developed, is tackling these issues and many
more. This proposal is considered compatible with, and adjunct to
the proposal in PEP 243.
Specification
The specification takes three parts, the web interface, the
Distutils register command and the Distutils Trove
classification.
Web Interface
A web interface is implemented over a simple store. The interface is
available through the python.org domain, either directly or as
packages.python.org.
The store has columns for all metadata fields. The (name, version)
double is used as a uniqueness key. Additional submissions for an
existing (name, version) will result in an update operation.
The web interface implements the following commands/interfaces:
index
Lists known packages, optionally filtered. An additional HTML page,
search, presents a form to the user which is used to customise
the index view. The index will include a browsing interface like
that presented in the Trove interface design section 4.3. The
results will be paginated, sorted alphabetically and only showing
the most recent version. The most recent version information will
be determined using the Distutils LooseVersion class.
display
Displays information about the package. All fields are displayed as
plain text. The “url” (or “home_page”) field is hyperlinked.
submit
Accepts a POST submission of metadata about a package. The
“name” and “version” fields are mandatory, as they uniquely identify
an entry in the index. Submit will automatically determine
whether to create a new entry or update an existing entry. The
metadata is checked for correctness where appropriate - specifically
the Trove discriminators are compared with the allowed set. An
update will update all information about the package based on the
new submitted information.
There will also be a submit/edit form that will allow manual
submission and updating for those who do not use Distutils.
submit_pkg_info
Accepts a POST submission of a PKG-INFO file and performs the same
function as the submit interface.
user
Registers a new user with the index. Requires username, password
and email address. Passwords will be stored in the index database
as SHA hashes. If the username already exists in the database:
If valid HTTP Basic authentication is provided, the password and
email address are updated with the submission information, or
If no valid authentication is provided, the user is informed that
the login is already taken.
Registration will be a three-step process, involving:
User submission of details via the Distutils register command
or through the web,
Index server sending email to the user’s email address with a URL
to visit to confirm registration with a random one-time key, and
User visits URL with the key and confirms registration.
roles
An interface for changing user Role assignments.
password_reset
Using a supplied email address as the key, this resets a user’s
password and sends an email with the new password to the user.
The submit command will require HTTP Basic authentication,
preferably over an HTTPS connection.
The server interface will indicate success or failure of the commands
through a subset of the standard HTTP response codes:
Code  Meaning       Register command implications
200   OK            Everything worked just fine
400   Bad request   Data provided for submission was malformed
401   Unauthorised  The username or password supplied were incorrect
403   Forbidden     User does not have permission to update the
                    package information (not Owner or Maintainer)
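For illustration only (this sketch is not part of the specification; the
URL, function name and use of the modern urllib module are assumptions), a
client could submit metadata and map these response codes to outcomes like so:
import base64
import urllib.error
import urllib.parse
import urllib.request

def submit_metadata(url, username, password, metadata):
    # POST the metadata with HTTP Basic authentication.
    data = urllib.parse.urlencode(metadata).encode("utf-8")
    credentials = base64.b64encode(
        ("%s:%s" % (username, password)).encode("utf-8")).decode("ascii")
    request = urllib.request.Request(
        url, data=data, headers={"Authorization": "Basic " + credentials})
    try:
        urllib.request.urlopen(request)
        return "ok"                        # 200: everything worked just fine
    except urllib.error.HTTPError as err:
        if err.code == 400:
            return "malformed submission"
        if err.code == 401:
            return "incorrect username or password"
        if err.code == 403:
            return "not Owner or Maintainer for this package"
        raise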
User Roles
Three user Roles will be assignable to users:
Owner
Owns a package name, may assign Maintainer Role for that name. The
first user to register information about a package is deemed Owner
of the package name. The Admin user may change this if necessary.
May submit updates for the package name.
Maintainer
Can submit and update info for a particular package name.
Admin
Can assign Owner Role and edit user details. Not specific to a
package name.
Index Storage (Schema)
The index is stored in a set of relational database tables:
packages
Lists package names and holds package-level metadata (currently
just the stable release version)
releases
Each package has an entry in releases for each version of the
package that is released. A row holds the bulk of the information
given in the package’s PKG-INFO file. There is one row for each
package (name, version).
trove_discriminators
Lists the Trove discriminator text and assigns each one a unique
ID.
release_discriminators
Each entry maps a package (name, version) to a
discriminator_id. We map to releases instead of packages because
the set of discriminators may change between releases.
journals
Holds information about changes to package information in the
index. Changes to the packages, releases, roles,
and release_discriminators tables are listed here by
package name and version if the change is release-specific.
users
Holds our user database - user name, email address and password.
roles
Maps user_name and role_name to a package_name.
An additional table, rego_otk holds the One Time Keys generated
during registration and is not interesting in the scope of the index
itself.
Distutils register Command
An additional Distutils command, register, is implemented which
posts the package metadata to the central index. The register
command automatically handles user registration; the user is presented
with three options:
login and submit package information
register as a new packager
send password reminder email
On systems where the $HOME environment variable is set, the user
will be prompted at exit to save their username/password to a file
in their $HOME directory in the file .pypirc.
Notification of changes to a package entry will be sent to all users
who have submitted information about the package. That is, the
original submitter and any subsequent updaters.
The register command will include a --verify option which
performs a test submission to the index without actually committing
the data. The index will perform its submission verification checks
as usual and report any errors it would have reported during a normal
submission. This is useful for verifying correctness of Trove
discriminators.
Distutils Trove Classification
The Trove concept of discrimination will be added to the metadata
set available to package authors through the new attribute
“classifiers”. The list of classifiers will be available through the
web, and added to the package like so:
setup(
    name = "roundup",
    version = __version__,
    classifiers = [
        'Development Status :: 4 - Beta',
        'Environment :: Console',
        'Environment :: Web Environment',
        'Intended Audience :: End Users/Desktop',
        'Intended Audience :: Developers',
        'Intended Audience :: System Administrators',
        'License :: OSI Approved :: Python Software Foundation License',
        'Operating System :: MacOS :: MacOS X',
        'Operating System :: Microsoft :: Windows',
        'Operating System :: POSIX',
        'Programming Language :: Python',
        'Topic :: Communications :: Email',
        'Topic :: Office/Business',
        'Topic :: Software Development :: Bug Tracking',
    ],
    url = 'http://sourceforge.net/projects/roundup/',
    ...
)
It was decided that strings would be used for the classification
entries due to the deep nesting that would be involved in a more
formal Python structure.
The original Trove specification that classification namespaces be
separated by slashes (“/”) unfortunately collides with many of the
names having slashes in them (e.g. “OS/2”). The double-colon solution
(” :: “) implemented by SourceForge and FreshMeat gets around this
limitation.
The list of classification values on the module index has been merged
from FreshMeat and SourceForge (with their permission). This list
will be made available both through the web interface and through the
register command’s --list-classifiers option as a text list
which may then be copied to the setup.py file. The register
command’s --verify option will check classifiers values against
the server’s list.
Unfortunately, the addition of the “classifiers” property is not
backwards-compatible. A setup.py file using it will not work under
Python 2.1.3. It is hoped that a bug-fix release of Python 2.2 (most
likely 2.2.3) will relax the argument checking of the setup() command
to allow new keywords, even if they’re not actually used. It is
preferable that a warning be produced, rather than a show-stopping
error. The use of the new keyword should be discouraged in situations
where the package is advertised as being compatible with python
versions earlier than 2.2.3 or 2.3.
In the PKG-INFO, the classifiers list items will appear as individual
Classifier: entries:
Name: roundup
Version: 0.5.2
Classifier: Development Status :: 4 - Beta
Classifier: Environment :: Console (Text Based)
.
.
Classifier: Topic :: Software Development :: Bug Tracking
Url: http://sourceforge.net/projects/roundup/
Implementation
The server is available at:
http://www.python.org/pypi
The code is available from the SourceForge project:
http://sourceforge.net/projects/pypi/
The register command has been integrated into Python 2.3.
Rejected Proposals
Originally, the index server was to return custom headers (inspired by
PEP 243):
X-Pypi-Status
Either “success” or “fail”.
X-Pypi-Reason
A description of the reason for failure, or additional information
in the case of a success.
However, it has been pointed out [6] that this is a bad scheme to
use.
References
[1]
Distutils packaging system
(http://docs.python.org/library/distutils.html)
[2]
Trove
(http://www.catb.org/~esr/trove/)
[3]
Vaults of Parnassus
(http://www.vex.net/parnassus/)
[4]
CPAN
(http://www.cpan.org/)
[5]
PAUSE
(http://pause.cpan.org/)
[6]
[PEP243] upload status is bogus
(https://mail.python.org/pipermail/distutils-sig/2001-March/002262.html)
Copyright
This document has been placed in the public domain.
Acknowledgements
Anthony Baxter, Martin v. Loewis and David Goodger for encouragement
and feedback during initial drafting.
A.M. Kuchling for support including hosting the second prototype.
Greg Stein for recommending that the register command interpret the
HTTP response codes rather than custom X-PyPI-* headers.
The many participants of the Distutils and Catalog SIGs for their
ideas over the years.
| Final | PEP 301 – Package Index and Metadata for Distutils | Standards Track | This PEP proposes several extensions to the Distutils packaging system
[1]. These enhancements include a central package index server,
tools for submitting package information to the index and extensions
to the package metadata to include Trove [2] information. |
PEP 302 – New Import Hooks
Author:
Just van Rossum <just at letterror.com>,
Paul Moore <p.f.moore at gmail.com>
Status:
Final
Type:
Standards Track
Created:
19-Dec-2002
Python-Version:
2.3
Post-History:
19-Dec-2002
Table of Contents
Abstract
Motivation
Use cases
Rationale
Specification part 1: The Importer Protocol
Specification part 2: Registering Hooks
Packages and the role of __path__
Optional Extensions to the Importer Protocol
Integration with the ‘imp’ module
Forward Compatibility
Open Issues
Implementation
References and Footnotes
Copyright
Warning
The language reference for import [10] and importlib documentation
[11] now supersede this PEP. This document is no longer updated
and provided for historical purposes only.
Abstract
This PEP proposes to add a new set of import hooks that offer better
customization of the Python import mechanism. Contrary to the current
__import__ hook, a new-style hook can be injected into the existing
scheme, allowing for a finer grained control of how modules are found and how
they are loaded.
Motivation
The only way to customize the import mechanism is currently to override the
built-in __import__ function. However, overriding __import__ has many
problems. To begin with:
An __import__ replacement needs to fully reimplement the entire
import mechanism, or call the original __import__ before or after the
custom code.
It has very complex semantics and responsibilities.
__import__ gets called even for modules that are already in
sys.modules, which is almost never what you want, unless you’re writing
some sort of monitoring tool.
The situation gets worse when you need to extend the import mechanism from C:
it’s currently impossible, apart from hacking Python’s import.c or
reimplementing much of import.c from scratch.
There is a fairly long history of tools written in Python that allow extending
the import mechanism in various ways, based on the __import__ hook. The
Standard Library includes two such tools: ihooks.py (by GvR) and
imputil.py [1] (Greg Stein), but perhaps the most famous is iu.py by
Gordon McMillan, available as part of his Installer package. Their usefulness
is somewhat limited because they are written in Python; bootstrapping issues
need to be worked around as you can’t load the module containing the hook with
the hook itself. So if you want the entire Standard Library to be loadable
from an import hook, the hook must be written in C.
Use cases
This section lists several existing applications that depend on import hooks.
Among these, a lot of duplicate work was done that could have been saved if
there had been a more flexible import hook at the time. This PEP should make
life a lot easier for similar projects in the future.
Extending the import mechanism is needed when you want to load modules that
are stored in a non-standard way. Examples include modules that are bundled
together in an archive; byte code that is not stored in a pyc formatted
file; modules that are loaded from a database over a network.
The work on this PEP was partly triggered by the implementation of PEP 273,
which adds imports from Zip archives as a built-in feature to Python. While
the PEP itself was widely accepted as a must-have feature, the implementation
left a few things to be desired. For one thing it went to great lengths to
integrate itself with import.c, adding lots of code that was either
specific for Zip file imports or not specific to Zip imports, yet was not
generally useful (or even desirable) either. Yet the PEP 273 implementation
can hardly be blamed for this: it is simply extremely hard to do, given the
current state of import.c.
Packaging applications for end users is a typical use case for import hooks,
if not the typical use case. Distributing lots of source or pyc files
around is not always appropriate (let alone a separate Python installation),
so there is a frequent desire to package all needed modules in a single file.
So frequent in fact that multiple solutions have been implemented over the
years.
The oldest one is included with the Python source code: Freeze [2]. It puts
marshalled byte code into static objects in C source code. Freeze’s “import
hook” is hard wired into import.c, and has a couple of issues. Later
solutions include Fredrik Lundh’s Squeeze, Gordon McMillan’s Installer, and
Thomas Heller’s py2exe [3]. MacPython ships with a tool called
BuildApplication.
Squeeze, Installer and py2exe use an __import__ based scheme (py2exe
currently uses Installer’s iu.py, Squeeze used ihooks.py), MacPython
has two Mac-specific import hooks hard wired into import.c, that are
similar to the Freeze hook. The hooks proposed in this PEP enable us (at
least in theory; it’s not a short-term goal) to get rid of the hard coded
hooks in import.c, and would allow the __import__-based tools to get
rid of most of their import.c emulation code.
Before work on the design and implementation of this PEP was started, a new
BuildApplication-like tool for Mac OS X prompted one of the authors of
this PEP (JvR) to expose the table of frozen modules to Python, in the imp
module. The main reason was to be able to use the freeze import hook
(avoiding fancy __import__ support), yet to also be able to supply a set
of modules at runtime. This resulted in issue #642578 [4], which was
mysteriously accepted (mostly because nobody seemed to care either way ;-).
Yet it is completely superfluous when this PEP gets accepted, as it offers a
much nicer and general way to do the same thing.
Rationale
While experimenting with alternative implementation ideas to get built-in Zip
import, it was discovered that achieving this is possible with only a fairly
small number of changes to import.c. This made it possible to factor out the
Zip-specific stuff into a new source file, while at the same time creating a
general new import hook scheme: the one you’re reading about now.
An earlier design allowed non-string objects on sys.path. Such an object
would have the necessary methods to handle an import. This has two
disadvantages: 1) it breaks code that assumes all items on sys.path are
strings; 2) it is not compatible with the PYTHONPATH environment variable.
The latter is directly needed for Zip imports. A compromise came from Jython:
allow string subclasses on sys.path, which would then act as importer
objects. This avoids some breakage, and seems to work well for Jython (where
it is used to load modules from .jar files), but it was perceived as an
“ugly hack”.
This led to a more elaborate scheme, (mostly copied from McMillan’s
iu.py) in which each candidate in a list is asked whether it can
handle the sys.path item, until one is found that can. This list of
candidates is a new object in the sys module: sys.path_hooks.
Traversing sys.path_hooks for each path item for each new import can be
expensive, so the results are cached in another new object in the sys
module: sys.path_importer_cache. It maps sys.path entries to importer
objects.
To minimize the impact on import.c as well as to avoid adding extra
overhead, it was chosen to not add an explicit hook and importer object for
the existing file system import logic (as iu.py has), but to simply fall
back to the built-in logic if no hook on sys.path_hooks could handle the
path item. If this is the case, a None value is stored in
sys.path_importer_cache, again to avoid repeated lookups. (Later we can
go further and add a real importer object for the built-in mechanism, for now,
the None fallback scheme should suffice.)
A question was raised: what about importers that don’t need any entry on
sys.path? (Built-in and frozen modules fall into that category.) Again,
Gordon McMillan to the rescue: iu.py contains a thing he calls the
metapath. In this PEP’s implementation, it’s a list of importer objects
that is traversed before sys.path. This list is yet another new object
in the sys module: sys.meta_path. Currently, this list is empty by
default, and frozen and built-in module imports are done after traversing
sys.meta_path, but still before sys.path.
Specification part 1: The Importer Protocol
This PEP introduces a new protocol: the “Importer Protocol”. It is important
to understand the context in which the protocol operates, so here is a brief
overview of the outer shells of the import mechanism.
When an import statement is encountered, the interpreter looks up the
__import__ function in the built-in name space. __import__ is then
called with four arguments, amongst which are the name of the module being
imported (may be a dotted name) and a reference to the current global
namespace.
The built-in __import__ function (known as PyImport_ImportModuleEx()
in import.c) will then check to see whether the module doing the import is
a package or a submodule of a package. If it is indeed a (submodule of a)
package, it first tries to do the import relative to the package (the parent
package for a submodule). For example, if a package named “spam” does “import
eggs”, it will first look for a module named “spam.eggs”. If that fails, the
import continues as an absolute import: it will look for a module named
“eggs”. Dotted name imports work pretty much the same: if package “spam” does
“import eggs.bacon” (and “spam.eggs” exists and is itself a package),
“spam.eggs.bacon” is tried. If that fails “eggs.bacon” is tried. (There are
more subtleties that are not described here, but these are not relevant for
implementers of the Importer Protocol.)
Deeper down in the mechanism, a dotted name import is split up by its
components. For “import spam.ham”, first an “import spam” is done, and only
when that succeeds is “ham” imported as a submodule of “spam”.
The Importer Protocol operates at this level of individual imports. By the
time an importer gets a request for “spam.ham”, module “spam” has already been
imported.
The protocol involves two objects: a finder and a loader. A finder object
has a single method:
finder.find_module(fullname, path=None)
This method will be called with the fully qualified name of the module. If
the finder is installed on sys.meta_path, it will receive a second
argument, which is None for a top-level module, or package.__path__
for submodules or subpackages [5]. It should return a loader object if the
module was found, or None if it wasn’t. If find_module() raises an
exception, it will be propagated to the caller, aborting the import.
A loader object also has one method:
loader.load_module(fullname)
This method returns the loaded module or raises an exception, preferably
ImportError if an existing exception is not being propagated. If
load_module() is asked to load a module that it cannot, ImportError is
to be raised.
In many cases the finder and loader can be one and the same object:
finder.find_module() would just return self.
The fullname argument of both methods is the fully qualified module name,
for example “spam.eggs.ham”. As explained above, when
finder.find_module("spam.eggs.ham") is called, “spam.eggs” has already
been imported and added to sys.modules. However, the find_module()
method isn’t necessarily always called during an actual import: meta tools
that analyze import dependencies (such as freeze, Installer or py2exe) don’t
actually load modules, so a finder shouldn’t depend on the parent package
being available in sys.modules.
The load_module() method has a few responsibilities that it must fulfill
before it runs any code:
If there is an existing module object named ‘fullname’ in sys.modules,
the loader must use that existing module. (Otherwise, the reload()
builtin will not work correctly.) If a module named ‘fullname’ does not
exist in sys.modules, the loader must create a new module object and
add it to sys.modules.
Note that the module object must be in sys.modules before the loader
executes the module code. This is crucial because the module code may
(directly or indirectly) import itself; adding it to sys.modules
beforehand prevents unbounded recursion in the worst case and multiple
loading in the best.
If the load fails, the loader needs to remove any module it may have
inserted into sys.modules. If the module was already in sys.modules
then the loader should leave it alone.
The __file__ attribute must be set. This must be a string, but it may
be a dummy value, for example “<frozen>”. The privilege of not having a
__file__ attribute at all is reserved for built-in modules.
The __name__ attribute must be set. If one uses imp.new_module()
then the attribute is set automatically.
If it’s a package, the __path__ variable must be set. This must be a
list, but may be empty if __path__ has no further significance to the
importer (more on this later).
The __loader__ attribute must be set to the loader object. This is
mostly for introspection and reloading, but can be used for
importer-specific extras, for example getting data associated with an
importer.
The __package__ attribute must be set (PEP 366).
If the module is a Python module (as opposed to a built-in module or a
dynamically loaded extension), it should execute the module’s code in the
module’s global name space (module.__dict__).
Here is a minimal pattern for a load_module() method:
# Consider using importlib.util.module_for_loader() to handle
# most of these details for you.
def load_module(self, fullname):
    code = self.get_code(fullname)
    ispkg = self.is_package(fullname)
    mod = sys.modules.setdefault(fullname, imp.new_module(fullname))
    mod.__file__ = "<%s>" % self.__class__.__name__
    mod.__loader__ = self
    if ispkg:
        mod.__path__ = []
        mod.__package__ = fullname
    else:
        mod.__package__ = fullname.rpartition('.')[0]
    exec(code, mod.__dict__)
    return mod
Specification part 2: Registering Hooks
There are two types of import hooks: Meta hooks and Path hooks. Meta
hooks are called at the start of import processing, before any other import
processing (so that meta hooks can override sys.path processing, frozen
modules, or even built-in modules). To register a meta hook, simply add the
finder object to sys.meta_path (the list of registered meta hooks).
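As a minimal sketch (the class name is hypothetical), a meta hook that merely
logs each import request and then declines to handle it could be registered
like this:
import sys

class ImportLogger:
    def find_module(self, fullname, path=None):
        sys.stderr.write("import requested: %s\n" % fullname)
        return None   # None means "not found here"; normal processing continues

sys.meta_path.append(ImportLogger())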
Path hooks are called as part of sys.path (or package.__path__)
processing, at the point where their associated path item is encountered. A
path hook is registered by adding an importer factory to sys.path_hooks.
sys.path_hooks is a list of callables, which will be checked in sequence
to determine if they can handle a given path item. The callable is called
with one argument, the path item. The callable must raise ImportError if
it is unable to handle the path item, and return an importer object if it can
handle the path item. Note that if the callable returns an importer object
for a specific sys.path entry, the builtin import machinery will not be
invoked to handle that entry any longer, even if the importer object later
fails to find a specific module. The callable is typically the class of the
import hook, and hence the class __init__() method is called. (This is
also the reason why it should raise ImportError: an __init__() method
can’t return anything. This would be possible with a __new__() method in
a new style class, but we don’t want to require anything about how a hook is
implemented.)
The results of path hook checks are cached in sys.path_importer_cache,
which is a dictionary mapping path entries to importer objects. The cache is
checked before sys.path_hooks is scanned. If it is necessary to force a
rescan of sys.path_hooks, it is possible to manually clear all or part of
sys.path_importer_cache.
Just like sys.path itself, the new sys variables must have specific
types:
sys.meta_path and sys.path_hooks must be Python lists.
sys.path_importer_cache must be a Python dict.
Modifying these variables in place is allowed, as is replacing them with new
objects.
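Putting the pieces together, a rough sketch of a path hook (the names here are
hypothetical, and the importer deliberately finds nothing) could be:
import sys

class SpamPathImporter:
    def __init__(self, path_item):
        # A path hook must raise ImportError for entries it cannot handle;
        # the machinery then tries the next callable in sys.path_hooks.
        if not path_item.endswith(".spam"):
            raise ImportError("not a .spam path entry")
        self.path_item = path_item

    def find_module(self, fullname, path=None):
        # A real importer would return a loader (often self) when it finds
        # the module; returning None means "not found by this importer".
        return None

sys.path_hooks.append(SpamPathImporter)
sys.path_importer_cache.clear()   # force re-probing of existing path entries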
Packages and the role of __path__
If a module has a __path__ attribute, the import mechanism will treat it
as a package. The __path__ variable is used instead of sys.path when
importing submodules of the package. The rules for sys.path therefore
also apply to pkg.__path__. So sys.path_hooks is also consulted when
pkg.__path__ is traversed. Meta importers don’t necessarily use
sys.path at all to do their work and may therefore ignore the value of
pkg.__path__. In this case it is still advised to set it to a list, which
can be empty.
Optional Extensions to the Importer Protocol
The Importer Protocol defines three optional extensions. One is to retrieve
data files, the second is to support module packaging tools and/or tools that
analyze module dependencies (for example Freeze), while the last is to support
execution of modules as scripts. The latter two categories of tools usually
don’t actually load modules; they only need to know if and where they are
available. All three extensions are highly recommended for general purpose
importers, but may safely be left out if those features aren’t needed.
To retrieve the data for arbitrary “files” from the underlying storage
backend, loader objects may supply a method named get_data():
loader.get_data(path)
This method returns the data as a string, or raises IOError if the “file”
wasn’t found. The data is always returned as if “binary” mode was used -
there is no CRLF translation of text files, for example. It is meant for
importers that have some file-system-like properties. The ‘path’ argument is
a path that can be constructed by munging module.__file__ (or
pkg.__path__ items) with the os.path.* functions, for example:
d = os.path.dirname(__file__)
data = __loader__.get_data(os.path.join(d, "logo.gif"))
The following set of methods may be implemented if support for (for example)
Freeze-like tools is desirable. It consists of three additional methods
which, to make it easier for the caller, should either all be implemented
or none at all:
loader.is_package(fullname)
loader.get_code(fullname)
loader.get_source(fullname)
All three methods should raise ImportError if the module wasn’t found.
The loader.is_package(fullname) method should return True if the
module specified by ‘fullname’ is a package and False if it isn’t.
The loader.get_code(fullname) method should return the code object
associated with the module, or None if it’s a built-in or extension
module. If the loader doesn’t have the code object but it does have the
source code, it should return the compiled source code. (This is so that our
caller doesn’t also need to check get_source() if all it needs is the code
object.)
The loader.get_source(fullname) method should return the source code for
the module as a string (using newline characters for line endings) or None
if the source is not available (yet it should still raise ImportError if
the module can’t be found by the importer at all).
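As an illustrative sketch only (the class and its source-map storage are
hypothetical), the three methods could be implemented together like this:
class SourceMapLoader:
    def __init__(self, sources):
        # Maps a full module name to a (source_text, is_package_flag) pair.
        self._sources = sources

    def _lookup(self, fullname):
        try:
            return self._sources[fullname]
        except KeyError:
            raise ImportError("unknown module: %s" % fullname)

    def is_package(self, fullname):
        return self._lookup(fullname)[1]

    def get_source(self, fullname):
        return self._lookup(fullname)[0]

    def get_code(self, fullname):
        # Compile the stored source; a real importer might cache bytecode.
        return compile(self.get_source(fullname), "<%s>" % fullname, "exec")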
To support execution of modules as scripts (PEP 338),
the above three methods for
finding the code associated with a module must be implemented. In addition to
those methods, the following method may be provided in order to allow the
runpy module to correctly set the __file__ attribute:
loader.get_filename(fullname)
This method should return the value that __file__ would be set to if the
named module was loaded. If the module is not found, then ImportError
should be raised.
Integration with the ‘imp’ module
The new import hooks are not easily integrated in the existing
imp.find_module() and imp.load_module() calls. It’s questionable
whether it’s possible at all without breaking code; it is better to simply add
a new function to the imp module. The meaning of the existing
imp.find_module() and imp.load_module() calls changes from: “they
expose the built-in import mechanism” to “they expose the basic unhooked
built-in import mechanism”. They simply won’t invoke any import hooks. A new
imp module function is proposed (but not yet implemented) under the name
get_loader(), which is used as in the following pattern:
loader = imp.get_loader(fullname, path)
if loader is not None:
    loader.load_module(fullname)
In the case of a “basic” import, one that the imp.find_module() function would
handle, the loader object would be a wrapper for the current output of
imp.find_module(), and loader.load_module() would call
imp.load_module() with that output.
Note that this wrapper is currently not yet implemented, although a Python
prototype exists in the test_importhooks.py script (the ImpWrapper
class) included with the patch.
Forward Compatibility
Existing __import__ hooks will not invoke new-style hooks by magic, unless
they call the original __import__ function as a fallback. For example,
ihooks.py, iu.py and imputil.py are in this sense not forward
compatible with this PEP.
Open Issues
Modules often need supporting data files to do their job, particularly in the
case of complex packages or full applications. Current practice is generally
to locate such files via sys.path (or a package.__path__ attribute).
This approach will not work, in general, for modules loaded via an import
hook.
There are a number of possible ways to address this problem:
“Don’t do that”. If a package needs to locate data files via its
__path__, it is not suitable for loading via an import hook. The
package can still be located on a directory in sys.path, as at present,
so this should not be seen as a major issue.
Locate data files from a standard location, rather than relative to the
module file. A relatively simple approach (which is supported by
distutils) would be to locate data files based on sys.prefix (or
sys.exec_prefix). For example, looking in
os.path.join(sys.prefix, "data", package_name).
Import hooks could offer a standard way of getting at data files relative
to the module file. The standard zipimport object provides a method
get_data(name) which returns the content of the “file” called name,
as a string. To allow modules to get at the importer object, zipimport
also adds an attribute __loader__ to the module, containing the
zipimport object used to load the module. If such an approach is used,
it is important that client code takes care not to break if the
get_data() method is not available, so it is not clear that this
approach offers a general answer to the problem.
It was suggested on python-dev that it would be useful to be able to receive a
list of available modules from an importer and/or a list of available data
files for use with the get_data() method. The protocol could grow two
additional extensions, say list_modules() and list_files(). The
latter makes sense on loader objects with a get_data() method. However,
it’s a bit unclear which object should implement list_modules(): the
importer or the loader or both?
This PEP is biased towards loading modules from alternative places: it
currently doesn’t offer dedicated solutions for loading modules from
alternative file formats or with alternative compilers. In contrast, the
ihooks module from the standard library does have a fairly straightforward
way to do this. The Quixote project [7] uses this technique to import PTL
files as if they are ordinary Python modules. To do the same with the new
hooks would either mean to add a new module implementing a subset of
ihooks as a new-style importer, or add a hookable built-in path importer
object.
There is no specific support within this PEP for “stacking” hooks. For
example, it is not obvious how to write a hook to load modules from tar.gz
files by combining separate hooks to load modules from .tar and .gz
files. However, there is no support for such stacking in the existing hook
mechanisms (either the basic “replace __import__” method, or any of the
existing import hook modules) and so this functionality is not an obvious
requirement of the new mechanism. It may be worth considering as a future
enhancement, however.
It is possible (via sys.meta_path) to add hooks which run before
sys.path is processed. However, there is no equivalent way of adding
hooks to run after sys.path is processed. For now, if a hook is required
after sys.path has been processed, it can be simulated by adding an
arbitrary “cookie” string at the end of sys.path, and having the required
hook associated with this cookie, via the normal sys.path_hooks
processing. In the longer term, the path handling code will become a “real”
hook on sys.meta_path, and at that stage it will be possible to insert
user-defined hooks either before or after it.
Implementation
The PEP 302 implementation has been integrated with Python as of 2.3a1. An
earlier version is available as patch #652586 [9], but more interestingly,
the issue contains a fairly detailed history of the development and design.
PEP 273 has been implemented using PEP 302’s import hooks.
References and Footnotes
[1]
imputil module
http://docs.python.org/library/imputil.html
[2]
The Freeze tool.
See also the Tools/freeze/ directory in a Python source distribution
[3]
py2exe by Thomas Heller
http://www.py2exe.org/
[4]
imp.set_frozenmodules() patch
http://bugs.python.org/issue642578
[5]
The path argument to finder.find_module() is there because the
pkg.__path__ variable may be needed at this point. It may either come
from the actual parent module or be supplied by imp.find_module() or
the proposed imp.get_loader() function.
[7]
Quixote, a framework for developing Web applications
http://www.mems-exchange.org/software/quixote/
[9]
New import hooks + Import from Zip files
http://bugs.python.org/issue652586
[10]
Language reference for imports
http://docs.python.org/3/reference/import.html
[11]
importlib documentation
http://docs.python.org/3/library/importlib.html#module-importlib
Copyright
This document has been placed in the public domain.
| Final | PEP 302 – New Import Hooks | Standards Track | This PEP proposes to add a new set of import hooks that offer better
customization of the Python import mechanism. Contrary to the current
__import__ hook, a new-style hook can be injected into the existing
scheme, allowing for a finer grained control of how modules are found and how
they are loaded. |
PEP 303 – Extend divmod() for Multiple Divisors
Author:
Thomas Bellman <bellman+pep-divmod at lysator.liu.se>
Status:
Rejected
Type:
Standards Track
Created:
31-Dec-2002
Python-Version:
2.3
Post-History:
Table of Contents
Abstract
Pronouncement
Specification
Motivation
Rationale
Backwards Compatibility
Reference Implementation
References
Copyright
Abstract
This PEP describes an extension to the built-in divmod() function,
allowing it to take multiple divisors, chaining several calls to
divmod() into one.
Pronouncement
This PEP is rejected. Most uses for chained divmod() involve a
constant modulus (in radix conversions for example) and are more
properly coded as a loop. The example of splitting seconds
into days/hours/minutes/seconds does not generalize to months
and years; rather, the whole use case is handled more flexibly and
robustly by date and time modules. The other use cases mentioned
in the PEP are somewhat rare in real code. The proposal is also
problematic in terms of clarity and obviousness. In the examples,
it is not immediately clear that the argument order is correct or
that the target tuple is of the right length. Users from other
languages are more likely to understand the standard two argument
form without having to re-read the documentation. See python-dev
discussion on 17 June 2005 [1].
Specification
The built-in divmod() function would be changed to accept multiple
divisors, changing its signature from divmod(dividend, divisor) to
divmod(dividend, *divisors). The dividend is divided by the last
divisor, giving a quotient and a remainder. The quotient is then
divided by the second to last divisor, giving a new quotient and
remainder. This is repeated until all divisors have been used,
and divmod() then returns a tuple consisting of the quotient from
the last step, and the remainders from all the steps.
A Python implementation of the new divmod() behaviour could look
like:
def divmod(dividend, *divisors):
    modulos = ()
    q = dividend
    while divisors:
        q, r = q.__divmod__(divisors[-1])
        modulos = (r,) + modulos
        divisors = divisors[:-1]
    return (q,) + modulos
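For example, under the proposed semantics the implementation above would
give:
divmod(1000000, 7, 24, 60, 60)   # -> (1, 4, 13, 46, 40)
divmod(1000000)                  # -> (1000000,) when no divisors are given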
Motivation
Occasionally one wants to perform a chain of divmod() operations,
calling divmod() on the quotient from the previous step, with
varying divisors. The most common case is probably converting a
number of seconds into weeks, days, hours, minutes and seconds.
This would today be written as:
def secs_to_wdhms(seconds):
    m, s = divmod(seconds, 60)
    h, m = divmod(m, 60)
    d, h = divmod(h, 24)
    w, d = divmod(d, 7)
    return (w, d, h, m, s)
This is tedious and easy to get wrong each time you need it.
If instead the divmod() built-in is changed according to the proposal,
the code for converting seconds to weeks, days, hours, minutes and
seconds then becomes
def secs_to_wdhms(seconds):
    w, d, h, m, s = divmod(seconds, 7, 24, 60, 60)
    return (w, d, h, m, s)
which is easier to type, easier to type correctly, and easier to
read.
Other applications are:
Astronomical angles (declination is measured in degrees, minutes
and seconds, right ascension is measured in hours, minutes and
seconds).
Old British currency (1 pound = 20 shilling, 1 shilling = 12 pence).
Anglo-Saxon length units: 1 mile = 1760 yards, 1 yard = 3 feet,
1 foot = 12 inches.
Anglo-Saxon weight units: 1 long ton = 160 stone, 1 stone = 14
pounds, 1 pound = 16 ounce, 1 ounce = 16 dram.
British volumes: 1 gallon = 4 quart, 1 quart = 2 pint, 1 pint
= 20 fluid ounces.
Rationale
The idea comes from APL, which has an operator that does this. (I
don’t remember what the operator looks like, and it would probably
be impossible to render in ASCII anyway.)
The APL operator takes a list as its second operand, while this
PEP proposes that each divisor should be a separate argument to
the divmod() function. This is mainly because it is expected that
the most common uses will have the divisors as constants right in
the call (as the 7, 24, 60, 60 above), and adding a set of
parentheses or brackets would just clutter the call.
Requiring an explicit sequence as the second argument to divmod()
would seriously break backwards compatibility. Making divmod()
check its second argument for being a sequence is deemed to be too
ugly to contemplate. And in the case where one does have a
sequence that is computed other-where, it is easy enough to write
divmod(x, *divs) instead.
Requiring at least one divisor, i.e. rejecting divmod(x), has been
considered, but no good reason to do so has come to mind; it is thus
allowed in the name of generality.
Calling divmod() with no divisors should still return a tuple (of
one element). Code that calls divmod() with a varying number of
divisors, and thus gets a return value with an “unknown” number of
elements, would otherwise have to special case that case. Code
that knows it is calling divmod() with no divisors is considered
to be too silly to warrant a special case.
Processing the divisors in the other direction, i.e. dividing with
the first divisor first, instead of dividing with the last divisor
first, has been considered. However, the result comes with the
most significant part first and the least significant part last
(think of the chained divmod as a way of splitting a number into
“digits”, with varying weights), and it is reasonable to specify
the divisors (weights) in the same order as the result.
The inverse operation:
def inverse_divmod(seq, *factors):
    product = seq[0]
    for x, y in zip(factors, seq[1:]):
        product = product * x + y
    return product
could also be useful. However, writing
seconds = (((((w * 7) + d) * 24 + h) * 60 + m) * 60 + s)
is less cumbersome both to write and to read than the chained
divmods. It is therefore deemed to be less important, and its
introduction can be deferred to its own PEP. Also, such a
function needs a good name, and the PEP author has not managed to
come up with one yet.
Calling divmod("spam") does not raise an error, despite strings
supporting neither division nor modulo. However, unless we know
the other object too, we can’t determine whether divmod() would
work or not, and thus it seems silly to forbid it.
Backwards Compatibility
Any module that replaces the divmod() function in the __builtin__
module may cause other modules using the new syntax to break. It
is expected that this is very uncommon.
Code that expects a TypeError exception when calling divmod() with
anything but two arguments will break. This is also expected to
be very uncommon.
No other issues regarding backwards compatibility are known.
Reference Implementation
Not finished yet, but it seems a rather straightforward
new implementation of the function builtin_divmod() in
Python/bltinmodule.c.
References
[1]
Raymond Hettinger, “Propose rejection of PEP 303 – Extend divmod() for
Multiple Divisors” https://mail.python.org/pipermail/python-dev/2005-June/054283.html
Copyright
This document has been placed in the public domain.
| Rejected | PEP 303 – Extend divmod() for Multiple Divisors | Standards Track | This PEP describes an extension to the built-in divmod() function,
allowing it to take multiple divisors, chaining several calls to
divmod() into one. |
PEP 304 – Controlling Generation of Bytecode Files
Author:
Skip Montanaro
Status:
Withdrawn
Type:
Standards Track
Created:
22-Jan-2003
Post-History:
27-Jan-2003, 31-Jan-2003, 17-Jun-2005
Table of Contents
Historical Note
Abstract
Proposal
Glossary
Locating bytecode files
Writing bytecode files
Defining augmented directories
Fixing the location of the bytecode base
Rationale
Alternatives
Issues
Examples
Implementation
References
Copyright
Historical Note
While this original PEP was withdrawn, a variant of this feature
was eventually implemented for Python 3.8 in https://bugs.python.org/issue33499
Several of the issues and concerns originally raised in this PEP were resolved
by other changes in the intervening years:
the introduction of isolated mode to handle potential security concerns
the switch to importlib, a fully import-hook based import system implementation
PEP 3147’s change in the bytecode cache layout to use __pycache__
subdirectories, including the source_to_cache(path) and
cache_to_source(path) APIs that allow the interpreter to automatically
handle the redirection to a separate cache directory
Abstract
This PEP outlines a mechanism for controlling the generation and
location of compiled Python bytecode files. This idea originally
arose as a patch request [1] and evolved into a discussion thread on
the python-dev mailing list [2]. The introduction of an environment
variable will allow people installing Python or Python-based
third-party packages to control whether or not bytecode files should
be generated at installation time, and if so, where they should be
written. It will also allow users to control whether or not bytecode
files should be generated at application run-time, and if so, where
they should be written.
Proposal
Add a new environment variable, PYTHONBYTECODEBASE, to the mix of
environment variables which Python understands. PYTHONBYTECODEBASE is
interpreted as follows:
If not defined, Python bytecode is generated in exactly the same way
as is currently done. sys.bytecodebase is set to the root directory
(either / on Unix and Mac OSX or the root directory of the startup
(installation???) drive – typically C:\ – on Windows).
If defined and it refers to an existing directory to which the user
has write permission, sys.bytecodebase is set to that directory and
bytecode files are written into a directory structure rooted at that
location.
If defined but empty, sys.bytecodebase is set to None and generation
of bytecode files is suppressed altogether.
If defined and one of the following is true:
it does not refer to a directory,
it refers to a directory, but not one for which the user has write
permission
a warning is displayed, sys.bytecodebase is set to None and
generation of bytecode files is suppressed altogether.
After startup initialization, all runtime references are to
sys.bytecodebase, not the PYTHONBYTECODEBASE environment variable.
sys.path is not modified.
From the above, we see sys.bytecodebase can only take on two valid
types of values: None or a string referring to a valid directory on
the system.
During import, this extension works as follows:
The normal search for a module is conducted. The search order is
roughly: dynamically loaded extension module, Python source file,
Python bytecode file. The only time this mechanism comes into play
is if a Python source file is found.
Once we’ve found a source module, an attempt to read a byte-compiled
file in the same directory is made. (This is the same as before.)
If no byte-compiled file is found, an attempt to read a
byte-compiled file from the augmented directory is made.
If bytecode generation is required, the generated bytecode is written
to the augmented directory if possible.
Note that this PEP is explicitly not about providing
module-by-module or directory-by-directory control over the
disposition of bytecode files.
Glossary
“bytecode base” refers to the current setting of
sys.bytecodebase.
“augmented directory” refers to the directory formed from the
bytecode base and the directory name of the source file.
PYTHONBYTECODEBASE refers to the environment variable when necessary
to distinguish it from “bytecode base”.
Locating bytecode files
When the interpreter is searching for a module, it will use sys.path
as usual. However, when a possible bytecode file is considered, an
extra probe for a bytecode file may be made. First, a check is made
for the bytecode file using the directory in sys.path which holds the
source file (the current behavior). If a valid bytecode file is not
found there (either one does not exist or exists but is out-of-date)
and the bytecode base is not None, a second probe is made using the
directory in sys.path prefixed appropriately by the bytecode base.
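A rough sketch of that two-step probe (the helper name is hypothetical, the
staleness check against the source file is omitted, and sys.bytecodebase
exists only under this proposal) might look like:
import os
import sys

def probe_bytecode(sourcefile):
    # First probe: the bytecode file next to the source (current behaviour).
    direct = sourcefile + "c"                  # e.g. urllib.py -> urllib.pyc
    if os.path.exists(direct):
        return direct
    # Second probe: the same path prefixed by the bytecode base, if any.
    if sys.bytecodebase is None:
        return None
    relative = sourcefile
    if relative[0] == os.sep:
        relative = relative[1:]
    augmented = os.path.join(sys.bytecodebase, relative + "c")
    if os.path.exists(augmented):
        return augmented
    return None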
Writing bytecode files
When the bytecode base is not None, a new bytecode file is written to
the appropriate augmented directory, never directly to a directory in
sys.path.
Defining augmented directories
Conceptually, the augmented directory for a bytecode file is the
directory in which the source file exists prefixed by the bytecode
base. In a Unix environment this would be:
pcb = os.path.abspath(sys.bytecodebase)
if sourcefile[0] == os.sep: sourcefile = sourcefile[1:]
augdir = os.path.join(pcb, os.path.dirname(sourcefile))
On Windows, which does not have a single-rooted directory tree, the
drive letter of the directory containing the source file is treated as
a directory component after removing the trailing colon. The
augmented directory is thus derived as
pcb = os.path.abspath(sys.bytecodebase)
drive, base = os.path.splitdrive(os.path.dirname(sourcefile))
drive = drive[:-1]
if base[0] == "\\": base = base[1:]
augdir = os.path.join(pcb, drive, base)
Fixing the location of the bytecode base
During program startup, the value of the PYTHONBYTECODEBASE
environment variable is made absolute, checked for validity and added
to the sys module, effectively:
pcb = os.path.abspath(os.environ["PYTHONBYTECODEBASE"])
probe = os.path.join(pcb, "foo")
try:
    open(probe, "w")
except IOError:
    sys.bytecodebase = None
else:
    os.unlink(probe)
    sys.bytecodebase = pcb
This allows the user to specify the bytecode base as a relative path,
but not have it subject to changes to the current working directory
during program execution. (I can’t imagine you’d want it to move
around during program execution.)
There is nothing special about sys.bytecodebase. The user may change
it at runtime if desired, but normally it will not be modified.
Rationale
In many environments it is not possible for non-root users to write
into directories containing Python source files. Most of the time,
this is not a problem as Python source is generally byte compiled
during installation. However, there are situations where bytecode
files are either missing or need to be updated. If the directory
containing the source file is not writable by the current user a
performance penalty is incurred each time a program importing the
module is run. [3] Warning messages may also be generated in certain
circumstances. If the directory is writable, nearly simultaneous
attempts to write the bytecode file by two separate processes
may occur, resulting in file corruption. [4]
In environments with RAM disks available, it may be desirable for
performance reasons to write bytecode files to a directory on such a
disk. Similarly, in environments where Python source code resides on
network file systems, it may be desirable to cache bytecode files on
local disks.
Alternatives
The only other alternative proposed so far [1] seems to be to add a
-R flag to the interpreter to disable writing bytecode files
altogether. This proposal subsumes that. Adding a command-line
option is certainly possible, but is probably not sufficient, as the
interpreter’s command line is not readily available during
installation (early during program startup???).
Issues
Interpretation of a module’s __file__ attribute. I believe the
__file__ attribute of a module should reflect the true location of
the bytecode file. If people want to locate a module’s source code,
they should use imp.find_module(module).
Security - What if root has PYTHONBYTECODEBASE set? Yes, this can
present a security risk, but so can many other things the root user
does. The root user should probably not set PYTHONBYTECODEBASE
except possibly during installation. Still, perhaps this problem
can be minimized. When running as root the interpreter should check
to see if PYTHONBYTECODEBASE refers to a directory which is writable
by anyone other than root. If so, it could raise an exception or
warning and set sys.bytecodebase to None. Or, see the next item.
More security - What if PYTHONBYTECODEBASE refers to a general
directory (say, /tmp)? In this case, perhaps loading of a
preexisting bytecode file should occur only if the file is owned by
the current user or root. (Does this matter on Windows?)
The interaction of this PEP with import hooks has not been
considered yet. In fact, the best way to implement this idea might
be as an import hook. See PEP 302.
In the current (pre-PEP 304) environment, it is safe to delete a
source file after the corresponding bytecode file has been created,
since they reside in the same directory. With PEP 304 as currently
defined, this is not the case. A bytecode file in the augmented
directory is only considered when the source file is present, and is
thus never considered when looking for module files ending in
“.pyc”. I think this behavior may have to change.
Examples
In the examples which follow, the urllib source code resides in
/usr/lib/python2.3/urllib.py and /usr/lib/python2.3 is in sys.path but
is not writable by the current user.
The bytecode base is /tmp. /usr/lib/python2.3/urllib.pyc exists and
is valid. When urllib is imported, the contents of
/usr/lib/python2.3/urllib.pyc are used. The augmented directory is
not consulted. No other bytecode file is generated.
The bytecode base is /tmp. /usr/lib/python2.3/urllib.pyc exists,
but is out-of-date. When urllib is imported, the generated bytecode
file is written to urllib.pyc in the augmented directory which has
the value /tmp/usr/lib/python2.3. Intermediate directories will be
created as needed.
The bytecode base is None. No urllib.pyc file is found. When
urllib is imported, no bytecode file is written.
The bytecode base is /tmp. No urllib.pyc file is found. When
urllib is imported, the generated bytecode file is written to the
augmented directory which has the value /tmp/usr/lib/python2.3.
Intermediate directories will be created as needed.
At startup, PYTHONBYTECODEBASE is /tmp/foobar, which does not exist.
A warning is emitted, sys.bytecodebase is set to None and no
bytecode files are written during program execution unless
sys.bytecodebase is later changed to refer to a valid,
writable directory.
At startup, PYTHONBYTECODEBASE is set to /, which exists, but is not
writable by the current user. A warning is emitted,
sys.bytecodebase is set to None and no bytecode files are
written during program execution unless sys.bytecodebase is
later changed to refer to a valid, writable directory. Note that
even though the augmented directory constructed for a particular
bytecode file may be writable by the current user, what counts is
that the bytecode base directory itself is writable.
At startup PYTHONBYTECODEBASE is set to the empty string.
sys.bytecodebase is set to None. No warning is generated, however.
If no urllib.pyc file is found when urllib is imported, no bytecode
file is written.
In the Windows examples which follow, the urllib source code resides
in C:\PYTHON22\urllib.py. C:\PYTHON22 is in sys.path but is
not writable by the current user.
The bytecode base is set to C:\TEMP. C:\PYTHON22\urllib.pyc
exists and is valid. When urllib is imported, the contents of
C:\PYTHON22\urllib.pyc are used. The augmented directory is not
consulted.
The bytecode base is set to C:\TEMP. C:\PYTHON22\urllib.pyc
exists, but is out-of-date. When urllib is imported, a new bytecode
file is written to the augmented directory which has the value
C:\TEMP\C\PYTHON22. Intermediate directories will be created as
needed.
At startup PYTHONBYTECODEBASE is set to TEMP and the current
working directory at application startup is H:\NET. The
potential bytecode base is thus H:\NET\TEMP. If this directory
exists and is writable by the current user, sys.bytecodebase will be
set to that value. If not, a warning will be emitted and
sys.bytecodebase will be set to None.
The bytecode base is C:\TEMP. No urllib.pyc file is found.
When urllib is imported, the generated bytecode file is written to
the augmented directory which has the value C:\TEMP\C\PYTHON22.
Intermediate directories will be created as needed.
Implementation
See the patch on Sourceforge. [6]
References
[1] (1, 2)
patch 602345, Option for not writing py.[co] files, Klose
(https://bugs.python.org/issue602345)
[2]
python-dev thread, Disable writing .py[co], Norwitz
(https://mail.python.org/pipermail/python-dev/2003-January/032270.html)
[3]
Debian bug report, Mailman is writing to /usr in cron, Wegner
(http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=96111)
[4]
python-dev thread, Parallel pyc construction, Dubois
(https://mail.python.org/pipermail/python-dev/2003-January/032060.html)
[6]
patch 677103, PYTHONBYTECODEBASE patch (PEP 304), Montanaro
(https://bugs.python.org/issue677103)
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 304 – Controlling Generation of Bytecode Files | Standards Track | This PEP outlines a mechanism for controlling the generation and
location of compiled Python bytecode files. This idea originally
arose as a patch request [1] and evolved into a discussion thread on
the python-dev mailing list [2]. The introduction of an environment
variable will allow people installing Python or Python-based
third-party packages to control whether or not bytecode files should
be generated at installation time, and if so, where they should be
written. It will also allow users to control whether or not bytecode
files should be generated at application run-time, and if so, where
they should be written. |
PEP 306 – How to Change Python’s Grammar
Author:
Michael Hudson <mwh at python.net>, Jack Diederich <jackdied at gmail.com>, Alyssa Coghlan <ncoghlan at gmail.com>, Benjamin Peterson <benjamin at python.org>
Status:
Withdrawn
Type:
Informational
Created:
29-Jan-2003
Post-History:
30-Jan-2003
Table of Contents
Note
Abstract
Rationale
Checklist
References
Copyright
Note
This PEP has been moved to the Python dev guide [1].
Abstract
There’s more to changing Python’s grammar than editing
Grammar/Grammar and Python/compile.c. This PEP aims to be a
checklist of places that must also be fixed.
It is probably incomplete. If you see omissions, just add them if
you can – you are not going to offend the author’s sense of
ownership. Otherwise submit a bug or patch and assign it to mwh.
This PEP is not intended to be an instruction manual on Python
grammar hacking, for several reasons.
Rationale
People are getting this wrong all the time; it took well over a
year before someone noticed [2] that adding the floor division
operator (//) broke the parser module.
Checklist
Grammar/Grammar: OK, you’d probably worked this one out :)
Parser/Python.asdl may need changes to match the Grammar. Run
make to regenerate Include/Python-ast.h and
Python/Python-ast.c.
Python/ast.c will need changes to create the AST objects
involved with the Grammar change. Lib/compiler/ast.py will
need matching changes to the pure-python AST objects.
Parser/pgen needs to be rerun to regenerate Include/graminit.h
and Python/graminit.c. (make should handle this for you.)
Python/symtable.c: This handles the symbol collection pass
that happens immediately before the compilation pass.
Python/compile.c: You will need to create or modify the
compiler_* functions to generate opcodes for your productions.
You may need to regenerate Lib/symbol.py and/or Lib/token.py
and/or Lib/keyword.py.
The parser module. Add some of your new syntax to test_parser,
bang on Modules/parsermodule.c until it passes.
Add some usage of your new syntax to test_grammar.py.
The compiler package. A good test is to compile the standard
library and test suite with the compiler package and then check
it runs. Note that this only needs to be done in Python 2.x.
If you’ve gone so far as to change the token structure of
Python, then the Lib/tokenize.py library module will need to
be changed.
Certain changes may require tweaks to the library module
pyclbr.
Documentation must be written!
After everything’s been checked in, you’re likely to see a new
change to Python/Python-ast.c. This is because this
(generated) file contains the SVN version of the source from
which it was generated. There’s no way to avoid this; you just
have to submit this file separately.
References
[1]
CPython Developer’s Guide: Changing CPython’s Grammar
https://devguide.python.org/grammar/
[2]
SF Bug #676521, parser module validation failure
https://bugs.python.org/issue676521
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 306 – How to Change Python’s Grammar | Informational | There’s more to changing Python’s grammar than editing
Grammar/Grammar and Python/compile.c. This PEP aims to be a
checklist of places that must also be fixed. |
PEP 309 – Partial Function Application
Author:
Peter Harris <scav at blueyonder.co.uk>
Status:
Final
Type:
Standards Track
Created:
08-Feb-2003
Python-Version:
2.5
Post-History:
10-Feb-2003, 27-Feb-2003, 22-Feb-2004, 28-Apr-2006
Table of Contents
Note
Abstract
Acceptance
Motivation
Example Implementation
Examples of Use
Abandoned Syntax Proposal
Feedback from comp.lang.python and python-dev
Summary
References
Copyright
Note
Following the acceptance of this PEP, further discussion on python-dev and
comp.lang.python revealed a desire for several tools that operated on
function objects, but were not related to functional programming. Rather
than create a new module for these tools, it was agreed [1] that the
“functional” module be renamed to “functools” to reflect its newly-widened
focus.
References in this PEP to a “functional” module have been left in for
historical reasons.
Abstract
This proposal is for a function or callable class that allows a new
callable to be constructed from a callable and a partial argument list
(including positional and keyword arguments).
I propose a standard library module called “functional”, to hold
useful higher-order functions, including the implementation of
partial().
An implementation has been submitted to SourceForge [2].
Acceptance
Patch #941881 was accepted and applied in 2005 for Py2.5. It is
essentially as outlined here, a partial() type constructor binding
leftmost positional arguments and any keywords. The partial object has
three read-only attributes func, args, and keywords. Calls to the partial
object can specify keywords that override those in the object itself.
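As a quick illustration of the accepted behaviour (using functools, the
final name of the module as explained in the Note above):
from functools import partial

p = partial(int, base=2)
print p.func, p.args, p.keywords    # <type 'int'> () {'base': 2}
print p("1010")                     # 10
print p("ff", base=16)              # 255; call keywords override stored ones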
There is a separate and continuing discussion of whether to modify the
partial implementation with a __get__ method to more closely emulate
the behavior of an equivalent function.
Motivation
In functional programming, function currying is a way of implementing
multi-argument functions in terms of single-argument functions. A
function with N arguments is really a function with 1 argument that
returns another function taking (N-1) arguments. Function application
in languages like Haskell and ML works such that a function call:
f x y z
actually means:
(((f x) y) z)
This would be only an obscure theoretical issue except that in actual
programming it turns out to be very useful. Expressing a function in
terms of partial application of arguments to another function can be
both elegant and powerful, and in functional languages it is heavily
used.
In some functional languages, (e.g. Miranda) you can use an expression
such as (+1) to mean the equivalent of Python’s
(lambda x: x + 1).
In general, languages like that are strongly typed, so the compiler
always knows the number of arguments expected and can do the right
thing when presented with a functor and fewer arguments than expected.
Python does not implement multi-argument functions by currying, so if
you want a function with partially-applied arguments you would
probably use a lambda as above, or define a named function for each
instance.
However, lambda syntax is not to everyone’s taste, to say the least.
Furthermore, Python’s flexible parameter passing using both positional
and keyword presents an opportunity to generalise the idea of partial
application and do things that lambda cannot.
Example Implementation
Here is one way to create a callable with partially-applied
arguments in Python. The implementation below is based on improvements
provided by Scott David Daniels:
class partial(object):

    def __init__(*args, **kw):
        self = args[0]
        self.fn, self.args, self.kw = (args[1], args[2:], kw)

    def __call__(self, *args, **kw):
        if kw and self.kw:
            d = self.kw.copy()
            d.update(kw)
        else:
            d = kw or self.kw
        return self.fn(*(self.args + args), **d)
(A recipe similar to this has been in the Python Cookbook for some
time [3].)
Note that when the object is called as though it were a function,
positional arguments are appended to those provided to the
constructor, and keyword arguments override and augment those provided
to the constructor.
Positional arguments, keyword arguments or both can be supplied
when creating the object and when calling it.
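A short usage sketch of the recipe above; report is a made-up helper,
chosen only to show positional arguments being appended and keyword
arguments being overridden and augmented:
def report(a, b, sep=", ", end="."):
    return str(a) + sep + str(b) + end

p = partial(report, "left", sep=" | ")
print p("right")              # left | right.
print p("right", end="!")     # left | right!  (end augments, sep is kept)
print p("right", sep=" - ")   # left - right.  (call overrides the stored sep)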
Examples of Use
So partial(operator.add, 1) is a bit like (lambda x: 1 + x).
Not an example where you see the benefits, of course.
Note too, that you could wrap a class in the same way, since classes
themselves are callable factories for objects. So in some cases,
rather than defining a subclass, you can specialise classes by partial
application of the arguments to the constructor.
For example, partial(Tkinter.Label, fg='blue') makes Tkinter
Labels that have a blue foreground by default.
Here’s a simple example that uses partial application to construct
callbacks for Tkinter widgets on the fly:
from Tkinter import Tk, Canvas, Button
import sys
from functional import partial

win = Tk()
c = Canvas(win,width=200,height=50)
c.pack()

for colour in sys.argv[1:]:
    b = Button(win, text=colour,
               command=partial(c.config, bg=colour))
    b.pack(side='left')

win.mainloop()
Abandoned Syntax Proposal
I originally suggested the syntax fn@(*args, **kw), meaning the
same as partial(fn, *args, **kw).
The @ sign is used in some assembly languages to imply register
indirection, and the use here is also a kind of indirection.
f@(x) is not f(x), but a thing that becomes f(x) when you
call it.
It was not well-received, so I have withdrawn this part of the
proposal. In any case, @ has been taken for the new decorator syntax.
Feedback from comp.lang.python and python-dev
Among the opinions voiced were the following (which I summarise):
Lambda is good enough.
The @ syntax is ugly (unanimous).
It’s really a curry rather than a closure. There is an almost
identical implementation of a curry class on ActiveState’s Python
Cookbook.
A curry class would indeed be a useful addition to the standard
library.
It isn’t function currying, but partial application. Hence the
name is now proposed to be partial().
It maybe isn’t useful enough to be in the built-ins.
The idea of a module called functional was well received, and
there are other things that belong there (for example function
composition).
For completeness, another object that appends partial arguments
after those supplied in the function call (maybe called
rightcurry) has been suggested.
I agree that lambda is usually good enough, just not always. And I
want the possibility of useful introspection and subclassing.
I disagree that @ is particularly ugly, but it may be that I’m just
weird. We have dictionary, list and tuple literals neatly
differentiated by special punctuation – a way of directly expressing
partially-applied function literals is not such a stretch. However,
not one single person has said they like it, so as far as I’m
concerned it’s a dead parrot.
I concur with calling the class partial rather than curry or closure,
so I have amended the proposal in this PEP accordingly. But not
throughout: some incorrect references to ‘curry’ have been left in
since that’s where the discussion was at the time.
Partially applying arguments from the right, or inserting arguments at
arbitrary positions creates its own problems, but pending discovery of
a good implementation and non-confusing semantics, I don’t think it
should be ruled out.
Carl Banks posted an implementation as a real functional closure:
def curry(fn, *cargs, **ckwargs):
    def call_fn(*fargs, **fkwargs):
        d = ckwargs.copy()
        d.update(fkwargs)
        return fn(*(cargs + fargs), **d)
    return call_fn
which he assures me is more efficient.
I also coded the class in Pyrex, to estimate how the performance
might be improved by coding it in C:
cdef class curry:

    cdef object fn, args, kw

    def __init__(self, fn, *args, **kw):
        self.fn=fn
        self.args=args
        self.kw = kw

    def __call__(self, *args, **kw):
        if self.kw:    # from Python Cookbook version
            d = self.kw.copy()
            d.update(kw)
        else:
            d=kw
        return self.fn(*(self.args + args), **d)
The performance gain in Pyrex is less than 100% over the nested
function implementation, since to be fully general it has to operate
by Python API calls. For the same reason, a C implementation will be
unlikely to be much faster, so the case for a built-in coded in C is
not very strong.
Summary
I prefer that some means to partially-apply functions and other
callables should be present in the standard library.
A standard library module functional should contain an
implementation of partial, and any other higher-order functions
the community want. Other functions that might belong there fall
outside the scope of this PEP though.
Patches for the implementation, documentation and unit tests (SF
patches 931005, 931007, and 931010 respectively) have been
submitted but not yet checked in.
A C implementation by Hye-Shik Chang has also been submitted, although
it is not expected to be included until after the Python
implementation has proven itself useful enough to be worth optimising.
References
[1]
https://mail.python.org/pipermail/python-dev/2006-March/062290.html
[2]
Patches 931005, 931007, and 931010.
[3]
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52549
Copyright
This document has been placed in the public domain.
| Final | PEP 309 – Partial Function Application | Standards Track | This proposal is for a function or callable class that allows a new
callable to be constructed from a callable and a partial argument list
(including positional and keyword arguments). |
PEP 310 – Reliable Acquisition/Release Pairs
Author:
Michael Hudson <mwh at python.net>,
Paul Moore <p.f.moore at gmail.com>
Status:
Rejected
Type:
Standards Track
Created:
18-Dec-2002
Python-Version:
2.4
Post-History:
Table of Contents
Abstract
Pronouncement
Rationale
Basic Syntax and Semantics
Possible Extensions
Multiple expressions
Exception handling
Implementation Notes
Open Issues
Alternative Ideas
Backwards Compatibility
Cost of Adoption
Cost of Non-Adoption
References
Copyright
Abstract
It would be nice to have a less typing-intense way of writing:
the_lock.acquire()
try:
    ....
finally:
    the_lock.release()
This PEP proposes a piece of syntax (a ‘with’ block) and a
“small-i” interface that generalizes the above.
Pronouncement
This PEP is rejected in favor of PEP 343.
Rationale
One of the advantages of Python’s exception handling philosophy is
that it makes it harder to do the “wrong” thing (e.g. failing to
check the return value of some system call). Currently, this does
not apply to resource cleanup. The current syntax for acquisition
and release of a resource (for example, a lock) is:
the_lock.acquire()
try:
    ....
finally:
    the_lock.release()
This syntax separates the acquisition and release by a (possibly
large) block of code, which makes it difficult to confirm “at a
glance” that the code manages the resource correctly. Another
common error is to code the “acquire” call within the try block,
which incorrectly releases the lock if the acquire fails.
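For example, the following (incorrect) arrangement still runs the
release in the finally clause even when acquire() itself raised, so a
lock that was never acquired gets “released”:
try:
    the_lock.acquire()
    ....
finally:
    the_lock.release()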
Basic Syntax and Semantics
The syntax of a ‘with’ statement is as follows:
'with' [ var '=' ] expr ':'
    suite
This statement is defined as being equivalent to the following
sequence of statements:
var = expr
if hasattr(var, "__enter__"):
    var.__enter__()
try:
    suite
finally:
    var.__exit__()
(The presence of an __exit__ method is not checked like that of
__enter__ to ensure that using inappropriate objects in with:
statements gives an error).
If the variable is omitted, an unnamed object is allocated on the
stack. In that case, the suite has no access to the unnamed object.
Possible Extensions
A number of potential extensions to the basic syntax have been
discussed on the Python Developers list. None of these extensions
are included in the solution proposed by this PEP. In many cases,
the arguments are nearly equally strong in both directions. In
such cases, the PEP has always chosen simplicity, simply because
where extra power is needed, the existing try block is available.
Multiple expressions
One proposal was for allowing multiple expressions within one
‘with’ statement. The __enter__ methods would be called left to
right, and the __exit__ methods right to left. The advantage of
doing so is that where more than one resource is being managed,
nested ‘with’ statements will result in code drifting towards the
right margin. The solution to this problem is the same as for any
other deep nesting - factor out some of the code into a separate
function. Furthermore, the question of what happens if one of the
__exit__ methods raises an exception (should the other __exit__
methods be called?) needs to be addressed.
Exception handling
An extension to the protocol to include an optional __except__
handler, which is called when an exception is raised, and which
can handle or re-raise the exception, has been suggested. It is
not at all clear that the semantics of this extension can be made
precise and understandable. For example, should the equivalent
code be try ... except ... else if an exception handler is
defined, and try ... finally if not? How can this be determined
at compile time, in general? The alternative is to define the
code as expanding to a try ... except inside a try ... finally.
But this may not do the right thing in real life.
The only use case identified for exception handling is with
transactional processing (commit on a clean finish, and rollback
on an exception). This is probably just as easy to handle with a
conventional try ... except ... else block, and so the PEP does
not include any support for exception handlers.
Implementation Notes
There is a potential race condition in the code specified as
equivalent to the with statement. For example, if a
KeyboardInterrupt exception is raised between the completion of
the __enter__ method call and the start of the try block, the
__exit__ method will not be called. This can lead to resource
leaks, or to deadlocks. [XXX Guido has stated that he cares about
this sort of race condition, and intends to write some C magic to
handle them. The implementation of the ‘with’ statement should
copy this.]
Open Issues
Should existing classes (for example, file-like objects and locks)
gain appropriate __enter__ and __exit__ methods? The obvious
reason in favour is convenience (no adapter needed). The argument
against is that if built-in files have this but (say) StringIO
does not, then code that uses “with” on a file object can’t be
reused with a StringIO object. So __exit__ = close becomes a part
of the “file-like object” protocol, which user-defined classes may
need to support.
The __enter__ hook may be unnecessary - for many use cases, an
adapter class is needed and in that case, the work done by the
__enter__ hook can just as easily be done in the __init__ hook.
If a way of controlling object lifetimes explicitly was available,
the function of the __exit__ hook could be taken over by the
existing __del__ hook. An email exchange [1] with a proponent of
this approach left one of the authors even more convinced that
it isn’t the right idea…
It has been suggested [2] that the “__exit__” method be called
“close”, or that a “close” method should be considered if no
__exit__ method is found, to increase the “out-of-the-box utility”
of the “with …” construct.
There are some similarities in concept between ‘with …’ blocks
and generators, which have led to proposals that for loops could
implement the with block functionality [3]. While neat on some
levels, we think that for loops should stick to being loops.
Alternative Ideas
IEXEC: Holger Krekel – generalised approach with XML-like syntax
(no URL found…).
Holger has much more far-reaching ideas about “execution monitors”
that are informed about details of control flow in the monitored
block. While interesting, these ideas could change the language
in deep and subtle ways and as such belong to a different PEP.
Any Smalltalk/Ruby anonymous block style extension obviously
subsumes this one.
PEP 319 is in the same area, but did not win support when aired on
python-dev.
Backwards Compatibility
This PEP proposes a new keyword, so the __future__ game will need
to be played.
Cost of Adoption
Those who claim the language is getting larger and more
complicated have something else to complain about. It’s something
else to teach.
For the proposal to be useful, many file-like and lock-like
classes in the standard library and other code will have to have
__exit__ = close
or similar added.
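A minimal sketch of what that might have looked like for a simple
file-like class under this (ultimately rejected) protocol; LogFile is a
made-up example, and no __enter__ is needed because the expansion above
only calls it when present:
class LogFile:
    def __init__(self, path):
        self.f = open(path, "a")
    def write(self, text):
        self.f.write(text)
    def close(self):
        self.f.close()
    # PEP 310 protocol: release the resource when the 'with' block ends
    __exit__ = close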
Cost of Non-Adoption
Writing correct code continues to be more effort than writing
incorrect code.
References
There are various python-list and python-dev discussions that
could be mentioned here.
[1]
Off-list conversation between Michael Hudson and Bill Soudan
(made public with permission)
http://starship.python.net/crew/mwh/pep310/
[2]
Samuele Pedroni on python-dev
https://mail.python.org/pipermail/python-dev/2003-August/037795.html
[3]
Thread on python-dev with subject
[Python-Dev] pre-PEP: Resource-Release Support for Generators
starting at
https://mail.python.org/pipermail/python-dev/2003-August/037803.html
Copyright
This document has been placed in the public domain.
| Rejected | PEP 310 – Reliable Acquisition/Release Pairs | Standards Track | It would be nice to have a less typing-intense way of writing: |
PEP 311 – Simplified Global Interpreter Lock Acquisition for Extensions
Author:
Mark Hammond <mhammond at skippinet.com.au>
Status:
Final
Type:
Standards Track
Created:
05-Feb-2003
Python-Version:
2.3
Post-History:
05-Feb-2003, 14-Feb-2003, 19-Apr-2003
Table of Contents
Abstract
Rationale
Limitations and Exclusions
Proposal
Design and Implementation
Implementation
References
Copyright
Abstract
This PEP proposes a simplified API for access to the Global
Interpreter Lock (GIL) for Python extension modules.
Specifically, it provides a solution for authors of complex
multi-threaded extensions, where the current state of Python
(i.e., the state of the GIL) is unknown.
This PEP proposes a new API, for platforms built with threading
support, to manage the Python thread state. An implementation
strategy is proposed, along with an initial, platform independent
implementation.
Rationale
The current Python interpreter state API is suitable for simple,
single-threaded extensions, but quickly becomes incredibly complex
for non-trivial, multi-threaded extensions.
Currently Python provides two mechanisms for dealing with the GIL:
Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS macros.
These macros are provided primarily to allow a simple Python
extension that already owns the GIL to temporarily release it
while making an “external” (i.e., non-Python), generally
expensive, call. Any existing Python threads that are blocked
waiting for the GIL are then free to run. While this is fine
for extensions making calls from Python into the outside world,
it is no help for extensions that need to make calls into Python
when the thread state is unknown.
PyThreadState and PyInterpreterState APIs.
These API functions allow an extension/embedded application to
acquire the GIL, but suffer from a serious boot-strapping
problem - they require you to know the state of the Python
interpreter and of the GIL before they can be used. One
particular problem is for extension authors that need to deal
with threads never before seen by Python, but need to call
Python from this thread. It is very difficult, delicate and
error prone to author an extension where these “new” threads
always know the exact state of the GIL, and therefore can
reliably interact with this API.
For these reasons, the question of how such extensions should
interact with Python is quickly becoming a FAQ. The main impetus
for this PEP, a thread on python-dev [1], immediately identified
the following projects with this exact issue:
The win32all extensions
Boost
ctypes
Python-GTK bindings
Uno
PyObjC
Mac toolbox
PyXPCOM
Currently, there is no reasonable, portable solution to this
problem, forcing each extension author to implement their own
hand-rolled version. Further, the problem is complex, meaning
many implementations are likely to be incorrect, leading to a
variety of problems that will often manifest simply as “Python has
hung”.
While the biggest problem in the existing thread-state API is the
lack of the ability to query the current state of the lock, it is
felt that a more complete, simplified solution should be offered
to extension authors. Such a solution should encourage authors to
provide error-free, complex extension modules that take full
advantage of Python’s threading mechanisms.
Limitations and Exclusions
This proposal identifies a solution for extension authors with
complex multi-threaded requirements, but that only require a
single “PyInterpreterState”. There is no attempt to cater for
extensions that require multiple interpreter states. At the time
of writing, no extension has been identified that requires
multiple PyInterpreterStates, and indeed it is not clear if that
facility works correctly in Python itself.
This API will not perform automatic initialization of Python, or
initialize Python for multi-threaded operation. Extension authors
must continue to call Py_Initialize(), and for multi-threaded
applications, PyEval_InitThreads(). The reason for this is that
the first thread to call PyEval_InitThreads() is nominated as the
“main thread” by Python, and so forcing the extension author to
specify the main thread (by requiring them to make this first call)
removes ambiguity. As Py_Initialize() must be called before
PyEval_InitThreads(), and as both of these functions currently
support being called multiple times, the burden this places on
extension authors is considered reasonable.
It is intended that this API be all that is necessary to acquire
the Python GIL. Apart from the existing, standard
Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS macros, it is
assumed that no additional thread state API functions will be used
by the extension. Extensions with such complicated requirements
are free to continue to use the existing thread state API.
Proposal
This proposal recommends a new API be added to Python to simplify
the management of the GIL. This API will be available on all
platforms built with WITH_THREAD defined.
The intent is that assuming Python has correctly been initialized,
an extension author be able to use a small, well-defined “prologue
dance”, at any time and on any thread, which will ensure Python
is ready to be used on that thread. After the extension has
finished with Python, it must also perform an “epilogue dance” to
release any resources previously acquired. Ideally, these dances
can be expressed in a single line.
Specifically, the following new APIs are proposed:
/* Ensure that the current thread is ready to call the Python
C API, regardless of the current state of Python, or of its
thread lock. This may be called as many times as desired
by a thread so long as each call is matched with a call to
PyGILState_Release(). In general, other thread-state APIs may
be used between _Ensure() and _Release() calls, so long as the
thread-state is restored to its previous state before the Release().
For example, normal use of the Py_BEGIN_ALLOW_THREADS/
Py_END_ALLOW_THREADS macros are acceptable.
The return value is an opaque "handle" to the thread state when
PyGILState_Acquire() was called, and must be passed to
PyGILState_Release() to ensure Python is left in the same state. Even
though recursive calls are allowed, these handles can *not* be
shared - each unique call to PyGILState_Ensure must save the handle
for its call to PyGILState_Release.
When the function returns, the current thread will hold the GIL.
Failure is a fatal error.
*/
PyAPI_FUNC(PyGILState_STATE) PyGILState_Ensure(void);
/* Release any resources previously acquired. After this call, Python's
state will be the same as it was prior to the corresponding
PyGILState_Acquire call (but generally this state will be unknown to
the caller, hence the use of the GILState API.)
Every call to PyGILState_Ensure must be matched by a call to
PyGILState_Release on the same thread.
*/
PyAPI_FUNC(void) PyGILState_Release(PyGILState_STATE);
Common usage will be:
void SomeCFunction(void)
{
    /* ensure we hold the lock */
    PyGILState_STATE state = PyGILState_Ensure();

    /* Use the Python API */
    ...

    /* Restore the state of Python */
    PyGILState_Release(state);
}
Design and Implementation
The general operation of PyGILState_Ensure() will be:
assert Python is initialized.
Get a PyThreadState for the current thread, creating and saving
if necessary.
remember the current state of the lock (owned/not owned)
If the current state does not own the GIL, acquire it.
Increment a counter for how many calls to PyGILState_Ensure have been
made on the current thread.
return
The general operation of PyGILState_Release() will be:
assert our thread currently holds the lock.
If old state indicates lock was previously unlocked, release GIL.
Decrement the PyGILState_Ensure counter for the thread.
If counter == 0:
release and delete the PyThreadState.
forget the ThreadState as being owned by the thread.
return
It is assumed that it is an error if two discrete PyThreadStates
are used for a single thread. Comments in pystate.h (“State
unique per thread”) support this view, although it is never
directly stated. Thus, this will require some implementation of
Thread Local Storage. Fortunately, a platform independent
implementation of Thread Local Storage already exists in the
Python source tree, in the SGI threading port. This code will be
integrated into the platform independent Python core, but in such
a way that platforms can provide a more optimal implementation if
desired.
Implementation
An implementation of this proposal can be found at
https://bugs.python.org/issue684256
References
[1]
David Abrahams, Extension modules, Threading, and the GIL
https://mail.python.org/pipermail/python-dev/2002-December/031424.html
Copyright
This document has been placed in the public domain.
| Final | PEP 311 – Simplified Global Interpreter Lock Acquisition for Extensions | Standards Track | This PEP proposes a simplified API for access to the Global
Interpreter Lock (GIL) for Python extension modules.
Specifically, it provides a solution for authors of complex
multi-threaded extensions, where the current state of Python
(i.e., the state of the GIL) is unknown. |
PEP 312 – Simple Implicit Lambda
Author:
Roman Suzi <rnd at onego.ru>, Alex Martelli <aleaxit at gmail.com>
Status:
Deferred
Type:
Standards Track
Created:
11-Feb-2003
Python-Version:
2.4
Post-History:
Table of Contents
Abstract
Deferral
Motivation
Rationale
Syntax
Examples of Use
Implementation
Discussion
Credits
References
Copyright
Abstract
This PEP proposes to make argumentless lambda keyword optional in
some cases where it is not grammatically ambiguous.
Deferral
The BDFL hates the unary colon syntax. This PEP needs to go back
to the drawing board and find a more Pythonic syntax (perhaps an
alternative unary operator). See python-dev discussion on
17 June 2005 [1].
Also, it is probably a good idea to eliminate the alternative
propositions which have no chance at all. The examples section
is good and highlights the readability improvements. It would
carry more weight with additional examples and with real-world
referents (instead of the abstracted dummy calls to :A and :B).
Motivation
Lambdas are useful for defining anonymous functions, e.g. for use
as callbacks or (pseudo)-lazy evaluation schemes. Often, lambdas
are not used when they would be appropriate, just because the
keyword “lambda” makes code look complex. Omitting lambda in some
special cases is possible, with small and backwards compatible
changes to the grammar, and provides a cheap cure against such
“lambdaphobia”.
Rationale
Sometimes people do not use lambdas because they fear to introduce
a term with a theory behind it. This proposal makes introducing
argumentless lambdas easier, by omitting the “lambda” keyword
itself. Implementation can be done by simply changing the grammar so
that it lets the “lambda” keyword be implied in a few well-known cases.
In particular, adding surrounding brackets lets you specify
nullary lambda anywhere.
Syntax
An argumentless “lambda” keyword can be omitted in the following
cases:
immediately after “=” in named parameter assignment or default
value assignment;
immediately after “(” in any expression;
immediately after a “,” in a function argument list;
immediately after a “:” in a dictionary literal; (not
implemented)
in an assignment statement; (not implemented)
Examples of Use
Inline if:
def ifelse(cond, true_part, false_part):
    if cond:
        return true_part()
    else:
        return false_part()

# old syntax:
print ifelse(a < b, lambda:A, lambda:B)

# new syntax:
print ifelse(a < b, :A, :B)

# parts A and B may require extensive processing, as in:
print ifelse(a < b, :ext_proc1(A), :ext_proc2(B))
Locking:
def with(alock, acallable):
    alock.acquire()
    try:
        acallable()
    finally:
        alock.release()

with(mylock, :x(y(), 23, z(), 'foo'))
Implementation
Implementation requires some tweaking of the Grammar/Grammar file
in the Python sources, and some adjustment of
Modules/parsermodule.c to make syntactic and pragmatic changes.
(Some grammar/parser guru is needed to make a full
implementation.)
Here are the changes needed to Grammar to allow implicit lambda:
varargslist: (fpdef ['=' imptest] ',')* ('*' NAME [',' '**' NAME] |
             '**' NAME) | fpdef ['=' imptest] (',' fpdef ['=' imptest])* [',']

imptest: test | implambdef

atom: '(' [imptestlist] ')' | '[' [listmaker] ']' |
      '{' [dictmaker] '}' | '`' testlist1 '`' | NAME | NUMBER | STRING+

implambdef: ':' test

imptestlist: imptest (',' imptest)* [',']

argument: [test '='] imptest
Three new non-terminals are needed: imptest for the place where
implicit lambda may occur, implambdef for the implicit lambda
definition itself, imptestlist for a place where imptest’s may
occur.
This implementation is not complete. First, because some files in
Parser module need to be updated. Second, some additional places
aren’t implemented, see Syntax section above.
Discussion
This feature is not a high-visibility one (the only novel part is
the absence of lambda). The feature is intended to make null-ary
lambdas more appealing syntactically, to provide lazy evaluation
of expressions in some simple cases. This proposal is not targeted
at more advanced cases (demanding arguments for the lambda).
There is an alternative proposition for implicit lambda: implicit
lambda with unused arguments. In this case the function defined by
such lambda can accept any parameters, i.e. be equivalent to:
lambda *args: expr. This form would be more powerful. Grep in the
standard library revealed that such lambdas are indeed in use.
One more extension can provide a way to have a list of parameters
passed to a function defined by implicit lambda. However, such
parameters need some special name to be accessed and are unlikely
to be included in the language. Possible local names for such
parameters are: _, __args__, __. For example:
reduce(:_[0] + _[1], [1,2,3], 0)
reduce(:__[0] + __[1], [1,2,3], 0)
reduce(:__args__[0] + __args__[1], [1,2,3], 0)
These forms do not look very nice, and in the PEP author’s opinion
do not justify the removal of the lambda keyword in such cases.
Credits
The idea of dropping lambda was first coined by Paul Rubin at 08
Feb 2003 16:39:30 -0800 in comp.lang.python while discussing the
thread “For review: PEP 308 - If-then-else expression” [2].
References
[1]
Guido van Rossum, Recommend accepting PEP 312 – Simple Implicit Lambda
https://mail.python.org/pipermail/python-dev/2005-June/054304.html
[2]
Guido van Rossum, For review: PEP 308 - If-then-else expression
https://mail.python.org/pipermail/python-dev/2003-February/033178.html
Copyright
This document has been placed in the public domain.
| Deferred | PEP 312 – Simple Implicit Lambda | Standards Track | This PEP proposes to make argumentless lambda keyword optional in
some cases where it is not grammatically ambiguous. |
PEP 313 – Adding Roman Numeral Literals to Python
Author:
Mike Meyer <mwm at mired.org>
Status:
Rejected
Type:
Standards Track
Created:
01-Apr-2003
Python-Version:
2.4
Post-History:
Table of Contents
Abstract
BDFL Pronouncement
Rationale
Syntax for Roman literals
Built-In “roman” Function
Compatibility Issues
Copyright
Abstract
This PEP (also known as PEP CCCXIII) proposes adding Roman
numerals as a literal type. It also proposes the new built-in
function “roman”, which converts an object to an integer, then
converts the integer to a string that is the Roman numeral literal
equivalent to the integer.
BDFL Pronouncement
This PEP is rejected. While the majority of Python users deemed this
to be a nice-to-have feature, the community was unable to reach a
consensus on whether nine should be represented as IX, the modern
form, or VIIII, the classic form. Likewise, no agreement was
reached on whether MXM or MCMXC would be considered a well-formed
representation of 1990. A vocal minority of users has also requested
support for lower-cased numerals for use in (i) powerpoint slides,
(ii) academic work, and (iii) Perl documentation.
Rationale
Roman numerals are used in a number of areas, and adding them to
Python as literals would make computations in those areas easier.
For instance, Super Bowls are counted with Roman numerals, and many
older movies have copyright dates in Roman numerals. Further,
LISP provides a Roman numerals literal package, so adding Roman
numerals to Python will help ease the LISP-envy sometimes seen in
comp.lang.python. Besides, the author thinks this is the easiest
way to get his name on a PEP.
Syntax for Roman literals
Roman numeral literals will consist of the characters M, D, C, L,
X, V and I, and only those characters. They must be in upper
case, and represent an integer with the following rules:
Except as noted below, they must appear in the order M, D, C,
L, X, V then I. Each occurrence of each character adds 1000, 500,
100, 50, 10, 5 and 1 to the value of the literal, respectively.
Only one D, V or L may appear in any given literal.
At most three each of Is, Xs and Cs may appear consecutively
in any given literal.
A single I may appear immediately to the left of the single V,
followed by no Is, and adds 4 to the value of the literal.
A single I may likewise appear before the last X, followed by
no Is or Vs, and adds 9 to the value.
X is to L and C as I is to V and X, except the values are 40
and 90, respectively.
C is to D and M as I is to V and X, except the values are 400
and 900, respectively.
Any literal composed entirely of M, D, C, L, X, V and I characters
that does not follow this format will raise a syntax error,
because explicit is better than implicit.
Built-In “roman” Function
The new built-in function “roman” will aid the translation from
integers to Roman numeral literals. It will accept a single
object as an argument, and return a string containing the literal
of the same value. If the argument is not an integer or a
rational (see PEP 239) it will be passed through the existing
built-in “int” to obtain the value. This may cause a loss of
information if the object was a float. If the object is a
rational, then the result will be formatted as a rational literal
(see PEP 240) with the integers in the string being Roman
numeral literals.
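Since the PEP was rejected, no such built-in exists; a rough sketch of
the integer-to-numeral conversion (using the modern subtractive forms
such as IX and CM, one of the very points the community could not agree
on, and ignoring the rational-literal case):
def roman(obj):
    n = int(obj)
    pairs = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'),
             (100, 'C'), (90, 'XC'), (50, 'L'), (40, 'XL'),
             (10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')]
    result = []
    for value, numeral in pairs:
        while n >= value:
            result.append(numeral)
            n -= value
    return ''.join(result)

print roman(1990)    # MCMXC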
Compatibility Issues
No new keywords are introduced by this proposal. Programs that
use variable names that are all upper case and contain only the
characters M, D, C, L, X, V and I will be affected by the new
literals. These programs will now have syntax errors when those
variables are assigned, and either syntax errors or subtle bugs
when those variables are referenced in expressions. Since such
variable names violate PEP 8, the code is already broken, it
just wasn’t generating exceptions. This proposal corrects that
oversight in the language.
Copyright
This document has been placed in the public domain.
| Rejected | PEP 313 – Adding Roman Numeral Literals to Python | Standards Track | This PEP (also known as PEP CCCXIII) proposes adding Roman
numerals as a literal type. It also proposes the new built-in
function “roman”, which converts an object to an integer, then
converts the integer to a string that is the Roman numeral literal
equivalent to the integer. |
PEP 315 – Enhanced While Loop
Author:
Raymond Hettinger <python at rcn.com>, W Isaac Carroll <icarroll at pobox.com>
Status:
Rejected
Type:
Standards Track
Created:
25-Apr-2003
Python-Version:
2.5
Post-History:
Table of Contents
Abstract
Notice
Motivation
Syntax
Semantics of break and continue
Future Statement
Implementation
References
Copyright
Abstract
This PEP proposes adding an optional “do” clause to the beginning
of the while loop to make loop code clearer and reduce errors
caused by code duplication.
Notice
Rejected; see [1].
This PEP has been deferred since 2006; see [2].
Subsequent efforts to revive the PEP in April 2009 did not
meet with success because no syntax emerged that could
compete with the following form:
while True:
    <setup code>
    if not <condition>:
        break
    <loop body>
A syntax alternative to the one proposed in the PEP was found for
a basic do-while loop but it gained little support because the
condition was at the top:
do ... while <cond>:
    <loop body>
Users of the language are advised to use the while-True form with
an inner if-break when a do-while loop would have been appropriate.
Motivation
It is often necessary for some code to be executed before each
evaluation of the while loop condition. This code is often
duplicated outside the loop, as setup code that executes once
before entering the loop:
<setup code>
while <condition>:
    <loop body>
    <setup code>
The problem is that duplicated code can be a source of errors if
one instance is changed but the other is not. Also, the purpose
of the second instance of the setup code is not clear because it
comes at the end of the loop.
It is possible to prevent code duplication by moving the loop
condition into a helper function, or an if statement in the loop
body. However, separating the loop condition from the while
keyword makes the behavior of the loop less clear:
def helper(args):
    <setup code>
    return <condition>

while helper(args):
    <loop body>
This last form has the additional drawback of requiring the loop’s
else clause to be added to the body of the if statement, further
obscuring the loop’s behavior:
while True:
    <setup code>
    if not <condition>: break
    <loop body>
This PEP proposes to solve these problems by adding an optional
clause to the while loop, which allows the setup code to be
expressed in a natural way:
do:
    <setup code>
while <condition>:
    <loop body>
This keeps the loop condition with the while keyword where it
belongs, and does not require code to be duplicated.
Syntax
The syntax of the while statement
while_stmt : "while" expression ":" suite
             ["else" ":" suite]
is extended as follows:
while_stmt : ["do" ":" suite]
             "while" expression ":" suite
             ["else" ":" suite]
Semantics of break and continue
In the do-while loop the break statement will behave the same as
in the standard while loop: It will immediately terminate the loop
without evaluating the loop condition or executing the else
clause.
A continue statement in the do-while loop jumps to the while
condition check.
In general, when the while suite is empty (a pass statement),
the do-while loop and break and continue statements should match
the semantics of do-while in other languages.
Likewise, when the do suite is empty, the do-while loop and
break and continue statements should match behavior found
in regular while loops.
Future Statement
Because of the new keyword “do”, the statement
from __future__ import do_while
will initially be required to use the do-while form.
Implementation
The first implementation of this PEP can compile the do-while loop
as an infinite loop with a test that exits the loop.
References
[1]
Guido van Rossum, PEP 315: do-while
https://mail.python.org/pipermail/python-ideas/2013-June/021610.html
[2]
Raymond Hettinger, release plan for 2.5 ?
https://mail.python.org/pipermail/python-dev/2006-February/060718.html
Copyright
This document is placed in the public domain.
| Rejected | PEP 315 – Enhanced While Loop | Standards Track | This PEP proposes adding an optional “do” clause to the beginning
of the while loop to make loop code clearer and reduce errors
caused by code duplication. |
PEP 316 – Programming by Contract for Python
Author:
Terence Way <terry at wayforward.net>
Status:
Deferred
Type:
Standards Track
Created:
02-May-2003
Post-History:
Table of Contents
Abstract
Motivation
Specification
Exceptions
Inheritance
Rationale
Reference Implementation
References
Copyright
Abstract
This submission describes programming by contract for Python.
Eiffel’s Design By Contract(tm) is perhaps the most popular use of
programming contracts [2].
Programming contracts extend the language to include invariant
expressions for classes and modules, and pre- and post-condition
expressions for functions and methods.
These expressions (contracts) are similar to assertions: they must be
true or the program is stopped, and run-time checking of the contracts
is typically only enabled while debugging. Contracts are higher-level
than straight assertions and are typically included in documentation.
Motivation
Python already has assertions, why add extra stuff to the language to
support something like contracts? The two best reasons are 1) better,
more accurate documentation, and 2) easier testing.
Complex modules and classes never seem to be documented quite right.
The documentation provided may be enough to convince a programmer to
use a particular module or class over another, but the programmer
almost always has to read the source code when the real debugging
starts.
Contracts extend the excellent example provided by the doctest
module [4]. Documentation is readable by programmers, yet has
executable tests embedded in it.
Testing code with contracts is easier too. Comprehensive contracts
are equivalent to unit tests [8]. Tests exercise the full range of
pre-conditions, and fail if the post-conditions are triggered.
Theoretically, a correctly specified function can be tested completely
randomly.
So why add this to the language? Why not have several different
implementations, or let programmers implement their own assertions?
The answer is the behavior of contracts under inheritance.
Suppose Alice and Bob use different assertions packages. If Alice
produces a class library protected by assertions, Bob cannot derive
classes from Alice’s library and expect proper checking of
post-conditions and invariants. If they both use the same assertions
package, then Bob can override Alice’s methods yet still test against
Alice’s contract assertions. The natural place to find this
assertions system is in the language’s run-time library.
Specification
The docstring of any module or class can include invariant contracts
marked off with a line that starts with the keyword inv followed
by a colon (:). Whitespace at the start of the line and around the
colon is ignored. The colon is either immediately followed by a
single expression on the same line, or by a series of expressions on
following lines indented past the inv keyword. The normal Python
rules about implicit and explicit line continuations are followed
here. Any number of invariant contracts can be in a docstring.
Some examples:
# state enumeration
START, CONNECTING, CONNECTED, CLOSING, CLOSED = range(5)

class conn:
    """A network connection

    inv: self.state in [START, CLOSED,       # closed states
                        CONNECTING, CLOSING, # transition states
                        CONNECTED]

    inv: 0 <= self.seqno < 256
    """

class circbuf:
    """A circular buffer.

    inv:
        # there can be from 0 to max items on the buffer
        0 <= self.len <= len(self.buf)

        # g is a valid index into buf
        0 <= self.g < len(self.buf)

        # p is also a valid index into buf
        0 <= self.p < len(self.buf)

        # there are len items between get and put
        (self.p - self.g) % len(self.buf) == \
            self.len % len(self.buf)
    """
Module invariants must be true after the module is loaded, and at the
entry and exit of every public function within the module.
Class invariants must be true after the __init__ function returns,
at the entry of the __del__ function, and at the entry and exit of
every other public method of the class. Class invariants must use the
self variable to access instance variables.
A method or function is public if its name doesn’t start with an
underscore (_), unless it starts and ends with ‘__’ (two underscores).
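Tying the rules above together, a module-level contract would look much
the same; a minimal sketch (assuming module invariants simply name
module-level variables, since only class invariants are required to go
through self):
"""A simple counter module.

inv: count >= 0
"""

count = 0

def increment():
    """Add one to the counter."""
    global count
    count += 1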
The docstring of any function or method can have pre-conditions
documented with the keyword pre following the same rules above.
Post-conditions are documented with the keyword post optionally
followed by a list of variables. The variables are in the same scope
as the body of the function or method. This list declares the
variables that the function/method is allowed to modify.
An example:
class circbuf:

    def __init__(self, leng):
        """Construct an empty circular buffer.

        pre: leng > 0
        post[self]:
            self.is_empty()
            len(self.buf) == leng
        """
A double-colon (::) can be used instead of a single colon (:) to
support docstrings written using reStructuredText [7]. For
example, the following two docstrings describe the same contract:
"""pre: leng > 0"""
"""pre:: leng > 0"""
Expressions in pre- and post-conditions are defined in the module
namespace – they have access to nearly all the variables that the
function can access, except closure variables.
The contract expressions in post-conditions have access to two
additional variables: __old__ which is filled with shallow copies
of values declared in the variable list immediately following the post
keyword, and __return__ which is bound to the return value of the
function or method.
An example:
class circbuf:

    def get(self):
        """Pull an entry from a non-empty circular buffer.

        pre: not self.is_empty()
        post[self.g, self.len]:
            __return__ == self.buf[__old__.self.g]
            self.len == __old__.self.len - 1
        """
All contract expressions have access to some additional convenience
functions. To make evaluating the truth of sequences easier, two
functions forall and exists are defined as:
def forall(a, fn = bool):
    """Return True only if all elements in a are true.

    >>> forall([])
    1
    >>> even = lambda x: x % 2 == 0
    >>> forall([2, 4, 6, 8], even)
    1
    >>> forall('this is a test'.split(), lambda x: len(x) == 4)
    0
    """

def exists(a, fn = bool):
    """Returns True if there is at least one true value in a.

    >>> exists([])
    0
    >>> exists('this is a test'.split(), lambda x: len(x) == 4)
    1
    """
An example:
def sort(a):
    """Sort a list.

    pre: isinstance(a, type(list))
    post[a]:
        # array size is unchanged
        len(a) == len(__old__.a)

        # array is ordered
        forall([a[i] >= a[i-1] for i in range(1, len(a))])

        # all the old elements are still in the array
        forall(__old__.a, lambda e: __old__.a.count(e) == a.count(e))
    """
To make evaluating conditions easier, the function implies is
defined. With two arguments, this is similar to the logical implies
(=>) operator. With three arguments, this is similar to C’s
conditional expression (x?a:b). This is defined as:
implies(False, a) => True
implies(True, a) => a
implies(False, a, b) => b
implies(True, a, b) => a
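A direct, non-lazy rendering of those rules in Python might be (note
that, unlike C's conditional expression, both branches are evaluated
before the call):
def implies(cond, a, b=True):
    # implies(False, a)    -> True
    # implies(True, a)     -> a
    # implies(False, a, b) -> b
    # implies(True, a, b)  -> a
    if cond:
        return a
    return b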
On entry to a function, the function’s pre-conditions are checked. An
assertion error is raised if any pre-condition is false. If the
function is public, then the class or module’s invariants are also
checked. Copies of variables declared in the post are saved, the
function is called, and if the function exits without raising an
exception, the post-conditions are checked.
Exceptions
Class/module invariants are checked even if a function or method exits
by signalling an exception (post-conditions are not).
All failed contracts raise exceptions which are subclasses of the
ContractViolationError exception, which is in turn a subclass of the
AssertionError exception. Failed pre-conditions raise a
PreconditionViolationError exception. Failed post-conditions raise
a PostconditionViolationError exception, and failed invariants raise
an InvariantViolationError exception.
The class hierarchy:
AssertionError
ContractViolationError
PreconditionViolationError
PostconditionViolationError
InvariantViolationError
InvalidPreconditionError
The InvalidPreconditionError is raised when pre-conditions are
illegally strengthened, see the next section on Inheritance.
Example:
try:
some_func()
except contract.PreconditionViolationError:
# failed pre-condition, ok
pass
Inheritance
A class’s invariants include all the invariants for all super-classes
(class invariants are ANDed with super-class invariants). These
invariants are checked in method-resolution order.
A method’s post-conditions also include all overridden post-conditions
(method post-conditions are ANDed with all overridden method
post-conditions).
An overridden method’s pre-conditions can be ignored if the overriding
method’s pre-conditions are met. However, if the overriding method’s
pre-conditions fail, all of the overridden method’s pre-conditions
must also fail. If not, a separate exception is raised, the
InvalidPreconditionError. This supports weakening pre-conditions.
A somewhat contrived example:
class SimpleMailClient:
def send(self, msg, dest):
"""Sends a message to a destination:
pre: self.is_open() # we must have an open connection
"""
def recv(self):
"""Gets the next unread mail message.
Returns None if no message is available.
pre: self.is_open() # we must have an open connection
post: __return__ is None or isinstance(__return__, Message)
"""
class ComplexMailClient(SimpleMailClient):
def send(self, msg, dest):
"""Sends a message to a destination.
The message is sent immediately if currently connected.
Otherwise, the message is queued locally until a
connection is made.
pre: True # weakens the pre-condition from SimpleMailClient
"""
def recv(self):
"""Gets the next unread mail message.
Waits until a message is available.
pre: True # can always be called
post: isinstance(__return__, Message)
"""
Because pre-conditions can only be weakened, a ComplexMailClient can
replace a SimpleMailClient with no fear of breaking existing code.
Rationale
Except for the following differences, programming-by-contract for
Python mirrors the Eiffel DBC specification [3].
Embedding contracts in docstrings is patterned after the doctest
module. It removes the need for extra syntax, ensures that programs
with contracts are backwards-compatible, and no further work is
necessary to have the contracts included in the docs.
The keywords pre, post, and inv were chosen instead of the
Eiffel-style REQUIRE, ENSURE, and INVARIANT because
they’re shorter, more in line with mathematical notation, and for a
more subtle reason: the word ‘require’ implies caller
responsibilities, while ‘ensure’ implies provider guarantees. Yet
pre-conditions can fail through no fault of the caller when using
multiple inheritance, and post-conditions can fail through no fault of
the function when using multiple threads.
Loop invariants as used in Eiffel are unsupported. They’re a pain to
implement, and not part of the documentation anyway.
The variable names __old__ and __return__ were picked to avoid
conflicts with the return keyword and to stay consistent with
Python naming conventions: they’re public and provided by the Python
implementation.
Having variable declarations after a post keyword describes exactly
what the function or method is allowed to modify. This removes the
need for the NoChange syntax in Eiffel, and makes the
implementation of __old__ much easier. It also is more in line
with Z schemas [9], which are divided into two parts: declaring what
changes followed by limiting the changes.
Shallow copies of variables for the __old__ value prevent an
implementation of contract programming from slowing down a system too
much. If a function changes values that wouldn’t be caught by a
shallow copy, it can declare the changes like so:
post[self, self.obj, self.obj.p]
The forall, exists, and implies functions were added after
spending some time documenting existing functions with contracts.
These capture a majority of common specification idioms. It might
seem that defining implies as a function might not work (the
arguments are evaluated whether needed or not, in contrast with other
boolean operators), but it works for contracts since there should be
no side-effects for any expression in a contract.
Reference Implementation
A reference implementation is available [1]. It replaces existing
functions with new functions that do contract checking, by directly
changing the class’ or module’s namespace.
Other implementations exist that either hack __getattr__ [5]
or use __metaclass__ [6].
References
[1]
Implementation described in this document.
(http://www.wayforward.net/pycontract/)
[2]
Design By Contract is a registered trademark of Eiffel
Software Inc.
(http://archive.eiffel.com/doc/manuals/technology/contract/)
[3]
Object-oriented Software Construction, Bertrand Meyer,
ISBN 0-13-629031-0
[4]
http://docs.python.org/library/doctest.html
doctest – Test docstrings represent reality
[5]
Design by Contract for Python, R. Plosch
IEEE Proceedings of the Joint Asia Pacific Software Engineering
Conference (APSEC97/ICSC97), Hong Kong, December 2-5, 1997
(http://www.swe.uni-linz.ac.at/publications/abstract/TR-SE-97.24.html)
[6]
PyDBC – Design by Contract for Python 2.2+,
Daniel Arbuckle
(http://www.nongnu.org/pydbc/)
[7]
ReStructuredText (http://docutils.sourceforge.net/rst.html)
[8]
Extreme Programming Explained, Kent Beck,
ISBN 0-201-61641-6
[9]
The Z Notation, Second Edition, J.M. Spivey
ISBN 0-13-978529-9
Copyright
This document has been placed in the public domain.
| Deferred | PEP 316 – Programming by Contract for Python | Standards Track | This submission describes programming by contract for Python.
Eiffel’s Design By Contract(tm) is perhaps the most popular use of
programming contracts [2]. |
PEP 317 – Eliminate Implicit Exception Instantiation
Author:
Steven Taschuk <staschuk at telusplanet.net>
Status:
Rejected
Type:
Standards Track
Created:
06-May-2003
Python-Version:
2.4
Post-History:
09-Jun-2003
Table of Contents
Abstract
Motivation
String Exceptions
Implicit Instantiation
Specification
Backwards Compatibility
Migration Plan
Future Statement
Warnings
Examples
Code Using Implicit Instantiation
Code Using String Exceptions
Code Supplying a Traceback Object
A Failure of the Plan
Rejection
Summary of Discussion
New-Style Exceptions
Ugliness of Explicit Instantiation
Performance Penalty of Warnings
Traceback Argument
References
Copyright
Abstract
“For clarity in new code, the form raise class(argument, ...)
is recommended (i.e. make an explicit call to the constructor).”
—Guido van Rossum, in 1997 [1]
This PEP proposes the formal deprecation and eventual elimination of
forms of the raise statement which implicitly instantiate an
exception. For example, statements such as
raise HullBreachError
raise KitchenError, 'all out of baked beans'
must under this proposal be replaced with their synonyms
raise HullBreachError()
raise KitchenError('all out of baked beans')
Note that these latter statements are already legal, and that this PEP
does not change their meaning.
Eliminating these forms of raise makes it impossible to use string
exceptions; accordingly, this PEP also proposes the formal deprecation
and eventual elimination of string exceptions.
Adoption of this proposal breaks backwards compatibility. Under the
proposed implementation schedule, Python 2.4 will introduce warnings
about uses of raise which will eventually become incorrect, and
Python 3.0 will eliminate them entirely. (It is assumed that this
transition period – 2.4 to 3.0 – will be at least one year long, to
comply with the guidelines of PEP 5.)
Motivation
String Exceptions
It is assumed that removing string exceptions will be uncontroversial,
since it has been intended since at least Python 1.5, when the
standard exception types were changed to classes [1].
For the record: string exceptions should be removed because the
presence of two kinds of exception complicates the language without
any compensation. Instance exceptions are superior because, for
example,
the class-instance relationship more naturally expresses the
relationship between the exception type and value,
they can be organized naturally using superclass-subclass
relationships, and
they can encapsulate error-reporting behaviour (for example).
Implicit Instantiation
Guido’s 1997 essay [1] on changing the standard exceptions into
classes makes clear why raise can instantiate implicitly:
“The raise statement has been extended to allow raising a class
exception without explicit instantiation. The following forms,
called the “compatibility forms” of the raise statement […] The
motivation for introducing the compatibility forms was to allow
backward compatibility with old code that raised a standard
exception.”
For example, it was desired that pre-1.5 code which used string
exception syntax such as
raise TypeError, 'not an int'
would work both on versions of Python in which TypeError was a
string, and on versions in which it was a class.
When no such consideration obtains – that is, when the desired
exception type is not a string in any version of the software which
the code must support – there is no good reason to instantiate
implicitly, and it is clearer not to. For example:
In the code
try:
raise MyError, raised
except MyError, caught:
pass
the syntactic parallel between the raise and except
statements strongly suggests that raised and caught refer
to the same object. For string exceptions this actually is the
case, but for instance exceptions it is not.
When instantiation is implicit, it is not obvious when it occurs,
for example, whether it occurs when the exception is raised or when
it is caught. Since it actually happens at the raise, the code
should say so.
(Note that at the level of the C API, an exception can be “raised”
and “caught” without being instantiated; this is used as an
optimization by, for example, PyIter_Next. But in Python, no
such optimization is or should be available.)
An implicitly instantiating raise statement with no arguments,
such as
raise MyError
simply does not do what it says: it does not raise the named
object.
The equivalence of
raise MyError
raise MyError()
conflates classes and instances, creating a possible source of
confusion for beginners. (Moreover, it is not clear that the
interpreter could distinguish between a new-style class and an
instance of such a class, so implicit instantiation may be an
obstacle to any future plan to let exceptions be new-style
objects.)
In short, implicit instantiation has no advantages other than
backwards compatibility, and so should be phased out along with what
it exists to ensure compatibility with, namely, string exceptions.
Specification
The syntax of raise_stmt [3] is to be changed from
raise_stmt ::= "raise" [expression ["," expression ["," expression]]]
to
raise_stmt ::= "raise" [expression ["," expression]]
If no expressions are present, the raise statement behaves as it
does presently: it re-raises the last exception that was active in the
current scope, and if no exception has been active in the current
scope, a TypeError is raised indicating that this is the problem.
Otherwise, the first expression is evaluated, producing the raised
object. Then the second expression is evaluated, if present,
producing the substituted traceback. If no second expression is
present, the substituted traceback is None.
The raised object must be an instance. The class of the instance is
the exception type, and the instance itself is the exception value.
If the raised object is not an instance – for example, if it is a
class or string – a TypeError is raised.
If the substituted traceback is not None, it must be a traceback
object, and it is substituted instead of the current location as the
place where the exception occurred. If it is neither a traceback
object nor None, a TypeError is raised.
Backwards Compatibility
Migration Plan
Future Statement
Under the PEP 236 future statement:
from __future__ import raise_with_two_args
the syntax and semantics of the raise statement will be as
described above. This future feature is to appear in Python 2.4; its
effect is to become standard in Python 3.0.
As the examples below illustrate, this future statement is only needed
for code which uses the substituted traceback argument to raise;
simple exception raising does not require it.
Warnings
Three new warnings, all of category DeprecationWarning, are
to be issued to point out uses of raise which will become
incorrect under the proposed changes.
The first warning is issued when a raise statement is executed in
which the first expression evaluates to a string. The message for
this warning is:
raising strings will be impossible in the future
The second warning is issued when a raise statement is executed in
which the first expression evaluates to a class. The message for this
warning is:
raising classes will be impossible in the future
The third warning is issued when a raise statement with three
expressions is compiled. (Not, note, when it is executed; this is
important because the SyntaxError which this warning presages will
occur at compile-time.) The message for this warning is:
raising with three arguments will be impossible in the future
These warnings are to appear in Python 2.4, and disappear in Python
3.0, when the conditions which cause them are simply errors.
Examples
Code Using Implicit Instantiation
Code such as
class MyError(Exception):
pass
raise MyError, 'spam'
will issue a warning when the raise statement is executed. The
raise statement should be changed to instantiate explicitly:
raise MyError('spam')
Code Using String Exceptions
Code such as
MyError = 'spam'
raise MyError, 'eggs'
will issue a warning when the raise statement is executed. The
exception type should be changed to a class:
class MyError(Exception):
pass
and, as in the previous example, the raise statement should be
changed to instantiate explicitly
raise MyError('eggs')
Code Supplying a Traceback Object
Code such as
raise MyError, 'spam', mytraceback
will issue a warning when compiled. The statement should be changed
to
raise MyError('spam'), mytraceback
and the future statement
from __future__ import raise_with_two_args
should be added at the top of the module. Note that adding this
future statement also turns the other two warnings into errors, so the
changes described in the previous examples must also be applied.
The special case
raise sys.exc_type, sys.exc_value, sys.exc_traceback
(which is intended to re-raise a previous exception) should be changed
simply to
raise
A Failure of the Plan
It may occur that a raise statement which raises a string or
implicitly instantiates is not executed in production or testing
during the phase-in period for this PEP. In that case, it will not
issue any warnings, but will instead suddenly fail one day in Python
3.0 or a subsequent version. (The failure is that the wrong exception
gets raised, namely a TypeError complaining about the arguments to
raise, instead of the exception intended.)
Such cases can be made rarer by prolonging the phase-in period; they
cannot be made impossible short of issuing at compile-time a warning
for every raise statement.
Rejection
If this PEP were accepted, nearly all existing Python code would need
to be reviewed and probably revised; even if all the above arguments
in favour of explicit instantiation are accepted, the improvement in
clarity is too minor to justify the cost of doing the revision and the
risk of new bugs introduced thereby.
This proposal has therefore been rejected [6].
Note that string exceptions are slated for removal independently of
this proposal; what is rejected is the removal of implicit exception
instantiation.
Summary of Discussion
A small minority of respondents were in favour of the proposal, but
the dominant response was that any such migration would be costly
out of proportion to the putative benefit. As noted above, this
point is sufficient in itself to reject the PEP.
New-Style Exceptions
Implicit instantiation might conflict with future plans to allow
instances of new-style classes to be used as exceptions. In order to
decide whether to instantiate implicitly, the raise machinery must
determine whether the first argument is a class or an instance – but
with new-style classes there is no clear and strong distinction.
Under this proposal, the problem would be avoided because the
exception would already have been instantiated. However, there are
two plausible alternative solutions:
Require exception types to be subclasses of Exception, and
instantiate implicitly if and only if
issubclass(firstarg, Exception)
Instantiate implicitly if and only if
isinstance(firstarg, type)
Thus eliminating implicit instantiation entirely is not necessary to
solve this problem.
Ugliness of Explicit Instantiation
Some respondents felt that the explicitly instantiating syntax is
uglier, especially in cases when no arguments are supplied to the
exception constructor:
raise TypeError()
The problem is particularly acute when the exception instance itself
is not of interest, that is, when the only relevant point is the
exception type:
try:
# ... deeply nested search loop ...
raise Found
except Found:
# ...
In such cases the symmetry between raise and except can be
more expressive of the intent of the code.
Guido opined that the implicitly instantiating syntax is “a tad
prettier” even for cases with a single argument, since it has less
punctuation.
Performance Penalty of Warnings
Experience with deprecating apply() shows that use of the warning
framework can incur a significant performance penalty.
Code which instantiates explicitly would not be affected, since the
run-time checks necessary to determine whether to issue a warning are
exactly those which are needed to determine whether to instantiate
implicitly in the first place. That is, such statements are already
incurring the cost of these checks.
Code which instantiates implicitly would incur a large cost: timing
trials indicate that issuing a warning (whether it is suppressed or
not) takes about five times more time than simply instantiating,
raising, and catching an exception.
This penalty is mitigated by the fact that raise statements are
rarely on performance-critical execution paths.
Traceback Argument
As the proposal stands, it would be impossible to use the traceback
argument to raise conveniently with all 2.x versions of Python.
For compatibility with versions < 2.4, the three-argument form must be
used; but this form would produce warnings with versions >= 2.4.
Those warnings could be suppressed, but doing so is awkward because
the relevant type of warning is issued at compile-time.
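For illustration, such warnings can in principle be silenced with the standard
warnings machinery, but a filter only helps if it is installed before the
offending raise is compiled or executed — for the compile-time warning, that
means before the module using the three-argument form is imported (or via the
-W command-line option). A sketch, not part of the proposal:
import warnings

# Must run before importing any module that still uses the old raise forms;
# the message patterns are the warning texts proposed above.
warnings.filterwarnings("ignore", category=DeprecationWarning,
                        message="raising (strings|classes) will be impossible")
warnings.filterwarnings("ignore", category=DeprecationWarning,
                        message="raising with three arguments will be impossible")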
If this PEP were still under consideration, this objection would be
met by extending the phase-in period. For example, warnings could
first be issued in 3.0, and become errors in some later release.
References
[1] (1, 2, 3)
“Standard Exception Classes in Python 1.5”, Guido van Rossum.
http://www.python.org/doc/essays/stdexceptions.html
[3]
“Python Language Reference”, Guido van Rossum.
http://docs.python.org/reference/simple_stmts.html#raise
[6]
Guido van Rossum, 11 June 2003 post to python-dev.
https://mail.python.org/pipermail/python-dev/2003-June/036176.html
Copyright
This document has been placed in the public domain.
| Rejected | PEP 317 – Eliminate Implicit Exception Instantiation | Standards Track | —Guido van Rossum, in 1997 [1] |
PEP 318 – Decorators for Functions and Methods
Author:
Kevin D. Smith <Kevin.Smith at theMorgue.org>, Jim J. Jewett, Skip Montanaro, Anthony Baxter
Status:
Final
Type:
Standards Track
Created:
05-Jun-2003
Python-Version:
2.4
Post-History:
09-Jun-2003, 10-Jun-2003, 27-Feb-2004, 23-Mar-2004, 30-Aug-2004,
02-Sep-2004
Table of Contents
WarningWarningWarning
Abstract
Motivation
Why Is This So Hard?
Background
On the name ‘Decorator’
Design Goals
Current Syntax
Syntax Alternatives
Decorator Location
Syntax forms
Why @?
Current Implementation, History
Community Consensus
Examples
(No longer) Open Issues
Copyright
WarningWarningWarning
This document is meant to describe the decorator syntax and the
process that resulted in the decisions that were made. It does not
attempt to cover the huge number of potential alternative syntaxes,
nor is it an attempt to exhaustively list all the positives and
negatives of each form.
Abstract
The current method for transforming functions and methods (for instance,
declaring them as a class or static method) is awkward and can lead to
code that is difficult to understand. Ideally, these transformations
should be made at the same point in the code where the declaration
itself is made. This PEP introduces new syntax for transformations of a
function or method declaration.
Motivation
The current method of applying a transformation to a function or method
places the actual transformation after the function body. For large
functions this separates a key component of the function’s behavior from
the definition of the rest of the function’s external interface. For
example:
def foo(self):
perform method operation
foo = classmethod(foo)
This becomes less readable with longer methods. It also seems less
than pythonic to name the function three times for what is conceptually
a single declaration. A solution to this problem is to move the
transformation of the method closer to the method’s own declaration.
The intent of the new syntax is to replace
def foo(cls):
pass
foo = synchronized(lock)(foo)
foo = classmethod(foo)
with an alternative that places the decoration in the function’s
declaration:
@classmethod
@synchronized(lock)
def foo(cls):
pass
Modifying classes in this fashion is also possible, though the benefits
are not as immediately apparent. Almost certainly, anything which could
be done with class decorators could be done using metaclasses, but
using metaclasses is sufficiently obscure that there is some attraction
to having an easier way to make simple modifications to classes. For
Python 2.4, only function/method decorators are being added.
PEP 3129 proposes to add class decorators as of Python 2.6.
Why Is This So Hard?
Two decorators (classmethod() and staticmethod()) have been
available in Python since version 2.2. It’s been assumed since
approximately that time that some syntactic support for them would
eventually be added to the language. Given this assumption, one might
wonder why it’s been so difficult to arrive at a consensus. Discussions
have raged off-and-on at times in both comp.lang.python and the
python-dev mailing list about how best to implement function decorators.
There is no one clear reason why this should be so, but a few problems
seem to be most divisive.
Disagreement about where the “declaration of intent” belongs.
Almost everyone agrees that decorating/transforming a function at the
end of its definition is suboptimal. Beyond that there seems to be no
clear consensus where to place this information.
Syntactic constraints. Python is a syntactically simple language
with fairly strong constraints on what can and can’t be done without
“messing things up” (both visually and with regards to the language
parser). There’s no obvious way to structure this information so
that people new to the concept will think, “Oh yeah, I know what
you’re doing.” The best that seems possible is to keep new users from
creating a wildly incorrect mental model of what the syntax means.
Overall unfamiliarity with the concept. For people who have a
passing acquaintance with algebra (or even basic arithmetic) or have
used at least one other programming language, much of Python is
intuitive. Very few people will have had any experience with the
decorator concept before encountering it in Python. There’s just no
strong preexisting meme that captures the concept.
Syntax discussions in general appear to cause more contention than
almost anything else. Readers are pointed to the ternary operator
discussions that were associated with PEP 308 for another example of
this.
Background
There is general agreement that syntactic support is preferable to
the current state of affairs. Guido mentioned syntactic support
for decorators in his DevDay keynote presentation at the 10th
Python Conference, though he later said it was only one of
several extensions he proposed there “semi-jokingly”. Michael Hudson
raised the topic on python-dev shortly after the conference,
attributing the initial bracketed syntax to an earlier proposal on
comp.lang.python by Gareth McCaughan.
Class decorations seem like an obvious next step because class
definition and function definition are syntactically similar,
however Guido remains unconvinced, and class decorators will almost
certainly not be in Python 2.4.
The discussion continued on and off on python-dev from February
2002 through July 2004. Hundreds and hundreds of posts were made,
with people proposing many possible syntax variations. Guido took
a list of proposals to EuroPython 2004, where a discussion took
place. Subsequent to this, he decided that we’d have the Java-style
@decorator syntax, and this appeared for the first time in 2.4a2.
Barry Warsaw named this the ‘pie-decorator’ syntax, in honor of the
Pie-thon Parrot shootout which occurred around the same time as
the decorator syntax, and because the @ looks a little like a pie.
Guido outlined his case on Python-dev, including this piece
on some of the (many) rejected forms.
On the name ‘Decorator’
There’s been a number of complaints about the choice of the name
‘decorator’ for this feature. The major one is that the name is not
consistent with its use in the GoF book. The name ‘decorator’
probably owes more to its use in the compiler area – a syntax tree is
walked and annotated. It’s quite possible that a better name may turn
up.
Design Goals
The new syntax should
work for arbitrary wrappers, including user-defined callables and
the existing builtins classmethod() and staticmethod(). This
requirement also means that a decorator syntax must support passing
arguments to the wrapper constructor
work with multiple wrappers per definition
make it obvious what is happening; at the very least it should be
obvious that new users can safely ignore it when writing their own
code
be a syntax “that … [is] easy to remember once explained”
not make future extensions more difficult
be easy to type; programs that use it are expected to use it very
frequently
not make it more difficult to scan through code quickly. It should
still be easy to search for all definitions, a particular definition,
or the arguments that a function accepts
not needlessly complicate secondary support tools such as
language-sensitive editors and other “toy parser tools out
there”
allow future compilers to optimize for decorators. With the hope of
a JIT compiler for Python coming into existence at some point this
tends to require the syntax for decorators to come before the function
definition
move from the end of the function, where it’s currently hidden, to
the front where it is more in your face
Andrew Kuchling has links to a bunch of the discussions about
motivations and use cases in his blog. Particularly notable is Jim
Huginin’s list of use cases.
Current Syntax
The current syntax for function decorators as implemented in Python
2.4a2 is:
@dec2
@dec1
def func(arg1, arg2, ...):
pass
This is equivalent to:
def func(arg1, arg2, ...):
pass
func = dec2(dec1(func))
without the intermediate assignment to the variable func. The
decorators are near the function declaration. The @ sign makes it clear
that something new is going on here.
The rationale for the order of application (bottom to top) is that it
matches the usual order for function-application. In mathematics,
composition of functions (g o f)(x) translates to g(f(x)). In Python,
@g @f def foo() translates to foo=g(f(foo)).
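A small illustration of this ordering, using two throwaway wrappers (not from
the PEP; the names shout and exclaim are made up for the example):
def shout(func):
    def wrapper(*args, **kwds):
        return func(*args, **kwds).upper()
    return wrapper

def exclaim(func):
    def wrapper(*args, **kwds):
        return func(*args, **kwds) + "!"
    return wrapper

@shout
@exclaim
def greet():
    return "hello"

print greet()   # prints "HELLO!", i.e. greet = shout(exclaim(greet))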
The decorator statement is limited in what it can accept – arbitrary
expressions will not work. Guido preferred this because of a gut
feeling.
The current syntax also allows decorator declarations to call a
function that returns a decorator:
@decomaker(argA, argB, ...)
def func(arg1, arg2, ...):
pass
This is equivalent to:
func = decomaker(argA, argB, ...)(func)
The rationale for having a function that returns a decorator is that
the part after the @ sign can be considered to be an expression
(though syntactically restricted to just a function), and whatever
that expression returns is called. See declaration arguments.
Syntax Alternatives
There have been a large number of different syntaxes proposed –
rather than attempting to work through these individual syntaxes, it’s
worthwhile to break the syntax discussion down into a number of areas.
Attempting to discuss each possible syntax individually would be an
act of madness, and produce a completely unwieldy PEP.
Decorator Location
The first syntax point is the location of the decorators. For the
following examples, we use the @syntax used in 2.4a2.
Decorators before the def statement are the first alternative, and the
syntax used in 2.4a2:
@classmethod
def foo(arg1,arg2):
pass
@accepts(int,int)
@returns(float)
def bar(low,high):
pass
There have been a number of objections raised to this location – the
primary one is that it’s the first real Python case where a line of code
has an effect on a following line. The syntax available in 2.4a3
requires one decorator per line (in a2, multiple decorators could be
specified on the same line), and the final decision for 2.4 final stayed
one decorator per line.
People also complained that the syntax quickly got unwieldy when
multiple decorators were used. The point was made, though, that the
chances of a large number of decorators being used on a single function
were small and thus this was not a large worry.
Some of the advantages of this form are that the decorators live outside
the method body – they are obviously executed at the time the function
is defined.
Another advantage is that a prefix to the function definition fits
the idea of knowing about a change to the semantics of the code before
the code itself, thus you know how to interpret the code’s semantics
properly without having to go back and change your initial perceptions
if the syntax did not come before the function definition.
Guido decided he preferred having the decorators on the line before
the ‘def’, because it was felt that a long argument list would mean that
the decorators would be ‘hidden’
The second form is the decorators between the def and the function name,
or the function name and the argument list:
def @classmethod foo(arg1,arg2):
pass
def @accepts(int,int),@returns(float) bar(low,high):
pass
def foo @classmethod (arg1,arg2):
pass
def bar @accepts(int,int),@returns(float) (low,high):
pass
There are a couple of objections to this form. The first is that it
breaks easily ‘greppability’ of the source – you can no longer search
for ‘def foo(’ and find the definition of the function. The second,
more serious, objection is that in the case of multiple decorators, the
syntax would be extremely unwieldy.
The next form, which has had a number of strong proponents, is to have
the decorators between the argument list and the trailing : in the
‘def’ line:
def foo(arg1,arg2) @classmethod:
pass
def bar(low,high) @accepts(int,int),@returns(float):
pass
Guido summarized the arguments against this form (many of which also
apply to the previous form) as:
it hides crucial information (e.g. that it is a static method)
after the signature, where it is easily missed
it’s easy to miss the transition between a long argument list and a
long decorator list
it’s cumbersome to cut and paste a decorator list for reuse, because
it starts and ends in the middle of a line
The next form is that the decorator syntax goes inside the method body at
the start, in the same place that docstrings currently live:
def foo(arg1,arg2):
@classmethod
pass
def bar(low,high):
@accepts(int,int)
@returns(float)
pass
The primary objection to this form is that it requires “peeking inside”
the method body to determine the decorators. In addition, even though
the code is inside the method body, it is not executed when the method
is run. Guido felt that docstrings were not a good counter-example, and
that it was quite possible that a ‘docstring’ decorator could help move
the docstring to outside the function body.
The final form is a new block that encloses the method’s code. For this
example, we’ll use a ‘decorate’ keyword, as it makes no sense with the
@syntax.
decorate:
classmethod
def foo(arg1,arg2):
pass
decorate:
accepts(int,int)
returns(float)
def bar(low,high):
pass
This form would result in inconsistent indentation for decorated and
undecorated methods. In addition, a decorated method’s body would start
three indent levels in.
Syntax forms
@decorator:
@classmethod
def foo(arg1,arg2):
pass
@accepts(int,int)
@returns(float)
def bar(low,high):
pass
The major objections against this syntax are that the @ symbol is
not currently used in Python (and is used in both IPython and Leo),
and that the @ symbol is not meaningful. Another objection is that
this “wastes” a currently unused character (from a limited set) on
something that is not perceived as a major use.
|decorator:
|classmethod
def foo(arg1,arg2):
pass
|accepts(int,int)
|returns(float)
def bar(low,high):
pass
This is a variant on the @decorator syntax – it has the advantage
that it does not break IPython and Leo. Its major disadvantage
compared to the @syntax is that the | symbol looks like both a capital
I and a lowercase l.
list syntax:
[classmethod]
def foo(arg1,arg2):
pass
[accepts(int,int), returns(float)]
def bar(low,high):
pass
The major objection to the list syntax is that it’s currently
meaningful (when used in the form before the method). It’s also
lacking any indication that the expression is a decorator.
list syntax using other brackets (<...>, [[...]], …):
<classmethod>
def foo(arg1,arg2):
pass
<accepts(int,int), returns(float)>
def bar(low,high):
pass
None of these alternatives gained much traction. The alternatives
which involve square brackets only serve to make it obvious that the
decorator construct is not a list. They do nothing to make parsing any
easier. The ‘<…>’ alternative presents parsing problems because ‘<’
and ‘>’ already parse as un-paired. They present a further parsing
ambiguity because a right angle bracket might be a greater than symbol
instead of a closer for the decorators.
decorate()
The decorate() proposal was that no new syntax be implemented
– instead a magic function that used introspection to manipulate
the following function. Both Jp Calderone and Philip Eby produced
implementations of functions that did this. Guido was pretty firmly
against this – with no new syntax, the magicness of a function like
this is extremely high:
Using functions with “action-at-a-distance” through sys.settraceback
may be okay for an obscure feature that can’t be had any other
way yet doesn’t merit changes to the language, but that’s not
the situation for decorators. The widely held view here is that
decorators need to be added as a syntactic feature to avoid the
problems with the postfix notation used in 2.2 and 2.3. Decorators
are slated to be an important new language feature and their
design needs to be forward-looking, not constrained by what can be
implemented in 2.3.
new keyword (and block)
This idea was the consensus alternate from comp.lang.python (more
on this in Community Consensus below.) Robert Brewer wrote up a
detailed J2 proposal document outlining the arguments in favor of
this form. The initial issues with this form are:
It requires a new keyword, and therefore a from __future__
import decorators statement.
The choice of keyword is contentious. However using emerged
as the consensus choice, and is used in the proposal and
implementation.
The keyword/block form produces something that looks like a normal
code block, but isn’t. Attempts to use statements in this block
will cause a syntax error, which may confuse users.
A few days later, Guido rejected the proposal on two main grounds,
firstly:
… the syntactic form of an indented block strongly
suggests that its contents should be a sequence of statements, but
in fact it is not – only expressions are allowed, and there is an
implicit “collecting” of these expressions going on until they can
be applied to the subsequent function definition. …
and secondly:
… the keyword starting the line that heads a block
draws a lot of attention to it. This is true for “if”, “while”,
“for”, “try”, “def” and “class”. But the “using” keyword (or any
other keyword in its place) doesn’t deserve that attention; the
emphasis should be on the decorator or decorators inside the suite,
since those are the important modifiers to the function definition
that follows. …
Readers are invited to read the full response.
Other forms
There are plenty of other variants and proposals on the wiki page.
Why @?
There is some history in Java using @ initially as a marker in Javadoc
comments and later in Java 1.5 for annotations, which are similar
to Python decorators. The fact that @ was previously unused as a token
in Python also means it’s clear there is no possibility of such code
being parsed by an earlier version of Python, leading to possibly subtle
semantic bugs. It also means that ambiguity of what is a decorator
and what isn’t is removed. That said, @ is still a fairly arbitrary
choice. Some have suggested using | instead.
For syntax options which use a list-like syntax (no matter where it
appears) to specify the decorators a few alternatives were proposed:
[|...|], *[...]*, and <...>.
Current Implementation, History
Guido asked for a volunteer to implement his preferred syntax, and Mark
Russell stepped up and posted a patch to SF. This new syntax was
available in 2.4a2.
@dec2
@dec1
def func(arg1, arg2, ...):
pass
This is equivalent to:
def func(arg1, arg2, ...):
pass
func = dec2(dec1(func))
though without the intermediate creation of a variable named func.
The version implemented in 2.4a2 allowed multiple @decorator clauses
on a single line. In 2.4a3, this was tightened up to only allowing one
decorator per line.
A previous patch from Michael Hudson which implements the
list-after-def syntax is also still kicking around.
After 2.4a2 was released, in response to community reaction, Guido
stated that he’d re-examine a community proposal, if the community
could come up with a community consensus, a decent proposal, and an
implementation. After an amazing number of posts, collecting a vast
number of alternatives in the Python wiki, a community consensus
emerged (below). Guido subsequently rejected this alternate form,
but added:
In Python 2.4a3 (to be released this Thursday), everything remains
as currently in CVS. For 2.4b1, I will consider a change of @ to
some other single character, even though I think that @ has the
advantage of being the same character used by a similar feature
in Java. It’s been argued that it’s not quite the same, since @
in Java is used for attributes that don’t change semantics. But
Python’s dynamic nature makes that its syntactic elements never mean
quite the same thing as similar constructs in other languages, and
there is definitely significant overlap. Regarding the impact on
3rd party tools: IPython’s author doesn’t think there’s going to be
much impact; Leo’s author has said that Leo will survive (although
it will cause him and his users some transitional pain). I actually
expect that picking a character that’s already used elsewhere in
Python’s syntax might be harder for external tools to adapt to,
since parsing will have to be more subtle in that case. But I’m
frankly undecided, so there’s some wiggle room here. I don’t want
to consider further syntactic alternatives at this point: the buck
has to stop at some point, everyone has had their say, and the show
must go on.
Community Consensus
This section documents the rejected J2 syntax, and is included for
historical completeness.
The consensus that emerged on comp.lang.python was the proposed J2
syntax (the “J2” was how it was referenced on the PythonDecorators wiki
page): the new keyword using prefixing a block of decorators before
the def statement. For example:
using:
classmethod
synchronized(lock)
def func(cls):
pass
The main arguments for this syntax fall under the “readability counts”
doctrine. In brief, they are:
A suite is better than multiple @lines. The using keyword and
block transforms the single-block def statement into a
multiple-block compound construct, akin to try/finally and others.
A keyword is better than punctuation for a new token. A keyword
matches the existing use of tokens. No new token category is
necessary. A keyword distinguishes Python decorators from Java
annotations and .Net attributes, which are significantly different
beasts.
Robert Brewer wrote a detailed proposal for this form, and Michael
Sparks produced a patch.
As noted previously, Guido rejected this form, outlining his problems
with it in a message to python-dev and comp.lang.python.
Examples
Much of the discussion on comp.lang.python and the python-dev
mailing list focuses on the use of decorators as a cleaner way to use
the staticmethod() and classmethod() builtins. This capability
is much more powerful than that. This section presents some examples of
use.
Define a function to be executed at exit. Note that the function
isn’t actually “wrapped” in the usual sense.
def onexit(f):
import atexit
atexit.register(f)
return f
@onexit
def func():
...
Note that this example is probably not suitable for real usage, but
is for example purposes only.
Define a class with a singleton instance. Note that once the class
disappears enterprising programmers would have to be more creative to
create more instances. (From Shane Hathaway on python-dev.)
def singleton(cls):
instances = {}
def getinstance():
if cls not in instances:
instances[cls] = cls()
return instances[cls]
return getinstance
@singleton
class MyClass:
...
Add attributes to a function. (Based on an example posted by
Anders Munch on python-dev.)
def attrs(**kwds):
def decorate(f):
for k in kwds:
setattr(f, k, kwds[k])
return f
return decorate
@attrs(versionadded="2.2",
author="Guido van Rossum")
def mymethod(f):
...
Enforce function argument and return types. Note that this
copies the func_name attribute from the old to the new function.
func_name was made writable in Python 2.4a3:
def accepts(*types):
def check_accepts(f):
assert len(types) == f.func_code.co_argcount
def new_f(*args, **kwds):
for (a, t) in zip(args, types):
assert isinstance(a, t), \
"arg %r does not match %s" % (a,t)
return f(*args, **kwds)
new_f.func_name = f.func_name
return new_f
return check_accepts
def returns(rtype):
def check_returns(f):
def new_f(*args, **kwds):
result = f(*args, **kwds)
assert isinstance(result, rtype), \
"return value %r does not match %s" % (result,rtype)
return result
new_f.func_name = f.func_name
return new_f
return check_returns
@accepts(int, (int,float))
@returns((int,float))
def func(arg1, arg2):
return arg1 * arg2
Declare that a class implements a particular (set of) interface(s).
This is from a posting by Bob Ippolito on python-dev based on
experience with PyProtocols.
def provides(*interfaces):
"""
An actual, working, implementation of provides for
the current implementation of PyProtocols. Not
particularly important for the PEP text.
"""
def provides(typ):
declareImplementation(typ, instancesProvide=interfaces)
return typ
return provides
class IBar(Interface):
"""Declare something about IBar here"""
@provides(IBar)
class Foo(object):
"""Implement something here..."""
Of course, all these examples are possible today, though without
syntactic support.
(No longer) Open Issues
It’s not yet certain that class decorators will be incorporated
into the language at a future point. Guido expressed skepticism about
the concept, but various people have made some strong arguments
(search for PEP 318 -- posting draft) on their behalf in
python-dev. It’s exceedingly unlikely that class decorators
will be in Python 2.4.
PEP 3129 proposes to add class decorators as of Python 2.6.
The choice of the @ character will be re-examined before
Python 2.4b1.
In the end, the @ character was kept.
Copyright
This document has been placed in the public domain.
| Final | PEP 318 – Decorators for Functions and Methods | Standards Track | The current method for transforming functions and methods (for instance,
declaring them as a class or static method) is awkward and can lead to
code that is difficult to understand. Ideally, these transformations
should be made at the same point in the code where the declaration
itself is made. This PEP introduces new syntax for transformations of a
function or method declaration. |
PEP 319 – Python Synchronize/Asynchronize Block
Author:
Michel Pelletier <michel at users.sourceforge.net>
Status:
Rejected
Type:
Standards Track
Created:
24-Feb-2003
Python-Version:
2.4
Post-History:
Table of Contents
Abstract
Pronouncement
Synchronization Targets
Other Patterns that Synchronize
Formal Syntax
Proposed Implementation
Backward Compatibility
PEP 310 Reliable Acquisition/Release Pairs
How Java Does It
How Jython Does It
Summary of Proposed Changes to Python
Risks
Dissenting Opinion
References
Copyright
Abstract
This PEP proposes adding two new keywords to Python, ‘synchronize’
and ‘asynchronize’.
Pronouncement
This PEP is rejected in favor of PEP 343.
The ‘synchronize’ Keyword
The concept of code synchronization in Python is too low-level.
To synchronize code a programmer must be aware of the details of
the following pseudo-code pattern:
initialize_lock()
...
acquire_lock()
try:
change_shared_data()
finally:
release_lock()
This synchronized block pattern is not the only pattern (more
discussed below) but it is very common. This PEP proposes
replacing the above code with the following equivalent:
synchronize:
change_shared_data()
The advantages of this scheme are simpler syntax and less room for
user error. Currently users are required to write code about
acquiring and releasing thread locks in ‘try/finally’ blocks;
errors in this code can cause notoriously difficult concurrent
thread locking issues.
The ‘asynchronize’ Keyword
While executing a ‘synchronize’ block of code a programmer may
want to “drop back” to running asynchronously momentarily to run
blocking input/output routines or something else that might take an
indeterminate amount of time and does not require synchronization.
This code usually follows the pattern:
initialize_lock()
...
acquire_lock()
try:
change_shared_data()
release_lock() # become async
do_blocking_io()
acquire_lock() # sync again
change_shared_data2()
finally:
release_lock()
The asynchronous section of the code is not very obvious visually,
so it is marked up with comments. Using the proposed
‘asynchronize’ keyword this code becomes much cleaner, easier to
understand, and less prone to error:
synchronize:
change_shared_data()
asynchronize:
do_blocking_io()
change_shared_data2()
Encountering an ‘asynchronize’ keyword inside a non-synchronized
block can either raise an error or issue a warning (as all code
blocks are implicitly asynchronous anyway). It is important to
note that the above example is not the same as:
synchronize:
change_shared_data()
do_blocking_io()
synchronize:
change_shared_data2()
This is because both synchronized blocks of code may be running inside the
same iteration of a loop. Consider:
while in_main_loop():
synchronize:
change_shared_data()
asynchronize:
do_blocking_io()
change_shared_data2()
Many threads may be looping through this code. Without the
‘asynchronize’ keyword one thread cannot stay in the loop and
release the lock at the same time while blocking IO is going on.
This pattern of releasing locks inside a main loop to do blocking
IO is used extensively inside the CPython interpreter itself.
Synchronization Targets
As proposed the ‘synchronize’ and ‘asynchronize’ keywords
synchronize a block of code. However programmers may want to
specify a target object that threads synchronize on. Any object
can be a synchronization target.
Consider a two-way queue object: two different objects are used by
the same ‘synchronize’ code block to synchronize both queues
separately in the ‘get’ method:
class TwoWayQueue:
def __init__(self):
self.front = []
self.rear = []
def putFront(self, item):
self.put(item, self.front)
def getFront(self):
item = self.get(self.front)
return item
def putRear(self, item):
self.put(item, self.rear)
def getRear(self):
item = self.get(self.rear)
return item
def put(self, item, queue):
synchronize queue:
queue.append(item)
def get(self, queue):
synchronize queue:
item = queue[0]
del queue[0]
return item
Here is the equivalent code in Python as it is now without a
‘synchronize’ keyword:
import thread
class LockableQueue:
def __init__(self):
self.queue = []
self.lock = thread.allocate_lock()
class TwoWayQueue:
def __init__(self):
self.front = LockableQueue()
self.rear = LockableQueue()
def putFront(self, item):
self.put(item, self.front)
def getFront(self):
item = self.get(self.front)
return item
def putRear(self, item):
self.put(item, self.rear)
def getRear(self):
item = self.get(self.rear)
return item
def put(self, item, queue):
queue.lock.acquire()
try:
queue.append(item)
finally:
queue.lock.release()
def get(self, queue):
queue.lock.acquire()
try:
item = queue[0]
del queue[0]
return item
finally:
queue.lock.release()
The last example had to define an extra class to associate a lock
with the queue, whereas in the first example the ‘synchronize’ keyword
does this association internally and transparently.
Other Patterns that Synchronize
There are some situations where the ‘synchronize’ and
‘asynchronize’ keywords cannot entirely replace the use of lock
methods like acquire and release. Some examples are if the
programmer wants to provide arguments for acquire or if a lock
is acquired in one code block but released in another, as shown
below.
Here is a class from Zope modified to use both the ‘synchronize’
and ‘asynchronize’ keywords and also uses a pool of explicit locks
that are acquired and released in different code blocks and thus
don’t use ‘synchronize’:
import thread
from ZServerPublisher import ZServerPublisher
class ZRendevous:
def __init__(self, n=1):
pool=[]
self._lists=pool, [], []
synchronize:
while n > 0:
l=thread.allocate_lock()
l.acquire()
pool.append(l)
thread.start_new_thread(ZServerPublisher,
(self.accept,))
n=n-1
def accept(self):
synchronize:
pool, requests, ready = self._lists
while not requests:
l=pool[-1]
del pool[-1]
ready.append(l)
asynchronize:
l.acquire()
pool.append(l)
r=requests[0]
del requests[0]
return r
def handle(self, name, request, response):
synchronize:
pool, requests, ready = self._lists
requests.append((name, request, response))
if ready:
l=ready[-1]
del ready[-1]
l.release()
Here is the original class as found in the
‘Zope/ZServer/PubCore/ZRendevous.py’ module. The “convenience” of
the ‘_a’ and ‘_r’ shortcut names obscure the code:
import thread
from ZServerPublisher import ZServerPublisher
class ZRendevous:
def __init__(self, n=1):
sync=thread.allocate_lock()
self._a=sync.acquire
self._r=sync.release
pool=[]
self._lists=pool, [], []
self._a()
try:
while n > 0:
l=thread.allocate_lock()
l.acquire()
pool.append(l)
thread.start_new_thread(ZServerPublisher,
(self.accept,))
n=n-1
finally: self._r()
def accept(self):
self._a()
try:
pool, requests, ready = self._lists
while not requests:
l=pool[-1]
del pool[-1]
ready.append(l)
self._r()
l.acquire()
self._a()
pool.append(l)
r=requests[0]
del requests[0]
return r
finally: self._r()
def handle(self, name, request, response):
self._a()
try:
pool, requests, ready = self._lists
requests.append((name, request, response))
if ready:
l=ready[-1]
del ready[-1]
l.release()
finally: self._r()
In particular the asynchronize section of the accept method is
not very obvious. To beginner programmers, ‘synchronize’ and
‘asynchronize’ remove many of the problems encountered when
juggling multiple acquire and release methods on different
locks in different try/finally blocks.
Formal Syntax
Python syntax is defined in a modified BNF grammar notation
described in the Python Language Reference [1]. This section
describes the proposed synchronization syntax using this grammar:
synchronize_stmt: 'synchronize' [test] ':' suite
asynchronize_stmt: 'asynchronize' [test] ':' suite
compound_stmt: ... | synchronize_stmt | asynchronize_stmt
(The ‘…’ indicates other compound statements elided).
Proposed Implementation
The author of this PEP has not explored an implementation yet.
There are several implementation issues that must be resolved.
The main implementation issue is what exactly gets locked and
unlocked during a synchronized block.
During an unqualified synchronized block (the use of the
‘synchronize’ keyword without a target argument) a lock could be
created and associated with the synchronized code block object.
Any threads that are to execute the block must first acquire the
code block lock.
When an ‘asynchronize’ keyword is encountered in a ‘synchronize’
block the code block lock is unlocked before the inner block is
executed and re-locked when the inner block terminates.
When a synchronized block target is specified the object is
associated with a lock. How this is implemented cleanly is
probably the highest risk of this proposal. Java Virtual Machines
typically associate a special hidden lock object with the target
object and use it to synchronize the block around the target
only.
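For illustration, the association of a hidden lock with an arbitrary target
object can be emulated in current Python roughly as follows. This is only a
sketch of the idea: the helper names are hypothetical, and the id()-keyed
registry ignores object-lifetime issues a real implementation would have to
solve.
import thread

_target_locks = {}                        # maps id(target) -> hidden lock
_registry_lock = thread.allocate_lock()  # protects the registry itself

def _lock_for(target):
    """Return the hidden lock associated with target, creating it if needed."""
    _registry_lock.acquire()
    try:
        key = id(target)
        if key not in _target_locks:
            _target_locks[key] = thread.allocate_lock()
        return _target_locks[key]
    finally:
        _registry_lock.release()

# Roughly what "synchronize queue: queue.append(item)" would expand to:
def put(item, queue):
    lock = _lock_for(queue)
    lock.acquire()
    try:
        queue.append(item)
    finally:
        lock.release()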
Backward Compatibility
Backward compatibility is solved with the new from __future__
Python syntax (PEP 236), and the new warning framework (PEP 230)
to evolve the
Python language into phasing out any conflicting names that use
the new keywords ‘synchronize’ and ‘asynchronize’. To use the
syntax now, a developer could use the statement:
from __future__ import threadsync # or whatever
In addition, any code that uses the keyword ‘synchronize’ or
‘asynchronize’ as an identifier will be issued a warning from
Python. After the appropriate period of time, the syntax would
become standard, the above import statement would do nothing, and
any identifiers named ‘synchronize’ or ‘asynchronize’ would raise
an exception.
PEP 310 Reliable Acquisition/Release Pairs
PEP 310 proposes the ‘with’ keyword that can serve the same
function as ‘synchronize’ (but no facility for ‘asynchronize’).
The pattern:
initialize_lock()
with the_lock:
change_shared_data()
is equivalent to the proposed:
synchronize the_lock:
change_shared_data()
PEP 310 must synchronize on an existing lock, while this PEP
proposes that unqualified ‘synchronize’ statements synchronize on
a global, internal, transparent lock in addition to qualified
‘synchronize’ statements. The ‘with’ statement also requires lock
initialization, while the ‘synchronize’ statement can synchronize
on any target object including locks.
While limited in this fashion, the ‘with’ statement is more
abstract and serves more purposes than synchronization. For
example, transactions could be used with the ‘with’ keyword:
initialize_transaction()
with my_transaction:
do_in_transaction()
# when the block terminates, the transaction is committed.
The ‘synchronize’ and ‘asynchronize’ keywords cannot serve this or
any other general acquire/release pattern other than thread
synchronization.
How Java Does It
Java defines a ‘synchronized’ keyword (note the grammatical tense
difference between the Java keyword and this PEP’s ‘synchronize’)
which must be qualified on any object. The syntax is:
synchronized (Expression) Block
Expression must yield a valid object (null raises an error and
exceptions during ‘Expression’ terminate the ‘synchronized’ block
for the same reason) upon which ‘Block’ is synchronized.
How Jython Does It
Jython uses a ‘synchronize’ class with the static method
‘make_synchronized’ that accepts one callable argument and returns
a newly created, synchronized, callable “wrapper” around the
argument.
Summary of Proposed Changes to Python
Adding new ‘synchronize’ and ‘asynchronize’ keywords to the
language.
Risks
This PEP proposes adding two keywords to the Python language. This
may break code.
There is no implementation to test.
It’s not the most important problem facing Python programmers
today (although it is a fairly notorious one).
The equivalent Java keyword is the past participle ‘synchronized’.
This PEP proposes the present tense, ‘synchronize’, as being more
in the spirit of Python (there being less distinction between
compile-time and run-time in Python than in Java).
Dissenting Opinion
This PEP has not been discussed on python-dev.
References
[1]
The Python Language Reference
http://docs.python.org/reference/
Copyright
This document has been placed in the public domain.
| Rejected | PEP 319 – Python Synchronize/Asynchronize Block | Standards Track | This PEP proposes adding two new keywords to Python, ‘synchronize’
and ‘asynchronize’. |
PEP 320 – Python 2.4 Release Schedule
Author:
Barry Warsaw, Raymond Hettinger, Anthony Baxter
Status:
Final
Type:
Informational
Topic:
Release
Created:
29-Jul-2003
Python-Version:
2.4
Post-History:
01-Dec-2004
Table of Contents
Abstract
Release Manager
Release Schedule
Completed features for 2.4
Deferred until 2.5
Ongoing tasks
Open issues
Carryover features from Python 2.3
References
Copyright
Abstract
This document describes the development and release schedule for
Python 2.4. The schedule primarily concerns itself with PEP-sized
items. Small features may be added up to and including the first
beta release. Bugs may be fixed until the final release.
There will be at least two alpha releases, two beta releases, and
one release candidate. The release date was 30th November, 2004.
Release Manager
Anthony Baxter
Martin von Lowis is building the Windows installers, Fred the
doc packages, Sean the RPMs.
Release Schedule
July 9: alpha 1 [completed]
August 5/6: alpha 2 [completed]
Sept 3: alpha 3 [completed]
October 15: beta 1 [completed]
November 3: beta 2 [completed]
November 18: release candidate 1 [completed]
November 30: final [completed]
Completed features for 2.4
PEP 218 Builtin Set Objects.
PEP 289 Generator expressions.
PEP 292 Simpler String Substitutions to be implemented as a module.
PEP 318: Function/method decorator syntax, using @syntax
PEP 322 Reverse Iteration.
PEP 327: A Decimal package for fixed precision arithmetic.
PEP 328: Multi-line Imports
Encapsulate the decorate-sort-undecorate pattern in a keyword for
list.sort().
Added a builtin called sorted() which may be used in expressions.
The itertools module has two new functions, tee() and groupby().
Add a collections module with a deque() object.
Add two statistical/reduction functions, nlargest() and nsmallest()
to the heapq module.
Python’s windows installer now uses MSI
Deferred until 2.5
Deprecate and/or remove the modules listed in PEP 4 (posixfile,
gopherlib, pre, others)
Remove support for platforms as described in PEP 11.
Finish implementing the Distutils bdist_dpkg command. (AMK)
Add support for reading shadow passwords [1]
It would be nice if the built-in SSL socket type could be used
for non-blocking SSL I/O. Currently packages such as Twisted
which implement async servers using SSL have to require third-party
packages such as pyopenssl.
AST-based compiler: this branch was not completed in time for
2.4, but will land on the trunk some time after 2.4 final is
out, for inclusion in 2.5.
reST is going to be used a lot in Zope3. Maybe it could become
a standard library module? (Since reST’s author thinks it’s too
unstable, I’m inclined not to do this.)
Ongoing tasks
The following are ongoing TO-DO items which we should attempt to
work on without hoping for completion by any particular date.
Documentation: complete the distribution and installation
manuals.
Documentation: complete the documentation for new-style
classes.
Look over the Demos/ directory and update where required (Andrew
Kuchling has done a lot of this)
New tests.
Fix doc bugs on SF.
Remove use of deprecated features in the core.
Document deprecated features appropriately.
Mark deprecated C APIs with Py_DEPRECATED.
Deprecate modules which are unmaintained, or perhaps make a new
category for modules ‘Unmaintained’
In general, lots of cleanup so it is easier to move forward.
Open issues
None at this time.
Carryover features from Python 2.3
The import lock could use some redesign. [2]
A nicer API to open text files, replacing the ugly (in some
people’s eyes) “U” mode flag. There’s a proposal out there to
have a new built-in type textfile(filename, mode, encoding).
(Shouldn’t it have a bufsize argument too?)
New widgets for Tkinter??? Has anyone gotten the time for this?
Are there any new widgets in Tk 8.4? Note that we’ve got better
Tix support already (though not on Windows yet).
PEP 304 (Controlling Generation of Bytecode Files by Montanaro)
seems to have lost steam.
For a class defined inside another class, the __name__ should be
“outer.inner”, and pickling should work. ([3]. I’m no
longer certain this is easy or even right.)
Decide on a clearer deprecation policy (especially for modules)
and act on it. For a start, see this message from Neal Norwitz [4].
There seems insufficient interest in moving this further in an
organized fashion, and it’s not particularly important.
Provide alternatives for common uses of the types module;
Skip Montanaro has posted a proto-PEP for this idea [5].
There hasn’t been any progress on this, AFAICT.
Use pending deprecation for the types and string modules. This
requires providing alternatives for the parts that aren’t
covered yet (e.g. string.whitespace and types.TracebackType).
It seems we can’t get consensus on this.
PEP 262 Database of Installed Python Packages (Kuchling)
This turns out to be useful for Jack Jansen’s Python installer,
so the database is worth implementing. Code will go in
sandbox/pep262.
PEP 269 Pgen Module for Python (Riehl)
(Some necessary changes are in; the pgen module itself needs to
mature more.)
PEP 266 Optimizing Global Variable/Attribute Access (Montanaro)
PEP 267 Optimized Access to Module Namespaces (Hylton)
PEP 280 Optimizing access to globals (van Rossum)
These are basically three friendly competing proposals. Jeremy
has made a little progress with a new compiler, but it’s going
slowly and the compiler is only the first step. Maybe we’ll be
able to refactor the compiler in this release. I’m tempted to
say we won’t hold our breath.
Lazily tracking tuples? [6] [7]
Not much enthusiasm I believe.
PEP 286 Enhanced Argument Tuples (von Loewis)
I haven’t had the time to review this thoroughly. It seems a
deep optimization hack (also makes better correctness guarantees
though).
Make ‘as’ a keyword. It has been a pseudo-keyword long enough.
Too much effort to bother.
References
[1]
Shadow Password Support Module
https://bugs.python.org/issue579435
[2]
PyErr_Warn may cause import deadlock
https://bugs.python.org/issue683658
[3]
Nested class __name__
https://bugs.python.org/issue633930
[4]
Neal Norwitz, random vs whrandom
https://mail.python.org/pipermail/python-dev/2002-April/023165.html
[5]
Skip Montanaro, python/dist/src/Lib types.py,1.26,1.27
https://mail.python.org/pipermail/python-dev/2002-May/024346.html
[6]
Daniel Dunbar, Lazily GC tracking tuples
https://mail.python.org/pipermail/python-dev/2002-May/023926.html
[7]
GC: untrack simple objects
https://bugs.python.org/issue558745
Copyright
This document has been placed in the public domain.
| Final | PEP 320 – Python 2.4 Release Schedule | Informational | This document describes the development and release schedule for
Python 2.4. The schedule primarily concerns itself with PEP-sized
items. Small features may be added up to and including the first
beta release. Bugs may be fixed until the final release. |
PEP 321 – Date/Time Parsing and Formatting
Author:
A.M. Kuchling <amk at amk.ca>
Status:
Withdrawn
Type:
Standards Track
Created:
16-Sep-2003
Python-Version:
2.4
Post-History:
Table of Contents
Abstract
Input Formats
Generic Input Parsing
Output Formats
References
Copyright
Abstract
Python 2.3 added a number of simple date and time types in the
datetime module. There’s no support for parsing strings in various
formats and returning a corresponding instance of one of the types.
This PEP proposes adding a family of predefined parsing functions for
several commonly used date and time formats, and a facility for generic
parsing.
The types provided by the datetime module all have
.isoformat() and .ctime() methods that return string
representations of a time, and the .strftime() method can be used
to construct new formats. There are a number of additional
commonly-used formats that would be useful to have as part of the
standard library; this PEP also suggests how to add them.
Input Formats
Useful formats to support include:
ISO8601
ARPA/RFC 2822
ctime
Formats commonly written by humans such as the American
“MM/DD/YYYY”, the European “DD/MM/YYYY”, and variants such as
“DD-Month-YYYY”.
CVS-style or tar-style dates (“tomorrow”, “12 hours ago”, etc.)
XXX The Perl ParseDate.pm module supports many different input formats,
both absolute and relative. Should we try to support them all?
Options:
Add functions to the datetime module:
import datetime
d = datetime.parse_iso8601("2003-09-15T10:34:54")
Add class methods to the various types. There are already various
class methods such as .now(), so this would be pretty natural:
import datetime
d = datetime.date.parse_iso8601("2003-09-15T10:34:54")
Add a separate module (possible names: date, date_parse, parse_date)
or subpackage (possible names: datetime.parser) containing parsing
functions:
import datetime
d = datetime.parser.parse_iso8601("2003-09-15T10:34:54")
Unresolved questions:
Naming convention to use.
What exception to raise on errors? ValueError, or a specialized exception?
Should you know what type you’re expecting, or should the parsing figure
it out? (e.g. parse_iso8601("yyyy-mm-dd") returns a date instance,
but parsing “yyyy-mm-ddThh:mm:ss” returns a datetime.) Should
there be an option to signal an error if a time is provided where
none is expected, or if no time is provided?
Anything special required for I18N? For time zones?
Generic Input Parsing
Is a strptime() implementation that returns datetime types sufficient?
XXX if yes, describe strptime here. Can the existing pure-Python
implementation be easily retargeted?
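As a point of reference only, a minimal sketch of such a helper built
on datetime.datetime.strptime() (which only appeared later, in Python
2.5); the function name simply echoes the examples above:
import datetime

def parse_iso8601(text):
    # Accepts only the full "yyyy-mm-ddThh:mm:ss" form; a complete
    # implementation would also handle date-only strings, fractional
    # seconds and timezone designators.
    return datetime.datetime.strptime(text, "%Y-%m-%dT%H:%M:%S")

d = parse_iso8601("2003-09-15T10:34:54")
# d is datetime.datetime(2003, 9, 15, 10, 34, 54)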
Output Formats
Not all input formats need to be supported as output formats, because it’s
pretty trivial to get the strftime() argument right for simple things
such as YYYY/MM/DD. Only complicated formats need to be supported; RFC 2822
is currently the only one I can think of.
Options:
Provide predefined format strings, so you could write this:
import datetime
d = datetime.datetime(...)
print d.strftime(d.RFC2822_FORMAT) # or datetime.RFC2822_FORMAT?
Provide new methods on all the objects:
d = datetime.datetime(...)
print d.rfc822_time()
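For comparison, a sketch of what the first option amounts to with an
explicit format string; RFC2822_FORMAT is a hypothetical constant, the
example assumes a UTC timestamp and an English locale, and the standard
library's email.utils.formatdate() already covers this particular
format:
import datetime

RFC2822_FORMAT = "%a, %d %b %Y %H:%M:%S +0000"    # assumes UTC

d = datetime.datetime(2003, 9, 15, 10, 34, 54)
d.strftime(RFC2822_FORMAT)    # -> 'Mon, 15 Sep 2003 10:34:54 +0000'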
Relevant functionality in other languages includes the PHP date
function (Python implementation by Simon Willison at
http://simon.incutio.com/archive/2003/10/07/dateInPython)
References
Other useful links:
http://www.egenix.com/files/python/mxDateTime.html
http://ringmaster.arc.nasa.gov/tools/time_formats.html
http://www.thinkage.ca/english/gcos/expl/b/lib/0tosec.html
https://moin.conectiva.com.br/DateUtil
Copyright
This document has been placed in the public domain.
| Withdrawn | PEP 321 – Date/Time Parsing and Formatting | Standards Track | Python 2.3 added a number of simple date and time types in the
datetime module. There’s no support for parsing strings in various
formats and returning a corresponding instance of one of the types.
This PEP proposes adding a family of predefined parsing functions for
several commonly used date and time formats, and a facility for generic
parsing. |
PEP 322 – Reverse Iteration
Author:
Raymond Hettinger <python at rcn.com>
Status:
Final
Type:
Standards Track
Created:
24-Sep-2003
Python-Version:
2.4
Post-History:
24-Sep-2003
Table of Contents
Abstract
Motivation
Proposal
BDFL Pronouncement
Alternative Method Names
Discussion
Real World Use Cases
Rejected Alternatives
Copyright
Abstract
This proposal is to add a builtin function to support reverse
iteration over sequences.
Motivation
For indexable objects, current approaches for reverse iteration are
error prone, unnatural, and not especially readable:
for i in xrange(n-1, -1, -1):
print seqn[i]
One other current approach involves reversing a list before iterating
over it. That technique wastes computer cycles, memory, and lines of
code:
rseqn = list(seqn)
rseqn.reverse()
for value in rseqn:
print value
Extended slicing is a third approach that minimizes the code overhead
but does nothing for memory efficiency, beauty, or clarity.
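For completeness, the extended-slicing variant looks like this (in the
Python 2 style of the surrounding examples); it is the shortest of the
three but still materializes a full reversed copy of the sequence:
seqn = [1, 2, 3]
for value in seqn[::-1]:
    print value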
Reverse iteration is much less common than forward iteration, but it
does arise regularly in practice. See Real World Use Cases below.
Proposal
Add a builtin function called reversed() that makes a reverse
iterator over sequence objects that support __getitem__() and
__len__().
The above examples then simplify to:
for i in reversed(xrange(n)):
print seqn[i]
for elem in reversed(seqn):
print elem
The core idea is that the clearest, least error-prone way of specifying
reverse iteration is to specify it in a forward direction and then say
reversed.
The implementation could be as simple as:
def reversed(x):
if hasattr(x, 'keys'):
raise ValueError("mappings do not support reverse iteration")
i = len(x)
while i > 0:
i -= 1
yield x[i]
No language syntax changes are needed. The proposal is fully backwards
compatible.
A C implementation and unit tests are at: https://bugs.python.org/issue834422
BDFL Pronouncement
This PEP has been conditionally accepted for Py2.4. The condition means
that if the function is found to be useless, it can be removed before
Py2.4b1.
Alternative Method Names
reviter – Jeremy Fincher’s suggestion matches use of iter()
ireverse – uses the itertools naming convention
inreverse – no one seems to like this one except me
The name reverse is not a candidate because it duplicates the name
of the list.reverse() which mutates the underlying list.
Discussion
The case against adoption of the PEP is a desire to keep the number of
builtin functions small. This needs to be weighed against the simplicity
and convenience of having it as a builtin instead of being tucked away in
some other namespace.
Real World Use Cases
Here are some instances of reverse iteration taken from the standard
library and comments on why reverse iteration was necessary:
atexit.exit_handlers() uses:
while _exithandlers:
func, targs, kargs = _exithandlers.pop()
. . .
In this application popping is required, so the new function would
not help.
heapq.heapify() uses for i in xrange(n//2 - 1, -1, -1) because
higher-level orderings are more easily formed from pairs of
lower-level orderings. A forward version of this algorithm is
possible; however, that would complicate the rest of the heap code
which iterates over the underlying list in the opposite direction.
The replacement code for i in reversed(xrange(n//2)) makes
clear the range covered and how many iterations it takes.
mhlib.test() uses:
testfolders.reverse();
for t in testfolders:
do('mh.deletefolder(%s)' % `t`)
The need for reverse iteration arises because the tail of the
underlying list is altered during iteration.
platform._dist_try_harder() uses
for n in range(len(verfiles)-1,-1,-1) because the loop deletes
selected elements from verfiles but needs to leave the rest of
the list intact for further iteration.
random.shuffle() uses for i in xrange(len(x)-1, 0, -1) because
the algorithm is most easily understood as randomly selecting
elements from an ever diminishing pool. In fact, the algorithm can
be run in a forward direction but is less intuitive and rarely
presented that way in literature. The replacement code
for i in reversed(xrange(1, len(x))) is much easier
to verify visually.
rfc822.Message.__delitem__() uses:
list.reverse()
for i in list:
del self.headers[i]
The need for reverse iteration arises because the tail of the
underlying list is altered during iteration.
Rejected Alternatives
Several variants were submitted that attempted to apply reversed()
to all iterables by running the iterable to completion, saving the
results, and then returning a reverse iterator over the results.
While satisfying some notions of full generality, running the input
to the end is contrary to the purpose of using iterators
in the first place. Also, a small disaster ensues if the underlying
iterator is infinite.
Putting the function in another module or attaching it to a type object
is not being considered. Like its cousins, zip() and enumerate(),
the function needs to be directly accessible in daily programming. Each
solves a basic looping problem: lock-step iteration, loop counting, and
reverse iteration. Requiring some form of dotted access would interfere
with their simplicity, daily utility, and accessibility. They are core
looping constructs, independent of any one application domain.
Copyright
This document has been placed in the public domain.
| Final | PEP 322 – Reverse Iteration | Standards Track | This proposal is to add a builtin function to support reverse
iteration over sequences. |
PEP 323 – Copyable Iterators
Author:
Alex Martelli <aleaxit at gmail.com>
Status:
Deferred
Type:
Standards Track
Created:
25-Oct-2003
Python-Version:
2.5
Post-History:
29-Oct-2003
Table of Contents
Deferral
Abstract
Update and Comments
Motivation
Specification
Details
Rationale
References
Copyright
Deferral
This PEP has been deferred. Copyable iterators are a nice idea, but after
four years, no implementation or widespread interest has emerged.
Abstract
This PEP suggests that some iterator types should support shallow
copies of their instances by exposing a __copy__ method which meets
some specific requirements, and indicates how code using an iterator
might exploit such a __copy__ method when present.
Update and Comments
Support for __copy__ was included in Py2.4’s itertools.tee().
Adding __copy__ methods to existing iterators will change the
behavior under tee(). Currently, the copied iterators remain
tied to the original iterator. If the original advances, then
so do all of the copies. Good practice is to overwrite the
original so that anomalies don’t result: a,b=tee(a).
Code that doesn’t follow that practice may observe a semantic
change if a __copy__ method is added to an iterator.
Motivation
In Python up to 2.3, most built-in iterator types don’t let the user
copy their instances. User-coded iterators that do let their clients
call copy.copy on their instances may, or may not, happen to return,
as a result of the copy, a separate iterator object that may be
iterated upon independently from the original.
Currently, “support” for copy.copy in a user-coded iterator type is
almost invariably “accidental” – i.e., the standard machinery of the
copy method in Python’s standard library’s copy module does build and
return a copy. However, the copy will be independently iterable with
respect to the original only if calling .next() on an instance of that
class happens to change instance state solely by rebinding some
attributes to new values, and not by mutating some attributes’
existing values.
For example, an iterator whose “index” state is held as an integer
attribute will probably give usable copies, since (integers being
immutable) .next() presumably just rebinds that attribute. On the
other hand, another iterator whose “index” state is held as a list
attribute will probably mutate the same list object when .next()
executes, and therefore copies of such an iterator will not be
iterable separately and independently from the original.
Given this existing situation, copy.copy(it) on some iterator object
isn’t very useful, nor, therefore, is it at all widely used. However,
there are many cases in which being able to get a “snapshot” of an
iterator, as a “bookmark”, so as to be able to keep iterating along
the sequence but later iterate again on the same sequence from the
bookmark onwards, is useful. To support such “bookmarking”, module
itertools, in 2.4, has grown a ‘tee’ function, to be used as:
it, bookmark = itertools.tee(it)
The previous value of ‘it’ must not be used again, which is why this
typical usage idiom rebinds the name. After this call, ‘it’ and
‘bookmark’ are independently-iterable iterators on the same underlying
sequence as the original value of ‘it’: this satisfies application
needs for “iterator copying”.
However, when itertools.tee can make no hypotheses about the nature of
the iterator it is passed as an argument, it must save in memory all
items through which one of the two ‘teed’ iterators, but not yet both,
have stepped. This can be quite costly in terms of memory, if the two
iterators get very far from each other in their stepping; indeed, in
some cases it may be preferable to make a list from the iterator so as
to be able to step repeatedly through the subsequence, or, if that is
too costly in terms of memory, save items to disk, again in order to be
able to iterate through them repeatedly.
This PEP proposes another idea that will, in some important cases,
allow itertools.tee to do its job with minimal cost in terms of
memory; user code may also occasionally be able to exploit the idea in
order to decide whether to copy an iterator, make a list from it, or
use an auxiliary disk file.
The key consideration is that some important iterators, such as those
which built-in function iter builds over sequences, would be
intrinsically easy to copy: just get another reference to the same
sequence, and a copy of the integer index. However, in Python 2.3,
those iterators don’t expose the state, and don’t support copy.copy.
The purpose of this PEP, therefore, is to have those iterator types
expose a suitable __copy__ method. Similarly, user-coded iterator
types that can provide copies of their instances, suitable for
separate and independent iteration, with limited costs in time and
space, should also expose a suitable __copy__ method. While
copy.copy also supports other ways to let a type control the way
its instances are copied, it is suggested, for simplicity, that
iterator types that support copying always do so by exposing a
__copy__ method, and not in the other ways copy.copy supports.
Having iterators expose a suitable __copy__ when feasible will afford
easy optimization of itertools.tee and similar user code, as in:
def tee(it):
it = iter(it)
try: copier = it.__copy__
except AttributeError:
# non-copyable iterator, do all the needed hard work
# [snipped!]
else:
return it, copier()
Note that this function does NOT call “copy.copy(it)”, which (even
after this PEP is implemented) might well still “just happen to
succeed” for some iterator type that is implemented as a user-coded
class, without really supplying an adequate “independently iterable”
copy object as its result.
Specification
Any iterator type X may expose a method __copy__ that is callable
without arguments on any instance x of X. The method should be
exposed if and only if the iterator type can provide copyability with
reasonably little computational and memory effort. Furthermore, the
new object y returned by method __copy__ should be a new instance
of X that is iterable independently and separately from x, stepping
along the same “underlying sequence” of items.
For example, suppose a class Iter essentially duplicated the
functionality of the iter builtin for iterating on a sequence:
class Iter(object):
def __init__(self, sequence):
self.sequence = sequence
self.index = 0
def __iter__(self):
return self
def next(self):
try: result = self.sequence[self.index]
except IndexError: raise StopIteration
self.index += 1
return result
To make this Iter class compliant with this PEP, the following
addition to the body of class Iter would suffice:
def __copy__(self):
result = self.__class__(self.sequence)
result.index = self.index
return result
Note that __copy__, in this case, does not even try to copy the
sequence; if the sequence is altered while either or both of the
original and copied iterators are still stepping on it, the iteration
behavior is quite likely to go awry anyway – it is not __copy__’s
responsibility to change this normal Python behavior for iterators
which iterate on mutable sequences (that might, perhaps, be the
specification for a __deepcopy__ method of iterators, which, however,
this PEP does not deal with).
Consider also a “random iterator”, which provides a nonterminating
sequence of results from some method of a random instance, called
with given arguments:
class RandomIterator(object):
def __init__(self, bound_method, *args):
self.call = bound_method
self.args = args
def __iter__(self):
return self
def next(self):
return self.call(*self.args)
def __copy__(self):
import copy, new
im_self = copy.copy(self.call.im_self)
method = new.instancemethod(self.call.im_func, im_self)
return self.__class__(method, *self.args)
This iterator type is slightly more general than its name implies, as
it supports calls to any bound method (or other callable, but if the
callable is not a bound method, then method __copy__ will fail). But
the use case is for the purpose of generating random streams, as in:
import random
def show5(it):
for i, result in enumerate(it):
print '%6.3f'%result,
if i==4: break
print
normit = RandomIterator(random.Random().gauss, 0, 1)
show5(normit)
copit = normit.__copy__()
show5(normit)
show5(copit)
which will display some output such as:
-0.536 1.936 -1.182 -1.690 -1.184
0.666 -0.701 1.214 0.348 1.373
0.666 -0.701 1.214 0.348 1.373
the key point being that the second and third lines are equal, because
the normit and copit iterators will step along the same “underlying
sequence”. (As an aside, note that to get a copy of self.call.im_self
we must use copy.copy, NOT try getting at a __copy__ method directly,
because for example instances of random.Random support copying via
__getstate__ and __setstate__, NOT via __copy__; indeed, using
copy.copy is the normal way to get a shallow copy of any object –
copyable iterators are different because of the already-mentioned
uncertainty about the result of copy.copy supporting these “copyable
iterator” specs).
Details
Besides adding to the Python docs a recommendation that user-coded
iterator types support a __copy__ method (if and only if it can be
implemented with small costs in memory and runtime, and produce an
independently-iterable copy of an iterator object), this PEP’s
implementation will specifically include the addition of copyability
to the iterators over sequences that built-in iter returns, and also
to the iterators over a dictionary returned by the methods __iter__,
iterkeys, itervalues, and iteritems of built-in type dict.
Iterators produced by generator functions will not be copyable.
However, iterators produced by the new “generator expressions” of
Python 2.4 (PEP 289) should be copyable if their underlying
iterator[s] are; the strict limitations on what is possible in a
generator expression, compared to the much vaster generality of a
generator, should make that feasible. Similarly, the iterators
produced by the built-in function enumerate, and certain functions
supplied by module itertools, should be copyable if the underlying
iterators are.
The implementation of this PEP will also include the optimization of
the new itertools.tee function mentioned in the Motivation section.
Rationale
The main use case for (shallow) copying of an iterator is the same as
for the function itertools.tee (new in 2.4). User code will not
directly attempt to copy an iterator, because it would have to deal
separately with uncopyable cases; calling itertools.tee will
internally perform the copy when appropriate, and implicitly fallback
to a maximally efficient non-copying strategy for iterators that are
not copyable. (Occasionally, user code may want more direct control,
specifically in order to deal with non-copyable iterators by other
strategies, such as making a list or saving the sequence to disk).
A tee’d iterator may serve as a “reference point”, allowing processing
of a sequence to continue or resume from a known point, while the
other independent iterator can be freely advanced to “explore” a
further part of the sequence as needed. A simple example: a generator
function which, given an iterator of numbers (assumed to be positive),
returns a corresponding iterator, each of whose items is the fraction
of the total corresponding to each corresponding item of the input
iterator. The caller may pass the total as a value, if known in
advance; otherwise, the iterator returned by calling this generator
function will first compute the total.
def fractions(numbers, total=None):
if total is None:
numbers, aux = itertools.tee(numbers)
total = sum(aux)
total = float(total)
for item in numbers:
yield item / total
The ability to tee the numbers iterator allows this generator to
precompute the total, if needed, without necessarily requiring
O(N) auxiliary memory if the numbers iterator is copyable.
As another example of “iterator bookmarking”, consider a stream of
numbers with an occasional string as a “postfix operator” now and
then. By far most frequent such operator is a ‘+’, whereupon we must
sum all previous numbers (since the last previous operator if any, or
else since the start) and yield the result. Sometimes we find a ‘*’
instead, which is the same except that the previous numbers must
instead be multiplied, not summed.
def filter_weird_stream(stream):
it = iter(stream)
while True:
it, bookmark = itertools.tee(it)
total = 0
for item in it:
if item=='+':
yield total
break
elif item=='*':
product = 1
for item in bookmark:
if item=='*':
yield product
break
else:
product *= item
else:
total += item
Similar use cases of itertools.tee can support such tasks as
“undo” on a stream of commands represented by an iterator,
“backtracking” on the parse of a stream of tokens, and so on.
(Of course, in each case, one should also consider simpler
possibilities such as saving relevant portions of the sequence
into lists while stepping on the sequence with just one iterator,
depending on the details of one’s task).
Here is an example, in pure Python, of how the ‘enumerate’
built-in could be extended to support __copy__ if its underlying
iterator also supported __copy__:
class enumerate(object):
def __init__(self, it):
self.it = iter(it)
self.i = -1
def __iter__(self):
return self
def next(self):
self.i += 1
return self.i, self.it.next()
def __copy__(self):
result = self.__class__.__new__(self.__class__)
result.it = self.it.__copy__()
result.i = self.i
return result
Here is an example of the kind of “fragility” produced by “accidental
copyability” of an iterator – the reason why one must NOT use
copy.copy expecting, if it succeeds, to receive as a result an
iterator which is iterable-on independently from the original. Here
is an iterator class that iterates (in preorder) on “trees” which, for
simplicity, are just nested lists – any item that’s a list is treated
as a subtree, any other item as a leaf.
class ListreeIter(object):
def __init__(self, tree):
self.tree = [tree]
self.indx = [-1]
def __iter__(self):
return self
def next(self):
if not self.indx:
raise StopIteration
self.indx[-1] += 1
try:
result = self.tree[-1][self.indx[-1]]
except IndexError:
self.tree.pop()
self.indx.pop()
return self.next()
if type(result) is not list:
return result
self.tree.append(result)
self.indx.append(-1)
return self.next()
Now, for example, the following code:
import copy
x = [ [1,2,3], [4, 5, [6, 7, 8], 9], 10, 11, [12] ]
print 'showing all items:',
it = ListreeIter(x)
for i in it:
print i,
if i==6: cop = copy.copy(it)
print
print 'showing items >6 again:'
for i in cop: print i,
print
does NOT work as intended – the “cop” iterator gets consumed, and
exhausted, step by step as the original “it” iterator is, because
the accidental (rather than deliberate) copying performed by
copy.copy shares, rather than duplicating the “index” list, which
is the mutable attribute it.indx (a list of numerical indices).
Thus, this “client code” of the iterator, which attempts to iterate
twice over a portion of the sequence via a copy.copy on the
iterator, is NOT correct.
Some correct solutions include using itertools.tee, i.e., changing
the first for loop into:
for i in it:
print i,
if i==6:
it, cop = itertools.tee(it)
break
for i in it: print i,
(note that we MUST break the loop in two, otherwise we’d still
be looping on the ORIGINAL value of it, which must NOT be used
further after the call to tee!!!); or making a list, i.e.
for i in it:
print i,
if i==6:
cop = lit = list(it)
break
for i in lit: print i,
(again, the loop must be broken in two, since iterator ‘it’
gets exhausted by the call list(it)).
Finally, all of these solutions would work if ListreeIter supplied
a suitable __copy__ method, as this PEP recommends:
def __copy__(self):
result = self.__class__.__new__(self.__class__)
result.tree = copy.copy(self.tree)
result.indx = copy.copy(self.indx)
return result
There is no need to get any “deeper” in the copy, but the two
mutable “index state” attributes must indeed be copied in order
to achieve a “proper” (independently iterable) iterator-copy.
The recommended solution is to have class ListreeIter supply this
__copy__ method AND have client code use itertools.tee (with
the split-in-two-parts loop as shown above). This will make
client code maximally tolerant of different iterator types it
might be using AND achieve good performance for tee’ing of this
specific iterator type at the same time.
References
[1] Discussion on python-dev starting at post:
https://mail.python.org/pipermail/python-dev/2003-October/038969.html
[2] Online documentation for the copy module of the standard library:
https://docs.python.org/release/2.6/library/copy.html
Copyright
This document has been placed in the public domain.
| Deferred | PEP 323 – Copyable Iterators | Standards Track | This PEP suggests that some iterator types should support shallow
copies of their instances by exposing a __copy__ method which meets
some specific requirements, and indicates how code using an iterator
might exploit such a __copy__ method when present. |
PEP 324 – subprocess - New process module
Author:
Peter Astrand <astrand at lysator.liu.se>
Status:
Final
Type:
Standards Track
Created:
19-Nov-2003
Python-Version:
2.4
Post-History:
Table of Contents
Abstract
Motivation
Rationale
Specification
Exceptions
Security
Popen objects
Replacing older functions with the subprocess module
Replacing /bin/sh shell backquote
Replacing shell pipe line
Replacing os.system()
Replacing os.spawn*
Replacing os.popen*
Replacing popen2.*
Open Issues
Backwards Compatibility
Reference Implementation
References
Copyright
Abstract
This PEP describes a new module for starting and communicating
with processes.
Motivation
Starting new processes is a common task in any programming
language, and very common in a high-level language like Python.
Good support for this task is needed, because:
Inappropriate functions for starting processes could mean a
security risk: If the program is started through the shell, and
the arguments contain shell meta characters, the result can be
disastrous. [1]
It makes Python an even better replacement language for
over-complicated shell scripts.
Currently, Python has a large number of different functions for
process creation. This makes it hard for developers to choose.
The subprocess module provides the following enhancements over
previous functions:
One “unified” module provides all functionality from previous
functions.
Cross-process exceptions: Exceptions happening in the child
before the new process has started to execute are re-raised in
the parent. This means that it’s easy to handle exec()
failures, for example. With popen2, for example, it’s
impossible to detect if the execution failed.
A hook for executing custom code between fork and exec. This
can be used for, for example, changing uid.
No implicit call of /bin/sh. This means that there is no need
for escaping dangerous shell meta characters.
All combinations of file descriptor redirection are possible.
For example, the “python-dialog” package [2] needs to spawn a process
and redirect stderr, but not stdout. This is not possible with
current functions without using temporary files.
With the subprocess module, it’s possible to control if all open
file descriptors should be closed before the new program is
executed.
Support for connecting several subprocesses (shell “pipe”).
Universal newline support.
A communicate() method, which makes it easy to send stdin data
and read stdout and stderr data, without risking deadlocks.
Most people are aware of the flow control issues involved with
child process communication, but not all have the patience or
skills to write a fully correct and deadlock-free select loop.
This means that many Python applications contain race
conditions. A communicate() method in the standard library
solves this problem.
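A brief illustration of the communicate() pattern described in the
last point, written in the Python 2 idiom of this PEP; it assumes a
POSIX 'tr' utility is available on the PATH:
from subprocess import Popen, PIPE

p = Popen(["tr", "a-z", "A-Z"], stdin=PIPE, stdout=PIPE, stderr=PIPE)
output, errors = p.communicate("hello subprocess\n")
# output == "HELLO SUBPROCESS\n"; 'errors' holds anything written
# to stderr, and communicate() has already waited for the child.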
Rationale
The following points summarize the design:
subprocess was based on popen2, which is tried-and-tested.
The factory functions in popen2 have been removed, because I
consider the class constructor equally easy to work with.
popen2 contains several factory functions and classes for
different combinations of redirection. subprocess, however,
contains one single class. Since the subprocess module supports
12 different combinations of redirection, providing a class or
function for each of them would be cumbersome and not very
intuitive. Even with popen2, this is a readability problem.
For example, many people cannot tell the difference between
popen2.popen2 and popen2.popen4 without using the documentation.
One small utility function is provided: subprocess.call(). It
aims to be an enhancement over os.system(), while still very
easy to use:
It does not use the Standard C function system(), which has
limitations.
It does not call the shell implicitly.
No need for quoting; using an argument list.
The return value is easier to work with.
The call() utility function accepts an ‘args’ argument, just
like the Popen class constructor. It waits for the command to
complete, then returns the returncode attribute. The
implementation is very simple:
def call(*args, **kwargs):
return Popen(*args, **kwargs).wait()
The motivation behind the call() function is simple: Starting a
process and waiting for it to finish is a common task.
While Popen supports a wide range of options, many users have
simple needs. Many people are using os.system() today, mainly
because it provides a simple interface. Consider this example:
os.system("stty sane -F " + device)
With subprocess.call(), this would look like:
subprocess.call(["stty", "sane", "-F", device])
or, if executing through the shell:
subprocess.call("stty sane -F " + device, shell=True)
The “preexec” functionality makes it possible to run arbitrary
code between fork and exec. One might ask why there are special
arguments for setting the environment and current directory, but
not for, for example, setting the uid. The answer is:
Changing environment and working directory is considered
fairly common.
Old functions like spawn() have support for an
“env”-argument.
env and cwd are considered quite cross-platform: They make
sense even on Windows.
On POSIX platforms, no extension module is required: the module
uses os.fork(), os.execvp() etc.
On Windows platforms, the module requires either Mark Hammond’s
Windows extensions [5], or a small extension module called
_subprocess.
Specification
This module defines one class called Popen:
class Popen(args, bufsize=0, executable=None,
stdin=None, stdout=None, stderr=None,
preexec_fn=None, close_fds=False, shell=False,
cwd=None, env=None, universal_newlines=False,
startupinfo=None, creationflags=0):
Arguments are:
args should be a string, or a sequence of program arguments.
The program to execute is normally the first item in the args
sequence or string, but can be explicitly set by using the
executable argument.
On UNIX, with shell=False (default): In this case, the Popen
class uses os.execvp() to execute the child program. args
should normally be a sequence. A string will be treated as a
sequence with the string as the only item (the program to
execute).
On UNIX, with shell=True: If args is a string, it specifies the
command string to execute through the shell. If args is a
sequence, the first item specifies the command string, and any
additional items will be treated as additional shell arguments.
On Windows: the Popen class uses CreateProcess() to execute the
child program, which operates on strings. If args is a
sequence, it will be converted to a string using the
list2cmdline method. Please note that not all MS Windows
applications interpret the command line the same way: The
list2cmdline is designed for applications using the same rules
as the MS C runtime.
bufsize, if given, has the same meaning as the corresponding
argument to the built-in open() function: 0 means unbuffered, 1
means line buffered, any other positive value means use a buffer
of (approximately) that size. A negative bufsize means to use
the system default, which usually means fully buffered. The
default value for bufsize is 0 (unbuffered).
stdin, stdout and stderr specify the executed programs’ standard
input, standard output and standard error file handles,
respectively. Valid values are PIPE, an existing file
descriptor (a positive integer), an existing file object, and
None. PIPE indicates that a new pipe to the child should be
created. With None, no redirection will occur; the child’s file
handles will be inherited from the parent. Additionally, stderr
can be STDOUT, which indicates that the stderr data from the
applications should be captured into the same file handle as for
stdout.
If preexec_fn is set to a callable object, this object will be
called in the child process just before the child is executed.
If close_fds is true, all file descriptors except 0, 1 and 2
will be closed before the child process is executed.
If shell is true, the specified command will be executed through
the shell.
If cwd is not None, the current directory will be changed to cwd
before the child is executed.
If env is not None, it defines the environment variables for the
new process.
If universal_newlines is true, the file objects stdout and
stderr are opened as a text file, but lines may be terminated
by any of \n, the Unix end-of-line convention, \r, the
Macintosh convention or \r\n, the Windows convention. All of
these external representations are seen as \n by the Python
program. Note: This feature is only available if Python is
built with universal newline support (the default). Also, the
newlines attribute of the file objects stdout, stdin and stderr
are not updated by the communicate() method.
The startupinfo and creationflags, if given, will be passed to
the underlying CreateProcess() function. They can specify
things such as appearance of the main window and priority for
the new process. (Windows only)
This module also defines two shortcut functions:
call(*args, **kwargs):
Run command with arguments. Wait for command to complete,
then return the returncode attribute.
The arguments are the same as for the Popen constructor.
Example:
retcode = call(["ls", "-l"])
Exceptions
Exceptions raised in the child process, before the new program has
started to execute, will be re-raised in the parent.
Additionally, the exception object will have one extra attribute
called ‘child_traceback’, which is a string containing traceback
information from the child’s point of view.
The most common exception raised is OSError. This occurs, for
example, when trying to execute a non-existent file. Applications
should prepare for OSErrors.
A ValueError will be raised if Popen is called with invalid
arguments.
Security
Unlike some other popen functions, this implementation will never
call /bin/sh implicitly. This means that all characters,
including shell meta-characters, can safely be passed to child
processes.
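For example (a sketch; the file name is deliberately hostile-looking),
the list form keeps shell metacharacters inert because no shell is ever
involved:
from subprocess import call

filename = "some file; rm -rf ~"          # metacharacters, but harmless
retcode = call(["ls", "-l", filename])    # passed verbatim to ls; at worst
                                          # ls reports a missing file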
Popen objects
Instances of the Popen class have the following methods:
poll()
Check if child process has terminated. Returns the returncode
attribute.
wait()
Wait for child process to terminate. Returns the returncode
attribute.
communicate(input=None)
Interact with process: Send data to stdin. Read data from
stdout and stderr, until end-of-file is reached. Wait for
process to terminate. The optional input argument should be a
string to be sent to the child process, or None, if no data
should be sent to the child.
communicate() returns a tuple (stdout, stderr).
Note: The data read is buffered in memory, so do not use this
method if the data size is large or unlimited.
The following attributes are also available:
stdin
If the stdin argument is PIPE, this attribute is a file object
that provides input to the child process. Otherwise, it is
None.
stdout
If the stdout argument is PIPE, this attribute is a file
object that provides output from the child process.
Otherwise, it is None.
stderr
If the stderr argument is PIPE, this attribute is a file object
that provides error output from the child process. Otherwise,
it is None.
pid
The process ID of the child process.
returncode
The child return code. A None value indicates that the
process hasn’t terminated yet. A negative value -N indicates
that the child was terminated by signal N (UNIX only).
Replacing older functions with the subprocess module
In this section, “a ==> b” means that b can be used as a
replacement for a.
Note: All the functions being replaced in this section fail (more or
less) silently if the executed program cannot be found; the
subprocess module raises an OSError exception instead.
In the following examples, we assume that the subprocess module is
imported with from subprocess import *.
Replacing /bin/sh shell backquote
output=`mycmd myarg`
==>
output = Popen(["mycmd", "myarg"], stdout=PIPE).communicate()[0]
Replacing shell pipe line
output=`dmesg | grep hda`
==>
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
output = p2.communicate()[0]
Replacing os.system()
sts = os.system("mycmd" + " myarg")
==>
p = Popen("mycmd" + " myarg", shell=True)
sts = os.waitpid(p.pid, 0)
Note:
Calling the program through the shell is usually not required.
It’s easier to look at the returncode attribute than the
exit status.
A more real-world example would look like this:
try:
retcode = call("mycmd" + " myarg", shell=True)
if retcode < 0:
print >>sys.stderr, "Child was terminated by signal", -retcode
else:
print >>sys.stderr, "Child returned", retcode
except OSError, e:
print >>sys.stderr, "Execution failed:", e
Replacing os.spawn*
P_NOWAIT example:
pid = os.spawnlp(os.P_NOWAIT, "/bin/mycmd", "mycmd", "myarg")
==>
pid = Popen(["/bin/mycmd", "myarg"]).pid
P_WAIT example:
retcode = os.spawnlp(os.P_WAIT, "/bin/mycmd", "mycmd", "myarg")
==>
retcode = call(["/bin/mycmd", "myarg"])
Vector example:
os.spawnvp(os.P_NOWAIT, path, args)
==>
Popen([path] + args[1:])
Environment example:
os.spawnlpe(os.P_NOWAIT, "/bin/mycmd", "mycmd", "myarg", env)
==>
Popen(["/bin/mycmd", "myarg"], env={"PATH": "/usr/bin"})
Replacing os.popen*
pipe = os.popen(cmd, mode='r', bufsize)
==>
pipe = Popen(cmd, shell=True, bufsize=bufsize, stdout=PIPE).stdout
pipe = os.popen(cmd, mode='w', bufsize)
==>
pipe = Popen(cmd, shell=True, bufsize=bufsize, stdin=PIPE).stdin
(child_stdin, child_stdout) = os.popen2(cmd, mode, bufsize)
==>
p = Popen(cmd, shell=True, bufsize=bufsize,
stdin=PIPE, stdout=PIPE, close_fds=True)
(child_stdin, child_stdout) = (p.stdin, p.stdout)
(child_stdin,
child_stdout,
child_stderr) = os.popen3(cmd, mode, bufsize)
==>
p = Popen(cmd, shell=True, bufsize=bufsize,
stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True)
(child_stdin,
child_stdout,
child_stderr) = (p.stdin, p.stdout, p.stderr)
(child_stdin, child_stdout_and_stderr) = os.popen4(cmd, mode, bufsize)
==>
p = Popen(cmd, shell=True, bufsize=bufsize,
stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True)
(child_stdin, child_stdout_and_stderr) = (p.stdin, p.stdout)
Replacing popen2.*
Note: If the cmd argument to popen2 functions is a string, the
command is executed through /bin/sh. If it is a list, the command
is directly executed.
(child_stdout, child_stdin) = popen2.popen2("somestring", bufsize, mode)
==>
p = Popen(["somestring"], shell=True, bufsize=bufsize,
stdin=PIPE, stdout=PIPE, close_fds=True)
(child_stdout, child_stdin) = (p.stdout, p.stdin)
(child_stdout, child_stdin) = popen2.popen2(["mycmd", "myarg"], bufsize, mode)
==>
p = Popen(["mycmd", "myarg"], bufsize=bufsize,
stdin=PIPE, stdout=PIPE, close_fds=True)
(child_stdout, child_stdin) = (p.stdout, p.stdin)
The popen2.Popen3 and popen2.Popen4 classes basically work as
subprocess.Popen, except that:
subprocess.Popen raises an exception if the execution fails
the capturestderr argument is replaced with the stderr argument.
stdin=PIPE and stdout=PIPE must be specified.
popen2 closes all file descriptors by default, but you have to
specify close_fds=True with subprocess.Popen.
Open Issues
Some features have been requested but are not yet implemented.
These include:
Support for managing a whole flock of subprocesses
Support for managing “daemon” processes
Built-in method for killing subprocesses
While these are useful features, it’s expected that these can be
added later without problems.
expect-like functionality, including pty support.
pty support is highly platform-dependent, which is a
problem. Also, there are already other modules that provide this
kind of functionality [6].
Backwards Compatibility
Since this is a new module, no major backward compatible issues
are expected. The module name “subprocess” might collide with
other, previous modules [3] with the same name, but the name
“subprocess” seems to be the best suggested name so far. The
first name of this module was “popen5”, but this name was
considered too unintuitive. For a while, the module was called
“process”, but this name is already used by Trent Mick’s
module [4].
The functions and modules that this new module is trying to
replace (os.system, os.spawn*, os.popen*, popen2.*,
commands.*) are expected to be available in future Python versions
for a long time, to preserve backwards compatibility.
Reference Implementation
A reference implementation is available from
http://www.lysator.liu.se/~astrand/popen5/.
References
[1]
Secure Programming for Linux and Unix HOWTO, section 8.3.
http://www.dwheeler.com/secure-programs/
[2]
Python Dialog
http://pythondialog.sourceforge.net/
[3]
http://www.iol.ie/~padraiga/libs/subProcess.py
[4]
http://starship.python.net/crew/tmick/
[5]
http://starship.python.net/crew/mhammond/win32/
[6]
http://www.lysator.liu.se/~ceder/pcl-expect/
Copyright
This document has been placed in the public domain.
| Final | PEP 324 – subprocess - New process module | Standards Track | This PEP describes a new module for starting and communicating
with processes. |
PEP 325 – Resource-Release Support for Generators
Author:
Samuele Pedroni <pedronis at python.org>
Status:
Rejected
Type:
Standards Track
Created:
25-Aug-2003
Python-Version:
2.4
Post-History:
Table of Contents
Abstract
Pronouncement
Rationale
Possible Semantics
Remarks
Open Issues
Alternative Ideas
Copyright
Abstract
Generators allow for natural coding and abstraction of traversal
over data. Currently if external resources needing proper timely
release are involved, generators are unfortunately not adequate.
The typical idiom for timely release is not supported, a yield
statement is not allowed in the try clause of a try-finally
statement inside a generator. The finally clause execution can be
neither guaranteed nor enforced.
This PEP proposes that the built-in generator type implement a
close method and destruction semantics, such that the restriction
on yield placement can be lifted, expanding the applicability of
generators.
Pronouncement
Rejected in favor of PEP 342 which includes substantially all of
the requested behavior in a more refined form.
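For readers following along with a current interpreter: PEP 342 later
lifted the yield-inside-try-finally restriction and gave generators a
close() method, so the behaviour requested here can be demonstrated
today with a sketch such as:
def lines():
    try:
        yield "first"
        yield "second"
    finally:
        print("releasing the resource")   # e.g. file.close() would go here

g = lines()
print(next(g))    # "first"
g.close()         # forces the generator's finally clause to run now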
Rationale
Python generators allow for natural coding of many data traversal
scenarios. Their instantiation produces iterators,
i.e. first-class objects abstracting traversal (with all the
advantages of first-classness). In this respect they match in
power and offer some advantages over the approach using iterator
methods taking a (smalltalkish) block. On the other hand, given
current limitations (no yield allowed in a try clause of a
try-finally inside a generator) the latter approach seems better
suited to encapsulating not only traversal but also exception
handling and proper resource acquisition and release.
Let’s consider an example (for simplicity, files in read-mode are
used):
def all_lines(index_path):
for path in file(index_path, "r"):
for line in file(path.strip(), "r"):
yield line
this is short and to the point, but the try-finally for timely
closing of the files cannot be added. (While instead of a path, a
file, whose closing then would be responsibility of the caller,
could be passed in as argument, the same is not applicable for the
files opened depending on the contents of the index).
If we want timely release, we have to sacrifice the simplicity and
directness of the generator-only approach: (e.g.)
class AllLines:
def __init__(self, index_path):
self.index_path = index_path
self.index = None
self.document = None
def __iter__(self):
self.index = file(self.index_path, "r")
for path in self.index:
self.document = file(path.strip(), "r")
for line in self.document:
yield line
self.document.close()
self.document = None
def close(self):
if self.index:
self.index.close()
if self.document:
self.document.close()
to be used as:
all_lines = AllLines("index.txt")
try:
for line in all_lines:
...
finally:
all_lines.close()
The more convoluted solution implementing timely release seems
to offer a precious hint. What we have done is encapsulate our
traversal in an object (iterator) with a close method.
This PEP proposes that generators should grow such a close method
with such semantics that the example could be rewritten as:
# Today this is not valid Python: yield is not allowed between
# try and finally, and generator type instances support no
# close method.
def all_lines(index_path):
index = file(index_path, "r")
try:
for path in index:
document = file(path.strip(), "r")
try:
for line in document:
yield line
finally:
document.close()
finally:
index.close()
all = all_lines("index.txt")
try:
for line in all:
...
finally:
all.close() # close on generator
Currently PEP 255 disallows yield inside a try clause of a
try-finally statement, because the execution of the finally clause
cannot be guaranteed as required by try-finally semantics.
The semantics of the proposed close method should be such that
while the finally clause execution still cannot be guaranteed, it
can be enforced when required. Specifically, the close method
behavior should trigger the execution of the finally clauses
inside the generator, either by forcing a return in the generator
frame or by throwing an exception in it. In situations requiring
timely resource release, close could then be explicitly invoked.
The semantics of generator destruction on the other hand should be
extended in order to implement a best-effort policy for the
general case. Specifically, destruction should invoke close().
The best-effort limitation comes from the fact that the
destructor’s execution is not guaranteed in the first place.
This seems to be a reasonable compromise, the resulting global
behavior being similar to that of files and closing.
Possible Semantics
The built-in generator type should have a close method
implemented, which can then be invoked as:
gen.close()
where gen is an instance of the built-in generator type.
Generator destruction should also invoke close method behavior.
If a generator is already terminated, close should be a no-op.
Otherwise, there are two alternative solutions, Return or
Exception Semantics:
A - Return Semantics: The generator should be resumed, generator
execution should continue as if the instruction at the re-entry
point is a return. Consequently, finally clauses surrounding the
re-entry point would be executed, in the case of a then allowed
try-yield-finally pattern.
Issues: is it important to be able to distinguish forced
termination by close, normal termination, and exception propagation
from generator or generator-called code? In the normal case it
seems not: finally clauses should be there to work the same in all
these cases. Still, this semantics could make such a distinction
hard.
Except clauses, as with a normal return, are not executed; such
clauses in legacy generators expect to be executed for exceptions
raised by the generator or by code called from it. Not executing
them in the close case seems correct.
B - Exception Semantics: The generator should be resumed and
execution should continue as if a special-purpose exception
(e.g. CloseGenerator) has been raised at re-entry point. Close
implementation should consume and not propagate further this
exception.
Issues: should StopIteration be reused for this purpose? Probably
not. We would like close to be a harmless operation for legacy
generators, which could contain code catching StopIteration to
deal with other generators/iterators.
In general, with exception semantics, it is unclear what to do if
the generator does not terminate or we do not receive the special
exception propagated back. Other different exceptions should
probably be propagated, but consider this possible legacy
generator code:
try:
...
yield ...
...
except: # or except Exception:, etc
raise Exception("boom")
If close is invoked with the generator suspended after the yield,
the except clause would catch our special-purpose exception, so we
would get a different exception propagated back. In this case it
ought reasonably to be consumed and ignored, but in general it
should be propagated; separating these scenarios seems hard.
The exception approach has the advantage of letting the generator
distinguish between termination cases and giving it more control.
On the other hand, clear-cut semantics seem harder to define.
Remarks
If this proposal is accepted, it should become common practice to
document whether a generator acquires resources, so that its close
method ought to be called. If a generator is no longer used,
calling close should be harmless.
On the other hand, in the typical scenario the code that
instantiated the generator should call close if required by it.
Generic code dealing with iterators/generators instantiated
elsewhere should typically not be littered with close calls.
The rare case of code that has acquired ownership of, and needs to
deal properly with, all of iterators, generators, and generators
acquiring resources that need timely release is easily solved:
if hasattr(iterator, 'close'):
iterator.close()
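Combined with the try/finally idiom shown earlier, such generic code
might look like the following sketch (make_iterator and process are
placeholder names):

it = make_iterator()
try:
    for item in it:
        process(item)
finally:
    # Only iterators/generators that manage resources grow close().
    if hasattr(it, 'close'):
        it.close()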
Open Issues
Definitive semantics ought to be chosen. Currently Guido favors
Exception Semantics. If the generator yields a value instead of
terminating, or propagating back the special exception, a special
exception should be raised again on the generator side.
It is still unclear whether spuriously converted special
exceptions (as discussed in Possible Semantics) are a problem and
what to do about them.
Implementation issues should be explored.
Alternative Ideas
The idea that the yield placement limitation should be removed and
that generator destruction should trigger execution of finally
clauses has been proposed more than once. Alone it cannot
guarantee that timely release of resources acquired by a generator
can be enforced.
PEP 288 proposes a more general solution, allowing custom
exception passing to generators. The proposal in this PEP
addresses the problem of resource release more directly. Were
PEP 288 implemented, Exception Semantics for close could be layered
on top of it; on the other hand, PEP 288 should make a separate
case for the more general functionality.
Copyright
This document has been placed in the public domain.
| Rejected | PEP 325 – Resource-Release Support for Generators | Standards Track | Generators allow for natural coding and abstraction of traversal
over data. Currently if external resources needing proper timely
release are involved, generators are unfortunately not adequate.
The typical idiom for timely release is not supported, a yield
statement is not allowed in the try clause of a try-finally
statement inside a generator. The finally clause execution can be
neither guaranteed nor enforced. |
PEP 326 – A Case for Top and Bottom Values
Author:
Josiah Carlson <jcarlson at uci.edu>,
Terry Reedy <tjreedy at udel.edu>
Status:
Rejected
Type:
Standards Track
Created:
20-Dec-2003
Python-Version:
2.4
Post-History:
20-Dec-2003, 03-Jan-2004, 05-Jan-2004, 07-Jan-2004,
21-Feb-2004
Table of Contents
Results
Abstract
Rationale
Motivation
Max Examples
A Min Example
Other Examples
Independent Implementations?
Reference Implementation
Open Issues
References
Changes
Copyright
Results
This PEP has been rejected by the BDFL [8]. As per the
pseudo-sunset clause [9], PEP 326 is being updated one last time
with the latest suggestions, code modifications, etc., and includes a
link to a module [10] that implements the behavior described in the
PEP. Users who desire the behavior listed in this PEP are encouraged
to use the module for the reasons listed in
Independent Implementations?.
Abstract
This PEP proposes two singleton constants that represent a top and
bottom [3] value: Max and Min (or two similarly suggestive
names [4]; see Open Issues).
As suggested by their names, Max and Min would compare higher
or lower than any other object (respectively). Such behavior results
in easier to understand code and fewer special cases in which a
temporary minimum or maximum value is required, and an actual minimum
or maximum numeric value is not limited.
Rationale
While None can be used as an absolute minimum that any value can
attain [1], this may be deprecated [4] in Python 3.0 and shouldn’t
be relied upon.
As a replacement for using None as an absolute minimum, and to
introduce an absolute maximum as well, this PEP proposes two
singleton constants Max and Min, whose names address the concern
that such constants be self-documenting.
What is commonly done to deal with absolute minimum or maximum values
is to set a value that is larger than the script author ever expects
the input to reach, and hope that it isn’t reached.
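For instance (a sketch assuming this PEP's Max; costs is any iterable
of comparable values), the usual workaround and the proposed spelling
differ only in the initial bound, but the latter cannot be eclipsed by
any input:

best = 999999999        # "hopefully big enough" sentinel
for cost in costs:
    if cost < best:
        best = cost

best = Max              # a true top value, never eclipsed
for cost in costs:
    if cost < best:
        best = cost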
Guido has brought up [2] the fact that there exist two constants
that can be used in the interim for maximum values: sys.maxint and
floating point positive infinity (1e309 will evaluate to positive
infinity). However, each has its drawbacks.
On most architectures sys.maxint is arbitrarily small (2**31-1 or
2**63-1) and can be easily eclipsed by large ‘long’ integers or
floating point numbers.
Comparing long integers larger than the largest floating point
number representable against any float will result in an exception
being raised:
>>> cmp(1.0, 10**309)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
OverflowError: long int too large to convert to float
Even when large integers are compared against positive infinity:
>>> cmp(1e309, 10**309)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
OverflowError: long int too large to convert to float
These same drawbacks exist when numbers are negative.
Introducing Max and Min that work as described above does not
take much effort. A sample Python reference implementation of both
is included.
Motivation
There are hundreds of algorithms that begin by initializing some set
of values to a logical (or numeric) infinity or negative infinity.
Python lacks an infinity that works consistently or that really is
the most extreme value attainable. By adding Max and Min, Python
would gain a real maximum and minimum value, and such algorithms
could become clearer due to the reduction of special cases.
Max Examples
When testing various kinds of servers, it is sometimes necessary to
only serve a certain number of clients before exiting, which results
in code like the following:
count = 5
def counts(stop):
i = 0
while i < stop:
yield i
i += 1
for client_number in counts(count):
handle_one_client()
When using Max as the value assigned to count, our testing server
becomes a production server with minimal effort.
As another example, consider Dijkstra’s shortest path algorithm on a
graph with weighted edges (all positive):
1. Set distances to every node in the graph to infinity.
2. Set the distance to the start node to zero.
3. Set visited to be an empty mapping.
4. While the shortest distance of a node that has not been visited is
   less than infinity and the destination has not been visited:
   a. Get the node with the shortest distance.
   b. Visit the node.
   c. Update neighbor distances and parent pointers if necessary for
      neighbors that have not been visited.
5. If the destination has been visited, step back through parent
   pointers to find the reverse of the path to be taken.
Below is an example of Dijkstra’s shortest path algorithm on a graph
with weighted edges using a table (a faster version that uses a heap
is available, but this version is offered due to its similarity to the
description above; the heap version is available via older versions of
this document).
def DijkstraSP_table(graph, S, T):
table = {} #3
for node in graph.iterkeys():
#(visited, distance, node, parent)
table[node] = (0, Max, node, None) #1
table[S] = (0, 0, S, None) #2
cur = min(table.values()) #4a
while (not cur[0]) and cur[1] < Max: #4
(visited, distance, node, parent) = cur
table[node] = (1, distance, node, parent) #4b
for cdist, child in graph[node]: #4c
ndist = distance+cdist #|
if not table[child][0] and ndist < table[child][1]:#|
table[child] = (0, ndist, child, node) #|_
cur = min(table.values()) #4a
if not table[T][0]:
return None
cur = T #5
path = [T] #|
while table[cur][3] is not None: #|
path.append(table[cur][3]) #|
cur = path[-1] #|
path.reverse() #|
return path #|_
Readers should note that replacing Max in the above code with an
arbitrarily large number does not guarantee that the shortest path
distance to a node will never exceed that number. Well, with one
caveat: one could certainly sum up the weights of every edge in the
graph, and set the ‘arbitrarily large number’ to that total. However,
doing so does not make the algorithm any easier to understand and has
potential problems with numeric overflows.
Gustavo Niemeyer [7] points out that using a more Pythonic data
structure than tuples, to store information about node distances,
increases readability. Two equivalent node structures (one using
None, the other using Max) and their use in a suitably
modified Dijkstra’s shortest path algorithm are given below.
class SuperNode:
def __init__(self, node, parent, distance, visited):
self.node = node
self.parent = parent
self.distance = distance
self.visited = visited
class MaxNode(SuperNode):
def __init__(self, node, parent=None, distance=Max,
visited=False):
SuperNode.__init__(self, node, parent, distance, visited)
def __cmp__(self, other):
return cmp((self.visited, self.distance),
(other.visited, other.distance))
class NoneNode(SuperNode):
def __init__(self, node, parent=None, distance=None,
visited=False):
SuperNode.__init__(self, node, parent, distance, visited)
def __cmp__(self, other):
pair = ((self.visited, self.distance),
(other.visited, other.distance))
if None in (self.distance, other.distance):
return -cmp(*pair)
return cmp(*pair)
def DijkstraSP_table_node(graph, S, T, Node):
table = {} #3
for node in graph.iterkeys():
table[node] = Node(node) #1
table[S] = Node(S, distance=0) #2
cur = min(table.values()) #4a
sentinel = Node(None).distance
while not cur.visited and cur.distance != sentinel: #4
cur.visited = True #4b
        for cdist, child in graph[cur.node]:             #4c
            ndist = cur.distance+cdist                   #|
            if not table[child].visited and\             #|
               ndist < table[child].distance:            #|
                table[child].distance = ndist            #|
                table[child].parent = cur.node           #|_
cur = min(table.values()) #4a
if not table[T].visited:
return None
cur = T #5
path = [T] #|
while table[cur].parent is not None: #|
path.append(table[cur].parent) #|
cur = path[-1] #|
path.reverse() #|
return path #|_
In the above, passing in either NoneNode or MaxNode would be
sufficient to use either None or Max for the node distance
‘infinity’. Note the additional special case required for None
being used as a sentinel in NoneNode in the __cmp__ method.
This example highlights the special case handling where None is
used as a sentinel value for maximum values “in the wild”, even though
None itself compares smaller than any other object in the standard
distribution.
As an aside, it is not clear to the author that using Nodes as a
replacement for tuples has increased readability significantly, if at
all.
A Min Example
An example of usage for Min is an algorithm that solves the
following problem [5]:
Suppose you are given a directed graph, representing a
communication network. The vertices are the nodes in the network,
and each edge is a communication channel. Each edge (u, v) has
an associated value r(u, v), with 0 <= r(u, v) <= 1, which
represents the reliability of the channel from u to v
(i.e., the probability that the channel from u to v will
not fail). Assume that the reliability probabilities of the
channels are independent. (This implies that the reliability of
any path is the product of the reliability of the edges along the
path.) Now suppose you are given two nodes in the graph, A
and B.
Such an algorithm is a 7 line modification to the DijkstraSP_table
algorithm given above (modified lines prefixed with *):
def DijkstraSP_table(graph, S, T):
table = {} #3
for node in graph.iterkeys():
#(visited, distance, node, parent)
* table[node] = (0, Min, node, None) #1
* table[S] = (0, 1, S, None) #2
* cur = max(table.values()) #4a
* while (not cur[0]) and cur[1] > Min: #4
(visited, distance, node, parent) = cur
table[node] = (1, distance, node, parent) #4b
for cdist, child in graph[node]: #4c
* ndist = distance*cdist #|
* if not table[child][0] and ndist > table[child][1]:#|
table[child] = (0, ndist, child, node) #|_
* cur = max(table.values()) #4a
if not table[T][0]:
return None
cur = T #5
path = [T] #|
while table[cur][3] is not None: #|
path.append(table[cur][3]) #|
cur = path[-1] #|
path.reverse() #|
return path #|_
Note that there is a way of translating the graph so that it can be
passed unchanged into the original DijkstraSP_table algorithm.
There also exist a handful of easy methods for constructing Node
objects that would work with DijkstraSP_table_node. Such
translations are left as an exercise to the reader.
Other Examples
Andrew P. Lentvorski, Jr. [6] has pointed out that various data
structures involving range searching have immediate use for Max
and Min values. More specifically: Segment trees, Range trees,
k-d trees and database keys:
…The issue is that a range can be open on one side and does not
always have an initialized case.
The solutions I have seen are to either overload None as the
extremum or use an arbitrary large magnitude number. Overloading
None means that the built-ins can’t really be used without special
case checks to work around the undefined (or “wrongly defined”)
ordering of None. These checks tend to swamp the nice performance
of built-ins like max() and min().
Choosing a large magnitude number throws away the ability of
Python to cope with arbitrarily large integers and introduces a
potential source of overrun/underrun bugs.
Further use examples of both Max and Min are available in the
realm of graph algorithms, range searching algorithms, computational
geometry algorithms, and others.
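As a small sketch of the range-searching use (assuming this PEP's Max
and Min; in_range is a hypothetical helper), either bound of a query
can simply be left open:

def in_range(value, low=Min, high=Max):
    # True if low <= value <= high; omitted bounds are open-ended.
    return low <= value <= high

in_range(5)                 # True: both bounds open
in_range(5, high=3)         # False
in_range(10**100, high=0)   # False, with no magnitude limit involved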
Independent Implementations?
Independent implementations of the Min/Max concept by users
desiring such functionality are not likely to be compatible, and
certainly will produce inconsistent orderings. The following examples
seek to show how inconsistent they can be.
Let us pretend we have created proper separate implementations of
MyMax, MyMin, YourMax and YourMin with the same code as given in
the sample implementation (with some minor renaming):
>>> lst = [YourMin, MyMin, MyMin, YourMin, MyMax, YourMin, MyMax,
YourMax, MyMax]
>>> lst.sort()
>>> lst
[YourMin, YourMin, MyMin, MyMin, YourMin, MyMax, MyMax, YourMax,
MyMax]
Notice that while all the “Min”s are before the “Max”s, there is no
guarantee that all instances of YourMin will come before MyMin, the
reverse, or the equivalent MyMax and YourMax.
The problem is also evident when using the heapq module:
>>> lst = [YourMin, MyMin, MyMin, YourMin, MyMax, YourMin, MyMax,
YourMax, MyMax]
>>> heapq.heapify(lst) #not needed, but it can't hurt
>>> while lst: print heapq.heappop(lst),
...
YourMin MyMin YourMin YourMin MyMin MyMax MyMax YourMax MyMax
Furthermore, the findmin_Max code and both versions of Dijkstra
could result in incorrect output by passing in secondary versions of
Max.
It has been pointed out [7] that the reference implementation given
below would be incompatible with independent implementations of
Max/Min. The point of this PEP is for the introduction of
“The One True Implementation” of “The One True Maximum” and “The One
True Minimum”. User-based implementations of Max and Min
objects would thusly be discouraged, and use of “The One True
Implementation” would obviously be encouraged. Ambiguous behavior
resulting from mixing users’ implementations of Max and Min
with “The One True Implementation” should be easy to discover through
variable and/or source code introspection.
Reference Implementation
class _ExtremeType(object):
def __init__(self, cmpr, rep):
object.__init__(self)
self._cmpr = cmpr
self._rep = rep
def __cmp__(self, other):
if isinstance(other, self.__class__) and\
other._cmpr == self._cmpr:
return 0
return self._cmpr
def __repr__(self):
return self._rep
Max = _ExtremeType(1, "Max")
Min = _ExtremeType(-1, "Min")
Results of Test Run:
>>> max(Max, 2**65536)
Max
>>> min(Max, 2**65536)
20035299304068464649790...
(lines removed for brevity)
...72339445587895905719156736L
>>> min(Min, -2**65536)
Min
>>> max(Min, -2**65536)
-2003529930406846464979...
(lines removed for brevity)
...072339445587895905719156736L
Open Issues
As the PEP was rejected, all open issues are now closed and
inconsequential. The module will use the names UniversalMaximum
and UniversalMinimum due to the fact that it would be very
difficult to mistake what each does. For those who require a shorter
name, renaming the singletons during import is suggested:
from extremes import UniversalMaximum as uMax, \
                     UniversalMinimum as uMin
References
[1]
RE: [Python-Dev] Re: Got None. Maybe Some?, Peters, Tim
(https://mail.python.org/pipermail/python-dev/2003-December/041374.html)
[2]
Re: [Python-Dev] Got None. Maybe Some?, van Rossum, Guido
(https://mail.python.org/pipermail/python-dev/2003-December/041352.html)
[3]
RE: [Python-Dev] Got None. Maybe Some?, Peters, Tim
(https://mail.python.org/pipermail/python-dev/2003-December/041332.html)
[4] (1, 2)
[Python-Dev] Re: PEP 326 now online, Reedy, Terry
(https://mail.python.org/pipermail/python-dev/2004-January/041685.html)
[5]
Homework 6, Problem 7, Dillencourt, Michael
(link may not be valid in the future)
(http://www.ics.uci.edu/~dillenco/ics161/hw/hw6.pdf)
[6]
RE: [Python-Dev] PEP 326 now online, Lentvorski, Andrew P., Jr.
(https://mail.python.org/pipermail/python-dev/2004-January/041727.html)
[7] (1, 2)
[Python-Dev] Re: PEP 326 now online, Niemeyer, Gustavo
(https://mail.python.org/pipermail/python-dev/2004-January/042261.html);
[Python-Dev] Re: PEP 326 now online, Carlson, Josiah
(https://mail.python.org/pipermail/python-dev/2004-January/042272.html)
[8] (1, 2)
[Python-Dev] PEP 326 (quick location possibility), van Rossum, Guido
(https://mail.python.org/pipermail/python-dev/2004-January/042306.html)
[9]
[Python-Dev] PEP 326 (quick location possibility), Carlson, Josiah
(https://mail.python.org/pipermail/python-dev/2004-January/042300.html)
[10]
Recommended standard implementation of PEP 326, extremes.py,
Carlson, Josiah
(https://web.archive.org/web/20040410135029/http://www.ics.uci.edu:80/~jcarlson/pep326/extremes.py)
Changes
Added this section.
Added Motivation section.
Changed markup to reStructuredText.
Clarified Abstract, Motivation, Reference Implementation and
Open Issues based on the simultaneous concepts of Max and
Min.
Added two implementations of Dijkstra’s Shortest Path algorithm that
show where Max can be used to remove special cases.
Added an example of use for Min to Motivation.
Added an example and Other Examples subheading.
Modified Reference Implementation to instantiate both items from
a single class/type.
Removed a large number of open issues that are not within the scope
of this PEP.
Replaced an example from Max Examples, changed an example in
A Min Example.
Added some References.
BDFL rejects [8] PEP 326
Copyright
This document has been placed in the public domain.
| Rejected | PEP 326 – A Case for Top and Bottom Values | Standards Track | This PEP proposes two singleton constants that represent a top and
bottom [3] value: Max and Min (or two similarly suggestive
names [4]; see Open Issues). |
PEP 327 – Decimal Data Type
Author:
Facundo Batista <facundo at taniquetil.com.ar>
Status:
Final
Type:
Standards Track
Created:
17-Oct-2003
Python-Version:
2.4
Post-History:
30-Nov-2003, 02-Jan-2004, 29-Jan-2004
Table of Contents
Abstract
Motivation
The problem with binary float
Why floating point?
Why not rational?
So, what do we have?
General Decimal Arithmetic Specification
The Arithmetic Model
Numbers
Context
Default Contexts
Exceptional Conditions
Rounding Algorithms
Rationale
Explicit construction
From int or long
From string
From float
From tuples
From Decimal
Syntax for All Cases
Creating from Context
Implicit construction
From int or long
From string
From float
From Decimal
Use of Context
Python Usability
Documentation
Decimal Attributes
Decimal Methods
Context Attributes
Context Methods
Reference Implementation
References
Copyright
Abstract
The idea is to have a Decimal data type, for every use where decimals
are needed but binary floating point is too inexact.
The Decimal data type will support the Python standard functions and
operations, and must comply with the decimal arithmetic ANSI standard
X3.274-1996 [1].
Decimal will be floating point (as opposed to fixed point) and will
have bounded precision (the precision is the upper limit on the
number of significant digits in a result). However, precision is
user-settable, and a notion of significant trailing zeroes is supported
so that fixed-point usage is also possible.
This work is based on code and test functions written by Eric Price,
Aahz and Tim Peters. Just before Python 2.4a1, the decimal.py
reference implementation was moved into the standard library; along
with the documentation and the test suite, this was the work of
Raymond Hettinger. Much of the explanation in this PEP is taken from
Cowlishaw’s work [2], comp.lang.python and python-dev.
Motivation
Here I’ll explain the reasons why I think a Decimal data type is
needed and why other numeric data types are not enough.
I wanted a Money data type, and after proposing a pre-PEP in
comp.lang.python, the community agreed to have a numeric data type
with the needed arithmetic behaviour, and then build Money over it:
all the considerations about quantity of digits after the decimal
point, rounding, etc., will be handled through Money. It is not the
purpose of this PEP to have a data type that can be used as Money
without further effort.
One of the biggest advantages of implementing a standard is that
someone has already thought out all the tricky cases for you. And
GvR redirected me to just such a standard: Mike Cowlishaw’s General
Decimal Arithmetic specification [2]. This document defines a general
purpose decimal arithmetic. A correct implementation of this
specification will conform to the decimal arithmetic defined in
ANSI/IEEE standard 854-1987, except for some minor restrictions, and
will also provide unrounded decimal arithmetic and integer arithmetic
as proper subsets.
The problem with binary float
In decimal math, there are many numbers that can’t be represented with
a fixed number of decimal digits, e.g. 1/3 = 0.3333333333…….
In base 2 (the way that standard floating point is calculated), 1/2 =
0.1, 1/4 = 0.01, 1/8 = 0.001, etc. Decimal 0.2 equals 2/10 equals
1/5, resulting in the binary fractional number
0.001100110011001… As you can see, the problem is that some decimal
numbers can’t be represented exactly in binary, resulting in small
roundoff errors.
So we need a decimal data type that represents exactly decimal
numbers. Instead of a binary data type, we need a decimal one.
Why floating point?
So we go to decimal, but why floating point?
Floating point numbers use a fixed quantity of digits (precision) to
represent a number, working with an exponent when the number gets too
big or too small. For example, with a precision of 5:
1234 ==> 1234e0
12345 ==> 12345e0
123456 ==> 12346e1
(note that in the last line the number got rounded to fit in five digits).
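A quick sketch of the same behaviour using the decimal module that
eventually landed in the standard library (precision lowered to 5 to
mirror the example; exact repr formatting may vary between versions):
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 5
>>> +Decimal("1234")           # unary plus applies the context
Decimal("1234")
>>> +Decimal("123456")         # rounded to five significant digits
Decimal("1.2346E+5")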
In contrast, we have the example of a long integer with infinite
precision, meaning that you can have the number as big as you want,
and you’ll never lose any information.
In a fixed point number, the position of the decimal point is fixed.
For a fixed point data type, check Tim Peters’ FixedPoint at
SourceForge [4]. I’ll go for floating point because it’s easier to
implement the arithmetic behaviour of the standard, and then you can
implement a fixed point data type over Decimal.
But why can’t we have a floating point number with infinite precision?
It’s not so easy, because of inexact divisions. E.g.: 1/3 =
0.3333333333333… ad infinitum. In this case you would have to store
an infinite number of 3s, which takes too much memory ;).
John Roth proposed to eliminate the division operator and force the
user to use an explicit method, just to avoid this kind of trouble.
This generated adverse reactions in comp.lang.python, as everybody
wants to have support for the / operator in a numeric data type.
With this exposed maybe you’re thinking “Hey! Can we just store the 1
and the 3 as numerator and denominator?”, which takes us to the next
point.
Why not rational?
Rational numbers are stored using two integer numbers, the numerator
and the denominator. This implies that the arithmetic operations
can’t be executed directly (e.g. to add two rational numbers you first
need to calculate the common denominator).
Quoting Alex Martelli:
The performance implications of the fact that summing two
rationals (which take O(M) and O(N) space respectively) gives a
rational which takes O(M+N) memory space is just too troublesome.
There are excellent Rational implementations in both pure Python
and as extensions (e.g., gmpy), but they’ll always be a “niche
market” IMHO. Probably worth PEPping, not worth doing without
Decimal – which is the right way to represent sums of money, a
truly major use case in the real world.
Anyway, if you’re interested in this data type, you maybe will want to
take a look at PEP 239: Adding a Rational Type to Python.
So, what do we have?
The result is a Decimal data type, with bounded precision and floating
point.
Will it be useful? I can’t say it better than Alex Martelli:
Python (out of the box) doesn’t let you have binary floating point
numbers with whatever precision you specify: you’re limited to
what your hardware supplies. Decimal, be it used as a fixed or
floating point number, should suffer from no such limitation:
whatever bounded precision you may specify on number creation
(your memory permitting) should work just as well. Most of the
expense of programming simplicity can be hidden from application
programs and placed in a suitable decimal arithmetic type. As per
http://speleotrove.com/decimal/, a single data type can be
used for integer, fixed-point, and floating-point decimal
arithmetic – and for money arithmetic which doesn’t drive the
application programmer crazy.
There are several uses for such a data type. As I said before, I will
use it as base for Money. In this case the bounded precision is not
an issue; quoting Tim Peters:
A precision of 20 would be way more than enough to account for
total world economic output, down to the penny, since the
beginning of time.
General Decimal Arithmetic Specification
Here I’ll include information and descriptions that are part of the
specification [2] (the structure of the number, the context, etc.).
All the requirements included in this section are not for discussion
(barring typos or other mistakes), as they are in the standard, and
the PEP is just for implementing the standard.
Because of copyright restrictions, I can not copy here explanations
taken from the specification, so I’ll try to explain it in my own
words. I firmly encourage you to read the original specification
document [2] for details or if you have any doubt.
The Arithmetic Model
The specification is based on a decimal arithmetic model, as defined
by the relevant standards: IEEE 854 [3], ANSI X3-274 [1], and the
proposed revision [5] of IEEE 754 [6].
The model has three components:
Numbers: just the values that the operation uses as input or output.
Operations: addition, multiplication, etc.
Context: a set of parameters and rules that the user can select and
which govern the results of operations (for example, the precision
to be used).
Numbers
Numbers may be finite or special values. The former can be
represented exactly. The latter are infinities and undefined values
(such as 0/0).
Finite numbers are defined by three parameters:
Sign: 0 (positive) or 1 (negative).
Coefficient: a non-negative integer.
Exponent: a signed integer, the power of ten of the coefficient
multiplier.
The numerical value of a finite number is given by:
(-1)**sign * coefficient * 10**exponent
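For example, sign 1, coefficient 3225 and exponent -2 denote
(-1)**1 * 3225 * 10**-2 = -32.25.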
Special values are named as follows:
Infinity: a value which is infinitely large. Could be positive or
negative.
Quiet NaN (“qNaN”): represent undefined results (Not a Number).
Does not cause an Invalid operation condition. The sign in a NaN
has no meaning.
Signaling NaN (“sNaN”): also Not a Number, but will cause an
Invalid operation condition if used in any operation.
Context
The context is a set of parameters and rules that the user can select
and which govern the results of operations (for example, the precision
to be used).
The context gets that name because it surrounds the Decimal numbers,
with parts of context acting as input to, and output of, operations.
It’s up to the application to work with one or several contexts,
but definitely the idea is not to get a context per Decimal number.
For example, a typical use would be to set the context’s precision to
20 digits at the start of a program, and never explicitly use context
again.
These definitions don’t affect the internal storage of the Decimal
numbers, just the way that the arithmetic operations are performed.
The context is mainly defined by the following parameters (see
Context Attributes for all context attributes):
Precision: The maximum number of significant digits that can result
from an arithmetic operation (integer > 0). There is no maximum for
this value.
Rounding: The name of the algorithm to be used when rounding is
necessary, one of “round-down”, “round-half-up”, “round-half-even”,
“round-ceiling”, “round-floor”, “round-half-down”, and “round-up”.
See Rounding Algorithms below.
Flags and trap-enablers: Exceptional conditions are grouped into
signals, controllable individually, each consisting of a flag
(boolean, set when the signal occurs) and a trap-enabler (a boolean
that controls behavior). The signals are: “clamped”,
“division-by-zero”, “inexact”, “invalid-operation”, “overflow”,
“rounded”, “subnormal” and “underflow”.
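As a brief sketch of adjusting these parameters (using the module as it
finally landed in the standard library, so the attribute and constant
names may differ slightly from this draft):
>>> from decimal import Decimal, getcontext, ROUND_HALF_UP
>>> ctx = getcontext()          # the current thread's context
>>> ctx.prec = 6
>>> ctx.rounding = ROUND_HALF_UP
>>> Decimal("1") / Decimal("3") # operations honor the context
Decimal("0.333333")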
Default Contexts
The specification defines two default contexts, which should be easily
selectable by the user.
Basic Default Context:
flags: all set to 0
trap-enablers: inexact, rounded, and subnormal are set to 0; all
others are set to 1
precision: is set to 9
rounding: is set to round-half-up
Extended Default Context:
flags: all set to 0
trap-enablers: all set to 0
precision: is set to 9
rounding: is set to round-half-even
Exceptional Conditions
The table below lists the exceptional conditions that may arise during
the arithmetic operations, the corresponding signal, and the defined
result. For details, see the specification [2].
Condition           Signal              Result
---------           ------              ------
Clamped             clamped             see spec [2]
Division by zero    division-by-zero    [sign,inf]
Inexact             inexact             unchanged
Invalid operation   invalid-operation   [0,qNaN] (or [s,qNaN] or
                                        [s,qNaN,d] when the cause is
                                        a signaling NaN)
Overflow            overflow            depends on the rounding mode
Rounded             rounded             unchanged
Subnormal           subnormal           unchanged
Underflow           underflow           see spec [2]
Note: when the standard talks about “Insufficient storage”, since
this is implementation-specific behaviour about not having enough
storage to keep the internals of the number, this implementation will
raise MemoryError.
Regarding Overflow and Underflow, there’s been a long discussion in
python-dev about artificial limits. The general consensus is to keep
the artificial limits only if there are important reasons to do that.
Tim Peters gives us three:
…eliminating bounds on exponents effectively means overflow
(and underflow) can never happen. But overflow is a valuable
safety net in real life fp use, like a canary in a coal mine,
giving danger signs early when a program goes insane.
Virtually all implementations of 854 use (and as IBM’s standard
even suggests) “forbidden” exponent values to encode non-finite
numbers (infinities and NaNs). A bounded exponent can do this at
virtually no extra storage cost. If the exponent is unbounded,
then additional bits have to be used instead. This cost remains
hidden until more time- and space- efficient implementations are
attempted.
Big as it is, the IBM standard is a tiny start at supplying a
complete numeric facility. Having no bound on exponent size will
enormously complicate the implementations of, e.g., decimal sin()
and cos() (there’s then no a priori limit on how many digits of
pi effectively need to be known in order to perform argument
reduction).
Edward Loper gives us an example of when the limits are to be crossed:
probabilities.
That said, Robert Brewer and Andrew Lentvorski want the limits to be
easily modifiable by the users. Actually, this is quite possible:
>>> d1 = Decimal("1e999999999") # at the exponent limit
>>> d1
Decimal("1E+999999999")
>>> d1 * 10 # exceed the limit, got infinity
Traceback (most recent call last):
File "<pyshell#3>", line 1, in ?
d1 * 10
...
...
Overflow: above Emax
>>> getcontext().Emax = 1000000000 # increase the limit
>>> d1 * 10 # does not exceed any more
Decimal("1.0E+1000000000")
>>> d1 * 100 # exceed again
Traceback (most recent call last):
File "<pyshell#3>", line 1, in ?
d1 * 100
...
...
Overflow: above Emax
Rounding Algorithms
round-down: The discarded digits are ignored; the result is
unchanged (round toward 0, truncate):
1.123 --> 1.12
1.128 --> 1.12
1.125 --> 1.12
1.135 --> 1.13
round-half-up: If the discarded digits represent greater than or
equal to half (0.5) then the result should be incremented by 1;
otherwise the discarded digits are ignored:
1.123 --> 1.12
1.128 --> 1.13
1.125 --> 1.13
1.135 --> 1.14
round-half-even: If the discarded digits represent greater than
half (0.5) then the result coefficient is incremented by 1; if they
represent less than half, then the result is not adjusted; otherwise
the result is unaltered if its rightmost digit is even, or incremented
by 1 if its rightmost digit is odd (to make an even digit):
1.123 --> 1.12
1.128 --> 1.13
1.125 --> 1.12
1.135 --> 1.14
round-ceiling: If all of the discarded digits are zero or if the
sign is negative the result is unchanged; otherwise, the result is
incremented by 1 (round toward positive infinity):
1.123 --> 1.13
1.128 --> 1.13
-1.123 --> -1.12
-1.128 --> -1.12
round-floor: If all of the discarded digits are zero or if the
sign is positive the result is unchanged; otherwise, the absolute
value of the result is incremented by 1 (round toward negative
infinity):
1.123 --> 1.12
1.128 --> 1.12
-1.123 --> -1.13
-1.128 --> -1.13
round-half-down: If the discarded digits represent greater than
half (0.5) then the result is incremented by 1; otherwise the
discarded digits are ignored:
1.123 --> 1.12
1.128 --> 1.13
1.125 --> 1.12
1.135 --> 1.13
round-up: If all of the discarded digits are zero the result is
unchanged, otherwise the result is incremented by 1 (round away from
0):
1.123 --> 1.13
1.128 --> 1.13
1.125 --> 1.13
1.135 --> 1.14
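The 1.125 rows of the examples above can be reproduced with quantize()
and an explicit rounding argument (a sketch using the standard-library
module; the constant names follow the final API):
>>> from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP, ROUND_UP
>>> d = Decimal("1.125")
>>> d.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
Decimal("1.12")
>>> d.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
Decimal("1.13")
>>> d.quantize(Decimal("0.01"), rounding=ROUND_UP)
Decimal("1.13")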
Rationale
I must separate the requirements into two sections. The first is to
comply with the ANSI standard. All the requirements for this are
specified in Mike Cowlishaw’s work [2]. He also provided a
very large suite of test cases.
The second section of requirements (standard Python functions support,
usability, etc.) is detailed from here, where I’ll include all the
decisions made and why, and all the subjects still being discussed.
Explicit construction
The explicit construction does not get affected by the context (there
is no rounding, no limits by the precision, etc.), because the context
affects just operations’ results. The only exception to this is when
you’re Creating from Context.
From int or long
There’s no loss and no need to specify any other information:
Decimal(35)
Decimal(-124)
From string
Strings containing Python decimal integer literals and Python float
literals will be supported. In this transformation there is no loss
of information, as the string is directly converted to Decimal (there
is not an intermediate conversion through float):
Decimal("-12")
Decimal("23.2e-7")
Also, you can construct in this way all special values (Infinity and
Not a Number):
Decimal("Inf")
Decimal("NaN")
From float
The initial discussion on this item was what should
happen when passing floating point to the constructor:
1. Decimal(1.1) == Decimal('1.1')
2. Decimal(1.1) ==
   Decimal('110000000000000008881784197001252...e-51')
3. an exception is raised
Several people alleged that (1) is the better option here, because
it’s what you expect when writing Decimal(1.1). And quoting John
Roth, it’s easy to implement:
It’s not at all difficult to find where the actual number ends and
where the fuzz begins. You can do it visually, and the algorithms
to do it are quite well known.
But if I really want my number to be
Decimal('110000000000000008881784197001252...e-51'), why can’t I
write Decimal(1.1)? Why should I expect Decimal to be “rounding”
it? Remember that 1.1 is binary floating point, so I can
predict the result. It’s not intuitive to a beginner, but that’s the
way it is.
Anyway, Paul Moore showed that (1) can’t work, because:
(1) says D(1.1) == D('1.1')
but 1.1 == 1.1000000000000001
so D(1.1) == D(1.1000000000000001)
together: D(1.1000000000000001) == D('1.1')
which is wrong, because if I write Decimal('1.1') it is exact, not
D(1.1000000000000001). He also proposed to have an explicit
conversion to float. bokr says you need to put the precision in the
constructor and mwilson agreed:
d = Decimal (1.1, 1) # take float value to 1 decimal place
d = Decimal (1.1) # gets `places` from pre-set context
But Alex Martelli says that:
Constructing with some specified precision would be fine. Thus,
I think “construction from float with some default precision” runs
a substantial risk of tricking naive users.
So, the accepted solution through c.l.p is that you can not call Decimal
with a float. Instead you must use a method: Decimal.from_float(). The
syntax:
Decimal.from_float(floatNumber, [decimal_places])
where floatNumber is the float number origin of the construction
and decimal_places are the number of digits after the decimal
point where you apply a round-half-up rounding, if any. In this way
you can do, for example:
Decimal.from_float(1.1, 2): The same as doing Decimal('1.1').
Decimal.from_float(1.1, 16): The same as doing Decimal('1.1000000000000001').
Decimal.from_float(1.1): The same as doing Decimal('1100000000000000088817841970012523233890533447265625e-51').
Based on later discussions, it was decided to omit from_float() from the
API for Py2.4. Several ideas contributed to the thought process:
Interactions between decimal and binary floating point force the user to
deal with tricky issues of representation and round-off. Avoidance of those
issues is a primary reason for having the module in the first place.
The first release of the module should focus on that which is safe, minimal,
and essential.
While theoretically nice, real world use cases for interactions between floats
and decimals are lacking. Java included float/decimal conversions to handle
an obscure case where calculations are best performed in decimal even though
a legacy data structure requires the inputs and outputs to be stored in
binary floating point.
If the need arises, users can use string representations as an intermediate
type. The advantage of this approach is that it makes explicit the
assumptions about precision and representation (no wondering what is going
on under the hood).
The Java docs for BigDecimal(double val) reflected their experiences with
the constructor:
The results of this constructor can be somewhat
unpredictable and its use is generally not recommended.
From tuples
Aahz suggested to construct from tuples: it’s easier
to implement eval()’s round trip and “someone who has numeric
values representing a Decimal does not need to convert them to a
string.”
The structure will be a tuple of three elements: sign, number and
exponent. The sign is 1 or 0, the number is a tuple of decimal digits
and the exponent is a signed int or long:
Decimal((1, (3, 2, 2, 5), -2)) # for -32.25
Of course, you can construct in this way all special values:
Decimal( (0, (0,), 'F') ) # for Infinity
Decimal( (0, (0,), 'n') ) # for Not a Number
From Decimal
No mystery here, just a copy.
Syntax for All Cases
Decimal(value1)
Decimal.from_float(value2, [decimal_places])
where value1 can be int, long, string, 3-tuple or Decimal,
value2 can only be float, and decimal_places is an optional
non negative int.
Creating from Context
This item arose in python-dev from two sources in parallel. Ka-Ping
Yee proposes to pass the context as an argument at instance creation
(he wants the context he passes to be used only in creation time: “It
would not be persistent”). Tony Meyer asks from_string to honor the
context if it receives a parameter “honour_context” with a True value.
(I don’t like it, because the documentation specifies that the context
be honored, and I don’t want the method’s compliance with the
specification to depend on the value of an argument.)
Tim Peters gives us a reason to have a creation that uses context:
In general number-crunching, literals may be given to high
precision, but that precision isn’t free and usually isn’t
needed
Casey Duncan wants to use another method, not a bool arg:
I find boolean arguments a general anti-pattern, especially given
we have class methods. Why not use an alternate constructor like
Decimal.rounded_to_context(“3.14159265”).
In the process of deciding the syntax of that, Tim came up with a
better idea: he proposes not to have a method in Decimal to create
with a different context, but having instead a method in Context to
create a Decimal instance. Basically, instead of:
D.using_context(number, context)
it will be:
context.create_decimal(number)
From Tim:
While all operations in the spec except for the two to-string
operations use context, no operations in the spec support an
optional local context. That the Decimal() constructor ignores
context by default is an extension to the spec. We must supply a
context-honoring from-string operation to meet the spec. I
recommend against any concept of “local context” in any operation
– it complicates the model and isn’t necessary.
So, we decided to use a context method to create a Decimal that will
use (only to be created) that context in particular (for further
operations it will use the context of the thread). But, a method with
what name?
Tim Peters proposes three methods to create from diverse sources
(from_string, from_int, from_float). I proposed to use one method,
create_decimal(), without caring about the data type. Michael
Chermside: “The name just fits my brain. The fact that it uses the
context is obvious from the fact that it’s Context method”.
The community agreed with that. I think that it’s OK because a newbie
will not be using the creation method from Context (the separate
method in Decimal to construct from float is just to prevent newbies
from encountering binary floating point issues).
So, in short, if you want to create a Decimal instance using a
particular context (that will be used just at creation time and not
any further), you’ll have to use a method of that context:
# n is any datatype accepted in Decimal(n) plus float
mycontext.create_decimal(n)
Example:
>>> # create a standard decimal instance
>>> Decimal("11.2233445566778899")
Decimal("11.2233445566778899")
>>>
>>> # create a decimal instance using the thread context
>>> thread_context = getcontext()
>>> thread_context.prec
28
>>> thread_context.create_decimal("11.2233445566778899")
Decimal("11.2233445566778899")
>>>
>>> # create a decimal instance using other context
>>> other_context = thread_context.copy()
>>> other_context.prec = 4
>>> other_context.create_decimal("11.2233445566778899")
Decimal("11.22")
Implicit construction
As the implicit construction is the consequence of an operation, it
will be affected by the context as is detailed in each point.
John Roth suggested that “The other type should be handled in the same
way the decimal() constructor would handle it”. But Alex Martelli
thinks that
this total breach with Python tradition would be a terrible
mistake. 23+”43” is NOT handled in the same way as 23+int(“45”),
and a VERY good thing that is too. It’s a completely different
thing for a user to EXPLICITLY indicate they want construction
(conversion) and to just happen to sum two objects one of which by
mistake could be a string.
So, here I define the behaviour again for each data type.
From int or long
An int or long is treated like a Decimal explicitly constructed from
Decimal(str(x)) in the current context (meaning that the to-string rules
for rounding are applied and the appropriate flags are set). This
guarantees that expressions like Decimal('1234567') + 13579 match
the mental model of Decimal('1234567') + Decimal('13579'). That
model works because all integers are representable as strings without
representation error.
From string
Everybody agrees to raise an exception here.
From float
Aahz is strongly opposed to interacting with floats, suggesting an
explicit conversion:
The problem is that Decimal is capable of greater precision,
accuracy, and range than float.
The example of the valid python expression, 35 + 1.1, seems to suggest
that Decimal(35) + 1.1 should also be valid. However, a closer look
shows that it only demonstrates the feasibility of integer to floating
point conversions. Hence, the correct analog for decimal floating point
is 35 + Decimal(1.1). Both coercions, int-to-float and int-to-Decimal,
can be done without incurring representation error.
The question of how to coerce between binary and decimal floating point
is more complex. I proposed allowing the interaction with float,
making an exact conversion and raising ValueError if it exceeds the
precision in the current context (this is maybe too tricky, because
for example with a precision of 9, Decimal(35) + 1.2 is OK but
Decimal(35) + 1.1 raises an error).
This turned out to be too tricky. So tricky, in fact, that c.l.p agreed
to raise TypeError in this case: you cannot mix Decimal and float.
From Decimal
There isn’t any issue here.
Use of Context
In the last pre-PEP I said that “The Context must be omnipresent,
meaning that changes to it affects all the current and future Decimal
instances”. I was wrong. In response, John Roth said:
The context should be selectable for the particular usage. That
is, it should be possible to have several different contexts in
play at one time in an application.
In comp.lang.python, Aahz explained that the idea is to have a
“context per thread”. So, all the instances of a thread belong to a
context, and you can change a context in thread A (and the behaviour
of the instances of that thread) without changing anything in thread B.
Also, and again correcting me, he said:
(the) Context applies only to operations, not to Decimal
instances; changing the Context does not affect existing instances
if there are no operations on them.
Arguing about special cases when there’s need to perform operations
with other rules that those of the current context, Tim Peters said
that the context will have the operations as methods. This way, the
user “can create whatever private context object(s) it needs, and
spell arithmetic as explicit method calls on its private context
object(s), so that the default thread context object is neither
consulted nor modified”.
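A sketch of such a private context (constructor and method names as in
the final standard-library decimal module):
>>> import decimal
>>> private = decimal.Context(prec=4)
>>> private.divide(decimal.Decimal("1"), decimal.Decimal("3"))
Decimal("0.3333")
>>> decimal.getcontext().prec   # default thread context is untouched
28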
Python Usability
Decimal should support the basic arithmetic (+, -, *, /, //, **,
%, divmod) and comparison (==, !=, <, >, <=, >=, cmp)
operators in the following cases (check Implicit Construction to
see what types could OtherType be, and what happens in each case):
Decimal op Decimal
Decimal op otherType
otherType op Decimal
Decimal op= Decimal
Decimal op= otherType
Decimal should support unary operators (-, +, abs).
repr() should round trip, meaning that:
m = Decimal(...)
m == eval(repr(m))
Decimal should be immutable.
Decimal should support the built-in methods:
min, max
float, int, long
str, repr
hash
bool (0 is false, otherwise true)
There’s been some discussion in python-dev about the behaviour of
hash(). The community agrees that if the values are the same, the
hashes of those values should also be the same. So, while Decimal(25)
== 25 is True, hash(Decimal(25)) should be equal to hash(25).
The detail is that you can NOT compare Decimal to floats or strings,
so we should not worry about them giving the same hashes. In short:
hash(n) == hash(Decimal(n)) # Only if n is int, long, or Decimal
Regarding str() and repr() behaviour, Ka-Ping Yee proposes that repr()
have the same behaviour as str() and Tim Peters proposes that str()
behave like the to-scientific-string operation from the Spec.
This is possible, because (from Aahz): “The string form already
contains all the necessary information to reconstruct a Decimal
object”.
And it also complies with the Spec; Tim Peters:
There’s no requirement to have a method named “to_sci_string”,
the only requirement is that some way to spell to-sci-string’s
functionality be supplied. The meaning of to-sci-string is
precisely specified by the standard, and is a good choice for both
str(Decimal) and repr(Decimal).
Documentation
This section explains all the public methods and attributes of Decimal
and Context.
Decimal Attributes
Decimal has no public attributes. The internal information is stored
in slots and should not be accessed by end users.
Decimal Methods
Following are the conversion and arithmetic operations defined in the
Spec, and how that functionality can be achieved with the actual
implementation.
to-scientific-string: Use builtin function str():
>>> d = Decimal('123456789012.345')
>>> str(d)
'1.23456789E+11'
to-engineering-string: Use method to_eng_string():
>>> d = Decimal('123456789012.345')
>>> d.to_eng_string()
'123.456789E+9'
to-number: Use Context method create_decimal(). The standard
constructor or from_float() constructor cannot be used because
these do not use the context (as is specified in the Spec for this
conversion).
abs: Use builtin function abs():
>>> d = Decimal('-15.67')
>>> abs(d)
Decimal('15.67')
add: Use operator +:
>>> d = Decimal('15.6')
>>> d + 8
Decimal('23.6')
subtract: Use operator -:
>>> d = Decimal('15.6')
>>> d - 8
Decimal('7.6')
compare: Use method compare(). This method (and not the
built-in function cmp()) should only be used when dealing with
special values:
>>> d = Decimal('-15.67')
>>> nan = Decimal('NaN')
>>> d.compare(23)
'-1'
>>> d.compare(nan)
'NaN'
>>> cmp(d, 23)
-1
>>> cmp(d, nan)
1
divide: Use operator /:
>>> d = Decimal('-15.67')
>>> d / 2
Decimal('-7.835')
divide-integer: Use operator //:
>>> d = Decimal('-15.67')
>>> d // 2
Decimal('-7')
max: Use method max(). Only use this method (and not the
built-in function max()) when dealing with special values:
>>> d = Decimal('15')
>>> nan = Decimal('NaN')
>>> d.max(8)
Decimal('15')
>>> d.max(nan)
Decimal('NaN')
min: Use method min(). Only use this method (and not the
built-in function min()) when dealing with special values:
>>> d = Decimal('15')
>>> nan = Decimal('NaN')
>>> d.min(8)
Decimal('8')
>>> d.min(nan)
Decimal('NaN')
minus: Use unary operator -:
>>> d = Decimal('-15.67')
>>> -d
Decimal('15.67')
plus: Use unary operator +:
>>> d = Decimal('-15.67')
>>> +d
Decimal('-15.67')
multiply: Use operator *:
>>> d = Decimal('5.7')
>>> d * 3
Decimal('17.1')
normalize: Use method normalize():
>>> d = Decimal('123.45000')
>>> d.normalize()
Decimal('123.45')
>>> d = Decimal('120.00')
>>> d.normalize()
Decimal('1.2E+2')
quantize: Use method quantize():
>>> d = Decimal('2.17')
>>> d.quantize(Decimal('0.001'))
Decimal('2.170')
>>> d.quantize(Decimal('0.1'))
Decimal('2.2')
remainder: Use operator %:
>>> d = Decimal('10')
>>> d % 3
Decimal('1')
>>> d % 6
Decimal('4')
remainder-near: Use method remainder_near():
>>> d = Decimal('10')
>>> d.remainder_near(3)
Decimal('1')
>>> d.remainder_near(6)
Decimal('-2')
round-to-integral-value: Use method to_integral():
>>> d = Decimal('-123.456')
>>> d.to_integral()
Decimal('-123')
same-quantum: Use method same_quantum():
>>> d = Decimal('123.456')
>>> d.same_quantum(Decimal('0.001'))
True
>>> d.same_quantum(Decimal('0.01'))
False
square-root: Use method sqrt():
>>> d = Decimal('123.456')
>>> d.sqrt()
Decimal('11.1110756')
power: Use operator **:
>>> d = Decimal('12.56')
>>> d ** 2
Decimal('157.7536')
Following are other methods and why they exist:
adjusted(): Returns the adjusted exponent. This concept is
defined in the Spec: the adjusted exponent is the value of the
exponent of a number when that number is expressed as though in
scientific notation with one digit before any decimal point:
>>> d = Decimal('12.56')
>>> d.adjusted()
1
from_float(): Class method to create instances from float data
types:
>>> d = Decimal.from_float(12.35)
>>> d
Decimal('12.3500000')
as_tuple(): Show the internal structure of the Decimal, the
triple tuple. This method is not required by the Spec, but Tim
Peters proposed it and the community agreed to have it (it’s useful
for developing and debugging):
>>> d = Decimal('123.4')
>>> d.as_tuple()
(0, (1, 2, 3, 4), -1)
>>> d = Decimal('-2.34e5')
>>> d.as_tuple()
(1, (2, 3, 4), 3)
Context Attributes
These are the attributes that can be changed to modify the context.
prec (int): the precision:
>>> c.prec
9
rounding (str): rounding type (how to round):
>>> c.rounding
'half_even'
trap_enablers (dict): if trap_enablers[exception] = 1, then an
exception is raised when it is caused:
>>> c.trap_enablers[Underflow]
0
>>> c.trap_enablers[Clamped]
0
flags (dict): when an exception is caused, flags[exception] is
incremented (whether or not the trap_enabler is set). Should be
reset by the user of the Decimal instance:
>>> c.flags[Underflow]
0
>>> c.flags[Clamped]
0
Emin (int): minimum exponent:
>>> c.Emin
-999999999
Emax (int): maximum exponent:
>>> c.Emax
999999999
capitals (int): boolean flag to use ‘E’ (True/1) or ‘e’
(False/0) in the string (for example, ‘1.32e+2’ or ‘1.32E+2’):
>>> c.capitals
1
Context Methods
The following methods comply with Decimal functionality from the Spec.
Be aware that the operations that are called through a specific
context use that context and not the thread context.
To use these methods, take note that the syntax changes when the
operator is binary or unary, for example:
>>> mycontext.abs(Decimal('-2'))
'2'
>>> mycontext.multiply(Decimal('2.3'), 5)
'11.5'
So, the following are the Spec operations and conversions and how to
achieve them through a context (where d is a Decimal instance and
n a number that can be used in an Implicit construction):
to-scientific-string: to_sci_string(d)
to-engineering-string: to_eng_string(d)
to-number: create_decimal(number), see Explicit construction
for number.
abs: abs(d)
add: add(d, n)
subtract: subtract(d, n)
compare: compare(d, n)
divide: divide(d, n)
divide-integer: divide_int(d, n)
max: max(d, n)
min: min(d, n)
minus: minus(d)
plus: plus(d)
multiply: multiply(d, n)
normalize: normalize(d)
quantize: quantize(d, d)
remainder: remainder(d)
remainder-near: remainder_near(d)
round-to-integral-value: to_integral(d)
same-quantum: same_quantum(d, d)
square-root: sqrt(d)
power: power(d, n)
The divmod(d, n) method supports decimal functionality through
Context.
These are methods that return useful information from the Context:
Etiny(): Minimum exponent considering precision.
>>> c.Emin
-999999999
>>> c.Etiny()
-1000000007
Etop(): Maximum exponent considering precision.
>>> c.Emax
999999999
>>> c.Etop()
999999991
copy(): Returns a copy of the context.
Reference Implementation
As of Python 2.4-alpha, the code has been checked into the standard
library. The latest version is available from:
http://svn.python.org/view/python/trunk/Lib/decimal.py
The test cases are here:
http://svn.python.org/view/python/trunk/Lib/test/test_decimal.py
References
[1] (1, 2)
ANSI standard X3.274-1996 (Programming Language REXX):
http://www.rexxla.org/Standards/ansi.html
[2] (1, 2, 3, 4, 5, 6, 7, 8)
General Decimal Arithmetic specification (Cowlishaw):
http://speleotrove.com/decimal/decarith.html (related
documents and links at http://speleotrove.com/decimal/)
[3]
ANSI/IEEE standard 854-1987 (Radix-Independent Floating-Point
Arithmetic):
http://www.cs.berkeley.edu/~ejr/projects/754/private/drafts/854-1987/dir.html
(unofficial text; official copies can be ordered from
http://standards.ieee.org/catalog/ordering.html)
[4]
Tim Peters’ FixedPoint at SourceForge:
http://fixedpoint.sourceforge.net/
[5]
IEEE 754 revision:
http://grouper.ieee.org/groups/754/revision.html
[6]
IEEE 754 references:
http://babbage.cs.qc.edu/courses/cs341/IEEE-754references.html
Copyright
This document has been placed in the public domain.
| Final | PEP 327 – Decimal Data Type | Standards Track | The idea is to have a Decimal data type, for every use where decimals
are needed but binary floating point is too inexact. |